In some embodiments, a single instruction is provided that has an opcode, a first field to represent a packed data source/destination operand, a second field to represent a first packed data source operand, and a third field to represent a second packed data source operand. Packed data elements of the first and second packed data source operands are of a first size and packed data elements of the packed data source/destination operand are of a second size greater than the first size. In response to the single instruction, execution circuitry of an apparatus, according to the opcode of the single instruction, for each packed data element position of the packed data source/destination operand is configured to: sign extend a plurality of packed data words from a corresponding packed data element position of the first packed data source operand; sign extend a plurality of packed data words from a corresponding packed data element position of the second packed data source operand; multiply each of the plurality of sign extended packed data words from a corresponding packed data element position of the first packed data source operand with a corresponding one of the plurality of sign extended packed data words from a corresponding packed data element position of the second packed data source operand to result in a plurality of results; add the plurality of results with a packed data element of the second size of a corresponding packed data element position of the packed data source/destination operand to result in an addition result; and store the addition result in the corresponding packed data element position of the packed data source/destination operand.
1. An apparatus comprising:
decoder logic configured to decode a single instruction having an opcode, a first field to represent a packed data source/destination operand, a second field to represent a first packed data source operand, and a third field to represent a second packed data source operand, wherein packed data elements of the first and second packed data source operands are of a first size and packed data elements of the packed data source/destination operand are of a second size greater than the first size;
a register file having a plurality of packed data registers to store one or more of the packed data source/destination operand, the first packed data source operand, and the second packed data source operand; and
execution logic coupled to the decoder logic and the register file, wherein in response to the decoded single instruction, the execution logic, according to the opcode of the single instruction, for each packed data element position of the packed data source/destination operand is configured to:
sign extend a plurality of packed data words from a corresponding packed data element position of the first packed data source operand;
sign extend a plurality of packed data words from a corresponding packed data element position of the second packed data source operand;
multiply each of the plurality of sign extended packed data words from a corresponding packed data element position of the first packed data source operand with a corresponding one of the plurality of sign extended packed data words from a corresponding packed data element position of the second packed data source operand to result in a plurality of results;
add the plurality of results with a packed data element of the second size of a corresponding packed data element position of the packed data source/destination operand to result in an addition result; and
store the addition result in the corresponding packed data element position of the packed data source/destination operand.
2. The apparatus of claim 1, wherein the execution logic is configured to suppress a memory fault.
3. The apparatus of claim 1 or 2, wherein when the single instruction further includes another field for a write mask, the execution logic is configured to perform a merging operation.
4. The apparatus of any of claims 1 to 3, wherein the execution logic is configured to sign extend the plurality of packed data words from the first packed data source operand, the plurality of packed data words from the first packed data source operand comprising signed words.
5. The apparatus of any of claims 1 to 4, wherein the execution logic is configured to generate the addition result comprising a doubleword result.
6. The apparatus of any of claims 1 to 5, wherein the execution logic, when a width of the packed data source/destination operand is 128 bits, is configured to perform 4 iterations of the multiply, the add, and the store.
7. The apparatus of any of claims 1 to 6, wherein the execution logic, when a width of the packed data source/destination operand is 256 bits, is configured to perform 8 iterations of the multiply, the add, and the store.
8. A method comprising:
decoding, in a decoder of a processor, a single instruction having an opcode, a first field to represent a packed data source/destination operand, a second field to represent a first packed data source operand, and a third field to represent a second packed data source operand, wherein packed data elements of the first and second packed data source operands are of a first size and packed data elements of the packed data source/destination operand are of a second size greater than the first size; and
executing, in execution logic of the processor coupled to the decoder, according to the opcode of the single instruction to, for each packed data element position of the packed data source/destination operand:
sign extend a plurality of packed data words from a corresponding packed data element position of the first packed data source operand;
sign extend a plurality of packed data words from a corresponding packed data element position of the second packed data source operand;
multiply each of the plurality of sign extended packed data words from a corresponding packed data element position of the first packed data source operand with a corresponding one of the plurality of sign extended packed data words from a corresponding packed data element position of the second packed data source operand to result in a plurality of results;
add the plurality of results with a packed data element of the second size of a corresponding packed data element position of the packed data source/destination operand to result in an addition result; and
store the addition result in the corresponding packed data element position of the packed data source/destination operand.
9. The method of claim 8, wherein the executing further comprises suppressing a memory fault.
10. The method of any of claims 8 to 9, wherein the executing further comprises performing a merging operation when the single instruction further includes another field for a write mask.
11. The method of any of claims 8 to 10, wherein the executing further comprises sign extending the plurality of packed data words from the first packed data source operand, the plurality of packed data words from the first packed data source operand comprising signed words.
12. The method of any of claims 8 to 11, wherein the executing further comprises generating the addition result comprising a doubleword.
13. The method of any of claims 8 to 12, wherein the executing further comprises, when a width of the packed data source/destination operand is 128 bits, performing 4 iterations of the multiply, the add, and the store.
14. The method of any of claims 8 to 13, wherein the executing further comprises, when a width of the packed data source/destination operand is 256 bits, performing 8 iterations of the multiply, the add, and the store.
15. A computer program product comprising instructions which, when the program is executed by a processor, cause the processor to carry out the method of any of claims 8 to 14.
16. A computer-readable data carrier having stored thereon the computer program product of claim 15.
17. A data carrier signal carrying the computer program product of claim 15.
18. A system comprising:
a processor comprising the apparatus of any of claims 1 to 7; and
a dynamic random access memory coupled to the processor.
FIELD OF INVENTION

The field of invention relates generally to computer processor architecture, and, more specifically, to instructions which when executed cause a particular result.

BACKGROUND

A common operation in linear algebra is a multiply accumulate operation (e.g., c = c + a ∗ b). The multiply accumulate is typically a sub-operation in a stream of operations, for instance, a dot product between two vectors, which could also be a single product of a column and a row in a matrix multiply. For example:

C = 0
For (I)
C += A[I] ∗ B[I]

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

Figure 1 illustrates an exemplary execution of a fused multiply accumulate instruction that uses different sized operands according to an embodiment;
Figure 2 illustrates power-of-two sized SIMD implementations wherein the accumulators use input sizes that are larger than the inputs to the multipliers according to an embodiment;
Figure 3 illustrates an embodiment of hardware to process an instruction such as a fused multiply accumulate instruction;
Figure 4 illustrates an embodiment of a method performed by a processor to process a fused multiply accumulate instruction;
Figure 5 illustrates an embodiment of a subset of the execution of a fused multiply accumulate;
Figure 6 illustrates an embodiment of pseudo code for implementing this instruction in hardware;
Figure 7 illustrates an embodiment of a subset of the execution of a fused multiply accumulate;
Figure 8 illustrates an embodiment of pseudo code for implementing this instruction in hardware;
Figure 9 illustrates an embodiment of a subset of the execution of a fused multiply accumulate;
Figure 10 illustrates an embodiment of pseudo code for implementing this instruction in hardware;
Figure 11 illustrates an embodiment of a subset of the execution of a fused multiply accumulate;
Figure 12 illustrates an embodiment of pseudo code for implementing this instruction in hardware;
Figure 13A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention;
Figure 13B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention;
Figure 14A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention;
Figure 14B is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the full opcode field according to one embodiment of the invention;
Figure 14C is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the register index field according to one embodiment of the invention;
Figure 14D is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the augmentation operation field according to one embodiment of the invention;
Figure 15 is a block diagram of a register architecture according to one embodiment of the invention;
Figure 16A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;
Figure 16B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;
Figure 17A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1702 and with its local subset of the Level 2 (L2) cache 1704, according to embodiments of the invention;
Figure 17B is an expanded view of part of the processor core in Figure 17A according to embodiments of the invention;
Figure 18 is a block diagram of a processor 1800 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention;
Figure 19 shows a block diagram of a system in accordance with one embodiment of the present invention;
Figure 20 is a block diagram of a first more specific exemplary system in accordance with an embodiment of the present invention;
Figure 21 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present invention;
Figure 22 is a block diagram of a SoC in accordance with an embodiment of the present invention; and
Figure 23 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.

References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. 
Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

In processing large data sets, the memory and computation density can be increased by sizing the datatypes as small as possible. If the input terms come from sensor data, then 8- or 16-bit integer data may be expected as inputs. Neural network calculations, which also can be coded to match this dense format, typically have 'small' numbers as input terms. However, the accumulator is summing products, implying that the accumulator should tolerate two times the number of bits of the input terms (the nature of multiplication) and potentially much more in order to avoid overflow or saturation at any point in the computation.

Detailed herein are embodiments that attempt to keep the input data size small and sum to a larger accumulator in a chain of fused multiply accumulate (FMA) operations.

Figure 1 illustrates an exemplary execution of a fused multiply accumulate instruction that uses different sized operands according to an embodiment. A first source 101 (e.g., a SIMD or vector register) and a second source 103 store "half-sized" packed data elements with respect to a third source 105 (e.g., a single instruction, multiple data (SIMD) or vector register) that stores full-sized packed data elements used for accumulation. Any set of values where the packed data element sizes are related in this manner are supportable.

As shown, values stored in packed data elements of the same position of the first and second sources 101 and 103 are multiplied together. For example, A0∗B0, A1∗B1, etc. The results of two such "half-sized" packed data element multiplications are added to a corresponding "full-sized" packed data element from the third source 105. 
For example, A0∗B0 + A1∗B1 + C0, etc. The result is stored in a destination 107 (e.g., a SIMD register) that has packed data element sizes that are at least "full-sized." In some embodiments, the third source 105 and the destination 107 are the same.

Figure 2 illustrates power-of-two sized SIMD implementations wherein the accumulators use input sizes that are larger than the inputs to the multipliers according to an embodiment. Note the source (to the multipliers) and accumulator values may be signed or unsigned values. For an accumulator having 2X input sizes (in other words, the accumulator input value is twice the size of the packed data element sizes of the sources), table 201 illustrates different configurations. For byte sized sources, the accumulator uses word or half-precision floating-point (HPFP) values that are 16-bit in size. For word sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32-bit in size. For SPFP or 32-bit integer sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64-bit in size. Using Figure 1 as an example, when the packed data element sizes of source 1 101 and source 2 103 are 8 bits, then the accumulator will use 16-bit sized data elements from source 3 105. When the packed data element sizes of source 1 101 and source 2 103 are 16 bits, then the accumulator will use 32-bit sized data elements from source 3 105. When the packed data element sizes of source 1 101 and source 2 103 are 32 bits, then the accumulator will use 64-bit sized data elements from source 3 105.

For an accumulator having 4X input sizes (in other words, the accumulator input value is four times the size of the packed data element sizes of the sources), table 203 illustrates different configurations. For byte sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32-bit in size. 
For word sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64-bit in size. Using Figure 1 as an example, when the packed data element sizes of source 1 101 and source 2 103 are 8 bits, then the accumulator will use 32-bit sized data elements from source 3 105. When the packed data element sizes of source 1 101 and source 2 103 are 16 bits, then the accumulator will use 64-bit sized data elements from source 3 105.

For an accumulator having 8X input sizes (in other words, the accumulator input value is eight times the size of the packed data element sizes of the sources), table 205 illustrates a configuration. For byte sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64-bit in size. Using Figure 1 as an example, when the packed data element sizes of source 1 101 and source 2 103 are 8 bits, then the accumulator will use 64-bit sized data elements from source 3 105.

Detailed herein are embodiments of instructions and circuitry for fused multiply accumulate. In some embodiments, the fused multiply accumulate instruction is of mixed precision and/or uses horizontal reduction as detailed herein.

Detailed herein are embodiments of an instruction that when executed causes, for each packed data element position of the destination, a multiplication of M N-sized packed data elements from a first and a second source that correspond to a packed data element position of a third source, an addition of results from these multiplications to a full-sized (relative to the N-sized packed data elements) packed data element of a packed data element position of the third source, and a store of the result of the addition(s) in a packed data element position of the destination corresponding to the packed data element position of the third source, wherein M is equal to the full-sized packed data element size divided by N. 
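The general operation just described can be sketched in scalar form. This is a hedged illustration only; the function name and the flat-list operand layout are assumptions for exposition, not the instruction's actual encoding:

```python
def fused_multiply_accumulate(src1, src2, srcdst, m):
    """For each full-sized element position i of srcdst, accumulate the m
    products of the corresponding N-sized elements of src1 and src2.

    src1 and src2 each hold m * len(srcdst) narrow elements; srcdst holds
    the wide accumulator elements (M = full size / N)."""
    for i in range(len(srcdst)):
        total = srcdst[i]
        for j in range(m):
            total += src1[m * i + j] * src2[m * i + j]
        srcdst[i] = total
    return srcdst
```

With m = 2 this models the half-sized-element variant; with m = 4, the quarter-sized variant.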
For example, when M is equal to 2 (e.g., a full-sized packed data element is 16 bits and N is 8 bits), consecutive packed data elements from the first source are multiplied with respective consecutive packed data elements of the second source.

As such, detailed herein are embodiments of an instruction that when executed causes a multiplication of a pair of half-sized packed data elements from a first and a second source, and adds results from these multiplications to a full-sized (relative to the half-sized packed data elements) packed data element of a third source and stores the result in a destination. In other words, in some embodiments, for each data element position i of the third source, there is a multiplication of data from a data element position [2i] of the first source with data from a data element position [2i] of the second source to generate a first result, a multiplication of data from a data element position [2i+1] of the first source with data from a data element position [2i+1] of the second source to generate a second result, and an addition of the first and second results to data from the data element position i of the third source. In some embodiments, saturation is performed at the end of the addition. In some embodiments, the data from the first and/or second sources is sign extended prior to multiplication.

Further, detailed herein are embodiments of an instruction that when executed causes a multiplication of a quartet of quarter-sized packed data elements from a first and a second source, and adds results from these multiplications to a full-sized (relative to the quarter-sized packed data elements) packed data element of a third source and stores the result in a destination. 
In other words, in some embodiments, for each data element position i of the third source, there is a multiplication of data from a data element position [4i] of the first source with data from a data element position [4i] of the second source to generate a first result, a multiplication of data from a data element position [4i+1] of the first source with data from a data element position [4i+1] of the second source to generate a second result, a multiplication of data from a data element position [4i+2] of the first source with data from a data element position [4i+2] of the second source to generate a third result, a multiplication of data from a data element position [4i+3] of the first source with data from a data element position [4i+3] of the second source to generate a fourth result, and an addition of the first, second, third, and fourth results to data from the data element position i of the third source. In some embodiments, saturation is performed at the end of the addition. In some embodiments, the data from the first and/or second sources is sign extended prior to multiplication.

In some embodiments of integer versions of the instruction, saturation circuitry is used to preserve the sign of an operand when the addition results in a value that is too big. In particular, the saturation evaluation occurs on the infinite precision result in between the multi-way add and the write to the destination. There are instances where the largest positive or least negative number cannot be trusted since it may reflect that a calculation exceeded the container space. However, this can at least be checked. When the accumulator is floating point and the input terms are integer, then the question to be answered is how and when the conversion from the integer products is done such that there is no double-rounding from the partial terms to the final floating point accumulation. 
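The saturating write-back described above can be sketched as a simple clamp on the infinite-precision sum. This is a hedged illustration; the 32-bit signed-doubleword container width is an assumption chosen for the example, not a requirement of the instruction:

```python
INT32_MAX = 2**31 - 1
INT32_MIN = -(2**31)

def saturate_to_doubleword(value):
    # Clamp the infinite-precision addition result into the signed
    # doubleword container, preserving the sign instead of wrapping.
    return max(INT32_MIN, min(INT32_MAX, value))
```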
In some embodiments, the sum of products and the floating point accumulator are turned into infinite precision values (fixed point numbers of hundreds of bits), the addition is performed, and then a single rounding to the actual accumulator type is performed.

When the input terms are floating point operands, the definition needs to settle rounding, the handling of special values (infinities and not-a-numbers (NaNs)), and the ordering of faults in the calculation. In some embodiments, an order of operations is specified that is emulated and ensures that the implementation delivers faults in that order. It may be impossible for such an implementation to avoid multiple roundings in the course of the calculation. A single precision multiply can fit completely into a double precision result regardless of input values. However, the horizontal add of two such operations may not fit into a double without rounding, and the sum may not fit the accumulator without an additional rounding. In some embodiments, rounding is performed once during the horizontal summation and once during the accumulation.

Figure 3 illustrates an embodiment of hardware to process an instruction such as a fused multiply accumulate instruction. 
As illustrated, storage 303 stores a fused multiply accumulate instruction 301 that, when executed, causes, for each packed data element position of the destination, a multiplication of M N-sized packed data elements from a first and a second source that correspond to a packed data element position of a third source, an add of results from these multiplications to a full-sized (relative to the N-sized packed data elements) packed data element of a packed data element position of the third source, and a store of the result of the addition(s) in a packed data element position of the destination corresponding to the packed data element position of the third source, wherein M is equal to the full-sized packed data element size divided by N.

The instruction 301 is received by decode circuitry 305. For example, the decode circuitry 305 receives this instruction from fetch logic/circuitry. The instruction includes fields for the first, second, and third sources, and a destination. In some embodiments, the sources and destination are registers. Additionally, in some embodiments, the third source and the destination are the same. The opcode and/or prefix of the instruction 301 includes an indication of source and destination data element sizes {B/W/D/Q} of byte, word, doubleword, and quadword, and a number of iterations.

More detailed embodiments of at least one instruction format will be detailed later. The decode circuitry 305 decodes the instruction into one or more operations. In some embodiments, this decoding includes generating a plurality of micro-operations to be performed by execution circuitry (such as execution circuitry 309). 
The decode circuitry 305 also decodes instruction prefixes.

In some embodiments, register renaming, register allocation, and/or scheduling circuitry 307 provides functionality for one or more of: 1) renaming logical operand values to physical operand values (e.g., a register alias table in some embodiments), 2) allocating status bits and flags to the decoded instruction, and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments). Registers (register file) and/or memory 308 store data as operands of the instruction to be operated on by execution circuitry 309. Exemplary register types include packed data registers, general purpose registers, and floating point registers.

Execution circuitry 309 executes the decoded instruction. In some embodiments, retirement/write back circuitry 311 architecturally commits the destination register into the registers or memory and retires the instruction.

An embodiment of a format for a fused multiply accumulate instruction is FMA[SOURCESIZE{B/W/D/Q}][DESTSIZE{B/W/D/Q}] DSTREG, SRC1, SRC2, SRC3. In some embodiments, FMA[SOURCESIZE{B/W/D/Q}][DESTSIZE{B/W/D/Q}] is the opcode and/or prefix of the instruction. B/W/D/Q indicates the data element sizes of the sources/destination as byte, word, doubleword, and quadword. DSTREG is a field for the packed data destination register operand. SRC1, SRC2, and SRC3 are fields for the sources such as packed data registers and/or memory.

Another embodiment of a format for a fused multiply accumulate instruction is FMA[SOURCESIZE{B/W/D/Q}][DESTSIZE{B/W/D/Q}] DSTREG/SRC3, SRC1, SRC2. In some embodiments, FMA[SOURCESIZE{B/W/D/Q}][DESTSIZE{B/W/D/Q}] is the opcode and/or prefix of the instruction. B/W/D/Q indicates the data element sizes of the sources/destination as byte, word, doubleword, and quadword. DSTREG/SRC3 is a field for the packed data destination register operand and a third source operand. 
SRC1 and SRC2 are fields for the sources such as packed data registers and/or memory.

In some embodiments, the fused multiply accumulate instruction includes a field for a writemask register operand (k) (e.g., FMA[SOURCESIZE{B/W/D/Q}][DESTSIZE{B/W/D/Q}]{k} DSTREG/SRC3, SRC1, SRC2 or FMA[SOURCESIZE{B/W/D/Q}][DESTSIZE{B/W/D/Q}]{k} DSTREG, SRC1, SRC2, SRC3). A writemask is used to conditionally control per-element operations and updating of results. Depending upon the implementation, the writemask uses merging or zeroing masking. Instructions encoded with a predicate (writemask, write mask, or k register) operand use that operand to conditionally control per-element computational operation and updating of results to the destination operand. The predicate operand is known as the opmask (writemask) register. The opmask is a set of architectural registers of size MAX_KL (64-bit). Note that from this set of architectural registers, only k1 through k7 can be addressed as a predicate operand. k0 can be used as a regular source or destination but cannot be encoded as a predicate operand. Note also that a predicate operand can be used to enable memory fault-suppression for some instructions with a memory operand (source or destination). As a predicate operand, the opmask registers contain one bit to govern the operation/update to each data element of a vector register. In general, opmask registers can support instructions with element sizes: single-precision floating-point (float32), integer doubleword (int32), double-precision floating-point (float64), and integer quadword (int64). The length of an opmask register, MAX_KL, is sufficient to handle up to 64 elements with one bit per element, i.e., 64 bits. For a given vector length, each instruction accesses only the number of least significant mask bits that are needed based on its data type. An opmask register affects an instruction at per-element granularity. 
So, any numeric or non-numeric operation of each data element and per-element updates of intermediate results to the destination operand are predicated on the corresponding bit of the opmask register. In most embodiments, an opmask serving as a predicate operand obeys the following properties: 1) the instruction's operation is not performed for an element if the corresponding opmask bit is not set (this implies that no exception or violation can be caused by an operation on a masked-off element, and consequently, no exception flag is updated as a result of a masked-off operation); 2) a destination element is not updated with the result of the operation if the corresponding writemask bit is not set; instead, the destination element value must be preserved (merging-masking) or it must be zeroed out (zeroing-masking); and 3) for some instructions with a memory operand, memory faults are suppressed for elements with a mask bit of 0. Note that this feature provides a versatile construct to implement control-flow predication, as the mask in effect provides a merging behavior for vector register destinations. As an alternative, the masking can be used for zeroing instead of merging, so that the masked-out elements are updated with 0 instead of preserving the old value. The zeroing behavior is provided to remove the implicit dependency on the old value when it is not needed.

In embodiments, encodings of the instruction include a scale-index-base (SIB) type memory addressing operand that indirectly identifies multiple indexed destination locations in memory. In one embodiment, an SIB type memory operand may include an encoding identifying a base address register. The contents of the base address register may represent a base address in memory from which the addresses of the particular destination locations in memory are calculated. 
For example, the base address may be the address of the first location in a block of potential destination locations for an extended vector instruction. In one embodiment, an SIB type memory operand may include an encoding identifying an index register. Each element of the index register may specify an index or offset value usable to compute, from the base address, an address of a respective destination location within a block of potential destination locations. In one embodiment, an SIB type memory operand may include an encoding specifying a scaling factor to be applied to each index value when computing a respective destination address. For example, if a scaling factor value of four is encoded in the SIB type memory operand, each index value obtained from an element of the index register may be multiplied by four and then added to the base address to compute a destination address.In one embodiment, an SIB type memory operand of the form vm32{x,y,z} may identify a vector array of memory operands specified using SIB type memory addressing. In this example, the array of memory addresses is specified using a common base register, a constant scaling factor, and a vector index register containing individual elements, each of which is a 32-bit index value. The vector index register may be an XMM register (vm32x), a YMM register (vm32y), or a ZMM register (vm32z). In another embodiment, an SIB type memory operand of the form vm64{x,y,z} may identify a vector array of memory operands specified using SIB type memory addressing. In this example, the array of memory addresses is specified using a common base register, a constant scaling factor, and a vector index register containing individual elements, each of which is a 64-bit index value. 
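The scaled-index address computation just described can be sketched as follows. This is a hedged illustration; the function name and the plain-integer representation of register contents are assumptions for exposition, not the encoded operand format:

```python
def sib_destination_addresses(base, index_elements, scale):
    # Each element of the index register yields one destination address:
    # address = base + index * scale.
    return [base + index * scale for index in index_elements]
```

For a base address of 0x1000, index elements [0, 1, 2], and a scaling factor of four, the computed destination addresses are 0x1000, 0x1004, and 0x1008.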
The vector index register may be an XMM register (vm64x), a YMM register (vm64y), or a ZMM register (vm64z).

Figure 4 illustrates an embodiment of a method performed by a processor to process a fused multiply accumulate instruction.

At 401, an instruction is fetched. For example, a fused multiply accumulate instruction is fetched. The fused multiply accumulate instruction includes an opcode, and fields for packed data source operands and a packed data destination operand as detailed above. In some embodiments, the fused multiply accumulate instruction includes a writemask operand. In some embodiments, the instruction is fetched from an instruction cache.

The fetched instruction is decoded at 403. For example, the fetched fused multiply accumulate instruction is decoded by decode circuitry such as that detailed herein.

Data values associated with the source operands of the decoded instruction are retrieved at 405.

At 407, the decoded instruction is executed by execution circuitry (hardware) such as that detailed herein. For the fused multiply accumulate instruction, the execution will cause, for each packed data element position of the destination, a multiplication of M N-sized packed data elements from a first and a second source that correspond to a packed data element position of a third source, an addition of the results from these multiplications to a full-sized (relative to the N-sized packed data elements) packed data element of a packed data element position of the third source, and a store of the result of the addition(s) in a packed data element position of the destination corresponding to the packed data element position of the third source, wherein M is equal to the full-sized packed data element divided by N.

In some embodiments, the instruction is committed or retired at 409.

Figure 5 illustrates an embodiment of a subset of the execution of a fused multiply accumulate.
In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the fused multiply accumulate is operating on signed sources wherein the accumulator is 2x the input data size. Figure 6 illustrates an embodiment of pseudo code for implementing this instruction in hardware.

A first signed source (source 1 501) and a second signed source (source 2 503) each have four packed data elements. Each of these packed data elements stores signed data such as floating point data. A third signed source 509 (source 3) has two packed data elements, each of which stores signed data. The sizes of the first and second signed sources 501 and 503 are half that of the third signed source 509. For example, the first and second signed sources 501 and 503 could have 32-bit packed data elements (e.g., single precision floating point) while the third signed source 509 could have 64-bit packed data elements (e.g., double precision floating point).

In this illustration, only the two most significant packed data element positions of the first and second signed sources 501 and 503 and the most significant packed data element position of the third signed source 509 are shown. Of course, the other packed data element positions would also be processed.

As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first and second signed sources 501 and 503 are multiplied using a multiplier circuit 505, and the data from the second most significant packed data element positions of the first and second signed sources 501 and 503 are multiplied using a multiplier circuit 507. In some embodiments, these multiplier circuits 505 and 507 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel.
In some contexts, parallel execution is done using lanes that are the size of the signed third source 509. The results of each of the multiplications are added using addition circuitry 511.

The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of signed source 3 509 (using a different adder 513 or the same adder 511).

Finally, the result of the second addition is stored into the signed destination 515 in a packed data element position that corresponds to the packed data element position used from the signed third source 509. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

Figure 7 illustrates an embodiment of a subset of the execution of a fused multiply accumulate. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the fused multiply accumulate is operating on signed sources wherein the accumulator is 2x the input data size. Figure 8 illustrates an embodiment of pseudo code for implementing this instruction in hardware.

A first signed source (source 1 701) and a second signed source (source 2 703) each have four packed data elements. Each of these packed data elements stores signed data such as integer data. A third signed source 709 (source 3) has two packed data elements, each of which stores signed data. The sizes of the first and second signed sources 701 and 703 are half that of the third signed source 709.
For example, the first and second signed sources 701 and 703 could have 32-bit packed data elements (e.g., single precision floating point) while the third signed source 709 could have 64-bit packed data elements (e.g., double precision floating point).

In this illustration, only the two most significant packed data element positions of the first and second signed sources 701 and 703 and the most significant packed data element position of the third signed source 709 are shown. Of course, the other packed data element positions would also be processed.

As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first and second signed sources 701 and 703 are multiplied using a multiplier circuit 705, and the data from the second most significant packed data element positions of the first and second signed sources 701 and 703 are multiplied using a multiplier circuit 707. In some embodiments, these multiplier circuits 705 and 707 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 709. The results of each of the multiplications are added to the signed third source 709 using addition/saturation circuitry 711.

Addition/saturation (accumulator) circuitry 711 preserves a sign of an operand when the addition results in a value that is too big. In particular, saturation evaluation occurs on the infinite precision result between the multi-way-add and the write to the signed destination 715.
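The addition/saturation behavior can be modeled as a single clamp applied to the full-precision sum (a sketch with hypothetical names; an 8-bit accumulator width is used below only to make the overflow visible):

```python
def add_saturate_signed(acc, products, bits=64):
    """Model of the addition/saturation circuitry: the accumulator and the
    sum of products are evaluated at full (infinite) precision, then clamped
    once to the signed range of the accumulator type, preserving the sign."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    result = acc + sum(products)   # Python ints are arbitrary precision
    return max(lo, min(hi, result))

# A sum that overflows an 8-bit accumulator saturates to +127
# instead of wrapping around to a negative value.
add_saturate_signed(100, [50], bits=8)
# → 127
```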
When the accumulator 711 is floating point and the input terms are integer, the sum of products and the floating point accumulator input value are turned into infinite precision values (fixed point numbers of hundreds of bits), the addition of the multiplication results and the third input is performed, and a single rounding to the actual accumulator type is performed.

The result of the addition and saturation check is stored into the signed destination 715 in a packed data element position that corresponds to the packed data element position used from the signed third source 709. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

Figure 9 illustrates an embodiment of a subset of the execution of a fused multiply accumulate. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the fused multiply accumulate is operating on a signed source and an unsigned source wherein the accumulator is 4x the input data size. Figure 10 illustrates an embodiment of pseudo code for implementing this instruction in hardware.

A first signed source (source 1 901) and a second unsigned source (source 2 903) each have four packed data elements. Each of these packed data elements stores data such as floating point or integer data. A third signed source (source 3 915) has a packed data element that stores signed data. The sizes of the first and second sources 901 and 903 are a quarter of that of the third signed source 915.
For example, the first and second sources 901 and 903 could have 16-bit packed data elements (e.g., word) and the third signed source 915 could have 64-bit packed data elements (e.g., double precision floating point or 64-bit integer).

In this illustration, the four most significant packed data element positions of the first and second sources 901 and 903 and the most significant packed data element position of the third signed source 915 are shown. Of course, other packed data element positions would also be processed if there are any.

As illustrated, packed data elements are processed in quadruplets. For example, the data of the most significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 905, data from the second most significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 907, data from the third most significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 909, and data from the least significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 911. In some embodiments, the signed packed data elements of the first source 901 are sign extended and the unsigned packed data elements of the second source 903 are zero extended prior to the multiplications.

In some embodiments, these multiplier circuits 905-911 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 915.
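The quadruplet step described above — sign extending the signed source, zero extending the unsigned source, multiplying pairwise, and accumulating — can be sketched as follows (a hypothetical helper assuming 16-bit word inputs):

```python
def dot_accumulate_mixed(signed_words, unsigned_words, acc):
    """Model of one quadruplet: sign extend the elements of the first
    (signed) source, zero extend the elements of the second (unsigned)
    source, multiply pairwise, and add all four products to the wide
    accumulator."""
    def sign_extend(v, bits=16):
        # Interpret the raw 16-bit pattern as a signed value.
        return v - (1 << bits) if v & (1 << (bits - 1)) else v

    # Unsigned words are zero extended, i.e. used as-is.
    products = [sign_extend(s) * u for s, u in zip(signed_words, unsigned_words)]
    return acc + sum(products)

# 0xFFFF sign extends to -1; the unsigned source contributes its raw value.
dot_accumulate_mixed([0xFFFF, 2, 3, 4], [10, 10, 10, 10], 1000)
# → 1000 + (-10 + 20 + 30 + 40) = 1080
```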
The results of each of the multiplications are added using addition circuitry 913.

The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of signed source 3 915 (using a different adder 917 or the same adder 913).

Finally, the result of the second addition is stored into the signed destination 919 in a packed data element position that corresponds to the packed data element position used from the signed third source 915. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

Figure 11 illustrates an embodiment of a subset of the execution of a fused multiply accumulate. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the fused multiply accumulate is operating on a signed source and an unsigned source wherein the accumulator is 4x the input data size. Figure 12 illustrates an embodiment of pseudo code for implementing this instruction in hardware.

A first signed source (source 1 1101) and a second unsigned source (source 2 1103) each have four packed data elements. Each of these packed data elements stores data such as floating point or integer data. A third signed source (source 3 1115) has a packed data element that stores signed data. The sizes of the first and second sources 1101 and 1103 are a quarter of that of the third signed source 1115.
For example, the first and second sources 1101 and 1103 could have 16-bit packed data elements (e.g., word) and the third signed source 1115 could have 64-bit packed data elements (e.g., double precision floating point or 64-bit integer).

In this illustration, the four most significant packed data element positions of the first and second sources 1101 and 1103 and the most significant packed data element position of the third signed source 1115 are shown. Of course, other packed data element positions would also be processed if there are any.

As illustrated, packed data elements are processed in quadruplets. For example, the data of the most significant packed data element positions of the first and second sources 1101 and 1103 are multiplied using a multiplier circuit 1105, data from the second most significant packed data element positions of the first and second sources 1101 and 1103 are multiplied using a multiplier circuit 1107, data from the third most significant packed data element positions of the first and second sources 1101 and 1103 are multiplied using a multiplier circuit 1109, and data from the least significant packed data element positions of the first and second sources 1101 and 1103 are multiplied using a multiplier circuit 1111. In some embodiments, the signed packed data elements of the first source 1101 are sign extended and the unsigned packed data elements of the second source 1103 are zero extended prior to the multiplications.

In some embodiments, these multiplier circuits 1105-1111 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 1115.
The results of each of the multiplications are added to the data from the most significant packed data element position of signed source 3 1115 using addition/saturation circuitry 1113.

Addition/saturation (accumulator) circuitry 1113 preserves a sign of an operand when the addition results in a value that is too big. In particular, saturation evaluation occurs on the infinite precision result between the multi-way-add and the write to the destination 1119. When the accumulator 1113 is floating point and the input terms are integer, the sum of products and the floating point accumulator input value are turned into infinite precision values (fixed point numbers of hundreds of bits), the addition of the multiplication results and the third input is performed, and a single rounding to the actual accumulator type is performed.

The result of the addition and saturation check is stored into the signed destination 1119 in a packed data element position that corresponds to the packed data element position used from the signed third source 1115. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

The figures below detail exemplary architectures and systems to implement embodiments of the above.
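Combining the quadruplet multiplies of Figure 11 with the addition/saturation circuitry, the per-position operation can be sketched as follows (hypothetical names; full-precision arithmetic with one final clamp, as described above):

```python
def quad_multiply_add_saturate(signed_src, unsigned_src, acc, acc_bits=64):
    """Per-position model: sign extend the signed source words, zero extend
    the unsigned source words, multiply pairwise, add all products and the
    accumulator at full precision, then clamp once to the accumulator's
    signed range (preserving the sign on overflow)."""
    def sign_extend(v, bits=16):
        return v - (1 << bits) if v & (1 << (bits - 1)) else v

    total = acc + sum(sign_extend(s) * u
                      for s, u in zip(signed_src, unsigned_src))
    lo, hi = -(1 << (acc_bits - 1)), (1 << (acc_bits - 1)) - 1
    return max(lo, min(hi, total))

# Positive overflow of a 32-bit accumulator clamps to 2**31 - 1
# rather than wrapping around.
quad_multiply_add_saturate([0x7FFF] * 4, [0xFFFF] * 4, 2**31 - 1, acc_bits=32)
```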
In some embodiments, one or more hardware components and/or instructions described above are emulated as detailed below, or implemented as software modules.

Exemplary embodiments include a processor comprising a decoder to decode a single instruction having an opcode, a destination field representing a destination operand, and fields for a first, second, and third packed data source operand, wherein packed data elements of the first and second packed data source operands are of a first size, different than a second size of packed data elements of the third packed data source operand; a register file having a plurality of packed data registers including registers for the source and destination operands; and execution circuitry to execute the decoded single instruction to perform, for each packed data element position of the destination operand, a multiplication of M N-sized packed data elements from the first and second packed data sources that correspond to a packed data element position of the third packed data source, an addition of the results from these multiplications to a full-sized packed data element of a packed data element position of the third packed data source, and storage of the addition result in a packed data element position of the destination corresponding to the packed data element position of the third packed data source, wherein M is equal to the full-sized packed data element divided by N.

In some embodiments, one or more of the following apply: the instruction defines sizes of the packed data elements; the execution circuitry zero extends packed data elements of the second source and sign extends packed data elements of the first source prior to the multiplications; when the first size is half of the second size, a first addition is performed on each of the multiplications and a second addition is performed on a result of the first addition and a result from a previous iteration; when the first size is half of the second size, a single addition and saturation check is
performed on each of the multiplications and a result from a previous iteration; when the first size is a quarter of the second size, a first addition is performed on each of the multiplications and a second addition is performed on a result of the first addition and a result from a previous iteration; and/or when the first size is a quarter of the second size, a single addition and saturation check is performed on each of the multiplications and a result from a previous iteration.

Exemplary embodiments include a method comprising decoding a single instruction having an opcode, a destination field representing a destination operand, and fields for a first, second, and third packed data source operand, wherein packed data elements of the first and second packed data source operands are of a first size, different than a second size of packed data elements of the third packed data source operand, the source and destination operands being stored in registers of a register file having a plurality of packed data registers; and executing the decoded single instruction to perform, for each packed data element position of the destination operand, a multiplication of M N-sized packed data elements from the first and second packed data sources that correspond to a packed data element position of the third packed data source, an addition of the results from these multiplications to a full-sized packed data element of a packed data element position of the third packed data source, and storage of the addition result in a packed data element position of the destination corresponding to the packed data element position of the third packed data source, wherein M is equal to the full-sized packed data element divided by N.

In some embodiments, one or more of the following apply: the instruction defines sizes of the packed data elements; the execution circuitry zero extends packed data elements of the second source and sign extends packed data elements of the first source prior to the multiplications; when the first size is half
of the second size, a first addition is performed on each of the multiplications and a second addition is performed on a result of the first addition and a result from a previous iteration; when the first size is half of the second size, a single addition and saturation check is performed on each of the multiplications and a result from a previous iteration; when the first size is a quarter of the second size, a first addition is performed on each of the multiplications and a second addition is performed on a result of the first addition and a result from a previous iteration; and/or when the first size is a quarter of the second size, a single addition and saturation check is performed on each of the multiplications and a result from a previous iteration.

Exemplary embodiments include a non-transitory machine-readable medium storing an instruction which, when executed, causes performance of a method comprising decoding a single instruction having an opcode, a destination field representing a destination operand, and fields for a first, second, and third packed data source operand, wherein packed data elements of the first and second packed data source operands are of a first size, different than a second size of packed data elements of the third packed data source operand, the source and destination operands being stored in registers of a register file having a plurality of packed data registers; and executing the decoded single instruction to perform, for each packed data element position of the destination operand, a multiplication of M N-sized packed data elements from the first and second packed data sources that correspond to a packed data element position of the third packed data source, an addition of the results from these multiplications to a full-sized packed data element of a packed data element position of the third packed data source, and storage of the addition result in a packed data element position of the destination corresponding to the packed data element position of the third packed data source, wherein M
is equal to the full-sized packed data element divided by N.

In some embodiments, one or more of the following apply: the instruction defines sizes of the packed data elements; the execution circuitry zero extends packed data elements of the second source and sign extends packed data elements of the first source prior to the multiplications; when the first size is half of the second size, a first addition is performed on each of the multiplications and a second addition is performed on a result of the first addition and a result from a previous iteration; when the first size is half of the second size, a single addition and saturation check is performed on each of the multiplications and a result from a previous iteration; when the first size is a quarter of the second size, a first addition is performed on each of the multiplications and a second addition is performed on a result of the first addition and a result from a previous iteration; and/or when the first size is a quarter of the second size, a single addition and saturation check is performed on each of the multiplications and a result from a previous iteration.

Exemplary embodiments include a system including memory and a processor comprising a decoder to decode a single instruction having an opcode, a destination field representing a destination operand, and fields for a first, second, and third packed data source operand, wherein packed data elements of the first and second packed data source operands are of a first size, different than a second size of packed data elements of the third packed data source operand; a register file having a plurality of packed data registers including registers for the source and destination operands; and execution circuitry to execute the decoded single instruction to perform, for each packed data element position of the destination operand, a multiplication of M N-sized packed data elements from the first and second packed data sources that correspond to a packed data element position
of the third packed data source, an addition of the results from these multiplications to a full-sized packed data element of a packed data element position of the third packed data source, and storage of the addition result in a packed data element position of the destination corresponding to the packed data element position of the third packed data source, wherein M is equal to the full-sized packed data element divided by N.

In some embodiments, one or more of the following apply: the instruction defines sizes of the packed data elements; the execution circuitry zero extends packed data elements of the second source and sign extends packed data elements of the first source prior to the multiplications; when the first size is half of the second size, a first addition is performed on each of the multiplications and a second addition is performed on a result of the first addition and a result from a previous iteration; when the first size is half of the second size, a single addition and saturation check is performed on each of the multiplications and a result from a previous iteration; when the first size is a quarter of the second size, a first addition is performed on each of the multiplications and a second addition is performed on a result of the first addition and a result from a previous iteration; and/or when the first size is a quarter of the second size, a single addition and saturation check is performed on each of the multiplications and a result from a previous iteration.

Embodiments of the instruction(s) detailed above may be embodied in a "generic vector friendly instruction format" which is detailed below. In other embodiments, such a format is not utilized and another instruction format is used; however, the description below of the writemask registers, various data transformations (swizzle, broadcast, etc.), addressing, etc. is generally applicable to the description of the embodiments of the instruction(s) above.
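The per-destination-element operation recited in these embodiments — M N-sized multiplies summed into a full-sized accumulator, with M equal to the full size divided by N — can be sketched as follows (a hypothetical model ignoring element widths, extension, saturation, and writemasking):

```python
def fused_multiply_accumulate(src1, src2, src3, m):
    """Per-element model of the fused multiply accumulate described above.

    src1, src2: lists of N-sized packed data elements.
    src3:       list of full-sized packed data elements (full size = m * N).
    m:          number of N-sized source elements per full-sized element.
    """
    dst = []
    for i, acc in enumerate(src3):
        # Multiply the m corresponding small elements pairwise...
        products = [src1[i * m + j] * src2[i * m + j] for j in range(m)]
        # ...then add their sum to the full-sized element of the third source.
        dst.append(acc + sum(products))
    return dst

# Two 4-element sources feeding a 2-element accumulator (m = 2):
# position 0 accumulates 1*5 + 2*6 = 17; position 1 accumulates 3*7 + 4*8 = 53.
fused_multiply_accumulate([1, 2, 3, 4], [5, 6, 7, 8], [100, 200], 2)
# → [117, 253]
```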
Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) above may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Instruction Sets

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands.
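The distinction between a format (which fields exist) and an occurrence (what those fields contain) in the ADD example can be illustrated with a small sketch; the field names and values here are purely illustrative, not an actual encoding:

```python
from dataclasses import dataclass

@dataclass
class AddInstructionFormat:
    """Hypothetical model of the ADD example: the format fixes which fields
    exist (an opcode field and two operand fields); each occurrence in an
    instruction stream supplies specific contents for those fields."""
    opcode: int     # specific opcode value identifying ADD
    src1_dst: int   # operand field selecting the source1/destination register
    src2: int       # operand field selecting the source2 register

# One occurrence of ADD, with specific operand field contents:
add = AddInstructionFormat(opcode=0x01, src1_dst=3, src2=5)
```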
A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014).

Exemplary Instruction Formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

Figures 13A-13B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention. Figure 13A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention; while Figure 13B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention. Specifically, there is a generic vector friendly instruction format 1300 for which class A and class B instruction templates are defined, both of which include no memory access 1305 instruction templates and memory access 1320 instruction templates.
The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).

The class A instruction templates in Figure 13A include: 1) within the no memory access 1305 instruction templates there is shown a no memory access, full round control type operation 1310 instruction template and a no memory access, data transform type operation 1315 instruction template; and 2) within the memory access 1320 instruction templates there is shown a memory access, temporal 1325 instruction template and a memory access, non-temporal 1330 instruction template.
The class B instruction templates in Figure 13B include: 1) within the no memory access 1305 instruction templates there is shown a no memory access, write mask control, partial round control type operation 1312 instruction template and a no memory access, write mask control, vsize type operation 1317 instruction template; and 2) within the memory access 1320 instruction templates there is shown a memory access, write mask control 1327 instruction template.

The generic vector friendly instruction format 1300 includes the following fields listed below in the order illustrated in Figures 13A-13B.

Format field 1340 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 1342 - its content distinguishes different base operations.

Register index field 1344 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file.
While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

Modifier field 1346 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 1305 instruction templates and memory access 1320 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Augmentation operation field 1350 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 1368, an alpha field 1352, and a beta field 1354.
The augmentation operation field 1350 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 1360 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale ∗ index + base).

Displacement field 1362A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale ∗ index + base + displacement).

Displacement factor field 1362B (note that the juxtaposition of displacement field 1362A directly over displacement factor field 1362B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale ∗ index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 1374 (described later herein) and the data manipulation field 1354C. The displacement field 1362A and the displacement factor field 1362B are optional in the sense that they are not used for the no memory access 1305 instruction templates and/or different embodiments may implement only one or none of the two.

Data element width field 1364 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions).
This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 1370 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 1370 allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
While embodiments of the invention are described in which the write mask field's 1370 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 1370 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field's 1370 content to directly specify the masking to be performed.

Immediate field 1372 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and it is not present in instructions that do not use an immediate.

Class field 1368 - its content distinguishes between different classes of instructions. With reference to Figures 13A-B, the contents of this field select between class A and class B instructions. In Figures 13A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 1368A and class B 1368B for the class field 1368 respectively in Figures 13A-B).

Instruction Templates of Class A

In the case of the non-memory access 1305 instruction templates of class A, the alpha field 1352 is interpreted as an RS field 1352A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 1352A.1 and data transform 1352A.2 are respectively specified for the no memory access, round type operation 1310 and the no memory access, data transform type operation 1315 instruction templates), while the beta field 1354 distinguishes which of the operations of the specified type is to be performed.
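To illustrate the address generation performed with the scale field 1360, the displacement field 1362A, and the displacement factor field 1362B described above, the following sketch models the computation in software. The function name and defaults are hypothetical; the actual computation is performed by the processor's address-generation hardware:

```python
def effective_address(base, index, scale, disp=0, disp_factor=None, n=None):
    """Model the address generation 2^scale * index + base + displacement.

    When a displacement factor is used instead of a plain displacement,
    the factor is first scaled by the memory access size N (disp8*N).
    """
    if disp_factor is not None:
        disp = disp_factor * n  # scaled displacement
    return (1 << scale) * index + base + disp

# Plain displacement: 2^2 * 3 + 0x1000 + 8 = 12 + 4096 + 8 = 4116
assert effective_address(base=0x1000, index=3, scale=2, disp=8) == 4116
# Displacement factor of 2 with a 64-byte access: displacement = 2 * 64 = 128
assert effective_address(base=0x1000, index=0, scale=0, disp_factor=2, n=64) == 4224
```

Note that the scaled form is what allows the one-byte displacement factor to cover a much larger range than a plain one-byte displacement.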
In the no memory access 1305 instruction templates, the scale field 1360, the displacement field 1362A, and the displacement scale field 1362B are not present.

No-Memory Access Instruction Templates - Full Round Control Type Operation

In the no memory access full round control type operation 1310 instruction template, the beta field 1354 is interpreted as a round control field 1354A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 1354A includes a suppress all floating point exceptions (SAE) field 1356 and a round operation control field 1358, alternative embodiments may encode both these concepts into the same field or only have one or the other of these concepts/fields (e.g., may have only the round operation control field 1358).

SAE field 1356 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 1356 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.

Round operation control field 1358 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 1358 allows for the changing of the rounding mode on a per instruction basis.
In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 1358 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

In the no memory access data transform type operation 1315 instruction template, the beta field 1354 is interpreted as a data transform field 1354B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of a memory access 1320 instruction template of class A, the alpha field 1352 is interpreted as an eviction hint field 1352B, whose content distinguishes which one of the eviction hints is to be used (in Figure 13A, temporal 1352B.1 and non-temporal 1352B.2 are respectively specified for the memory access, temporal 1325 instruction template and the memory access, non-temporal 1330 instruction template), while the beta field 1354 is interpreted as a data manipulation field 1354C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 1320 instruction templates include the scale field 1360, and optionally the displacement field 1362A or the displacement scale field 1362B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates - Temporal

Temporal data is data likely to be reused soon enough to benefit from caching.
This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates - Non-Temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

In the case of the instruction templates of class B, the alpha field 1352 is interpreted as a write mask control (Z) field 1352C, whose content distinguishes whether the write masking controlled by the write mask field 1370 should be a merging or a zeroing.

In the case of the non-memory access 1305 instruction templates of class B, part of the beta field 1354 is interpreted as an RL field 1357A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 1357A.1 and vector length (VSIZE) 1357A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 1312 instruction template and the no memory access, write mask control, VSIZE type operation 1317 instruction template), while the rest of the beta field 1354 distinguishes which of the operations of the specified type is to be performed.
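The merging and zeroing behaviors selected by the write mask control (Z) field 1352C can be modeled in software as follows. This is an illustrative sketch, not the hardware datapath; the function name and list representation are hypothetical:

```python
def apply_writemask(dest, result, mask, zeroing):
    """Per-element write masking: where the mask bit is 1 the result is
    written; where it is 0 the destination element is either preserved
    (merging) or set to zero (zeroing)."""
    out = []
    for i, (d, r) in enumerate(zip(dest, result)):
        if (mask >> i) & 1:
            out.append(r)        # mask bit set: element is updated
        else:
            out.append(0 if zeroing else d)  # protected or zeroed
    return out

dest   = [10, 20, 30, 40]
result = [1, 2, 3, 4]
assert apply_writemask(dest, result, mask=0b0101, zeroing=False) == [1, 20, 3, 40]
assert apply_writemask(dest, result, mask=0b0101, zeroing=True)  == [1, 0, 3, 0]
```

As the two assertions show, the mask need not select consecutive elements, which is what allows the partial vector operations described above.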
In the no memory access 1305 instruction templates, the scale field 1360, the displacement field 1362A, and the displacement scale field 1362B are not present.

In the no memory access, write mask control, partial round control type operation 1312 instruction template, the rest of the beta field 1354 is interpreted as a round operation field 1359A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).

Round operation control field 1359A - just as round operation control field 1358, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 1359A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 1359A content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 1317 instruction template, the rest of the beta field 1354 is interpreted as a vector length field 1359B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).

In the case of a memory access 1320 instruction template of class B, part of the beta field 1354 is interpreted as a broadcast field 1357B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 1354 is interpreted as the vector length field 1359B.
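The group of rounding operations selectable by the round operation control fields 1358 and 1359A can be sketched as follows on a plain float. This is illustrative only; the mode names are hypothetical, and the hardware operates on floating-point results rather than returning integers:

```python
import math

def apply_rounding(x, mode):
    """Model the four rounding operations named above."""
    if mode == "up":
        return math.ceil(x)          # Round-up (toward +infinity)
    if mode == "down":
        return math.floor(x)         # Round-down (toward -infinity)
    if mode == "toward_zero":
        return math.trunc(x)         # Round-towards-zero (truncate)
    if mode == "nearest_even":
        return round(x)              # Python's round() is round-half-to-even
    raise ValueError(mode)

assert apply_rounding(2.5, "up") == 3
assert apply_rounding(2.5, "down") == 2
assert apply_rounding(-2.5, "toward_zero") == -2
assert apply_rounding(2.5, "nearest_even") == 2   # ties go to the even value
```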
The memory access 1320 instruction templates include the scale field 1360, and optionally the displacement field 1362A or the displacement scale field 1362B.

With regard to the generic vector friendly instruction format 1300, a full opcode field 1374 is shown including the format field 1340, the base operation field 1342, and the data element width field 1364. While one embodiment is shown where the full opcode field 1374 includes all of these fields, the full opcode field 1374 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 1374 provides the operation code (opcode).

The augmentation operation field 1350, the data element width field 1364, and the write mask field 1370 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.

The combination of the write mask field and the data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes.
For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out-of-order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

Exemplary Specific Vector Friendly Instruction Format

Figure 14A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention. Figure 14A shows a specific vector friendly instruction format 1400 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 1400 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX).
This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 13 into which the fields from Figure 14A map are illustrated.

It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 1400 in the context of the generic vector friendly instruction format 1300 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 1400 except where claimed. For example, the generic vector friendly instruction format 1300 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 1400 is shown as having fields of specific sizes. By way of specific example, while the data element width field 1364 is illustrated as a one bit field in the specific vector friendly instruction format 1400, the invention is not so limited (that is, the generic vector friendly instruction format 1300 contemplates other sizes of the data element width field 1364).

The specific vector friendly instruction format 1400 includes the following fields listed below in the order illustrated in Figure 14A.

EVEX Prefix (Bytes 0-3) 1402 - is encoded in a four-byte form.

Format Field 1340 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 1340 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

The second-fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

REX field 1405 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), an EVEX.X bit field (EVEX Byte 1, bit [6] - X), and an EVEX.B bit field (EVEX Byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the
corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 1410 - this is the first part of the REX' field 1410 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 1415 (EVEX Byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 1364 (EVEX Byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 1420 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b.
Thus, EVEX.vvvv field 1420 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 1368 Class field (EVEX Byte 2, bit [2] - U) - if EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 1425 (EVEX Byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes.
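The prefix compaction described above can be sketched as a simple expansion table. The 2-bit values shown follow the published EVEX/VEX pp encoding; the function name is hypothetical:

```python
# 2-bit pp field to legacy SIMD prefix byte (pp=00 means no prefix).
PP_TO_PREFIX = {0b00: None, 0b01: 0x66, 0b10: 0xF3, 0b11: 0xF2}

def expand_pp(pp):
    """Expand the compacted 2-bit prefix encoding field back into the
    legacy SIMD prefix byte before it reaches the decoder's PLA."""
    return PP_TO_PREFIX[pp & 0b11]

assert expand_pp(0b01) == 0x66
assert expand_pp(0b11) == 0xF2
assert expand_pp(0b00) is None
```

The compaction saves an entire prefix byte per instruction while still letting a legacy-aware decoder see the prefixes it expects.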
An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 1352 (EVEX Byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

Beta field 1354 (EVEX Byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 1410 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 1370 (EVEX Byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real Opcode Field 1430 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M Field 1440 (Byte 5) includes MOD field 1442, Reg field 1444, and R/M field 1446. As previously described, the MOD field's 1442 content distinguishes between memory access and non-memory access operations. The role of Reg field 1444 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand.
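The inverted (1s complement) specifier encodings described above (EVEX.vvvv, and the EVEX.R'/EVEX.R extension of the ModR/M rrr bits) can be sketched as follows. The bit positions follow the description above; the function names are hypothetical:

```python
def decode_vvvv(vvvv_bits):
    """EVEX.vvvv is stored in inverted (1s complement) form, so
    register 0 is encoded as 1111b and register 15 as 0000b."""
    return (~vvvv_bits) & 0b1111

assert decode_vvvv(0b1111) == 0   # ZMM0
assert decode_vvvv(0b0000) == 15  # ZMM15

def decode_reg_index(r_prime, r, rrr):
    """R'Rrrr: the inverted EVEX.R' and EVEX.R bits extend the 3-bit
    ModR/M rrr field to a 5-bit index into the 32-register set."""
    return ((~r_prime & 1) << 4) | ((~r & 1) << 3) | (rrr & 0b111)

# With both extension bits stored as 1 (i.e. logical 0), rrr selects zmm0-7.
assert decode_reg_index(r_prime=1, r=1, rrr=0b101) == 5
# Clearing the stored bits (logical 1) reaches the upper registers.
assert decode_reg_index(r_prime=0, r=0, rrr=0b101) == 29
```

The inversion is what makes the all-ones "no operand" value of vvvv (1111b) decode to register 0, the value the BOUND-disambiguation scheme above relies on for R'.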
The role of R/M field 1446 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6) - as previously described, the scale field's 1360 content is used for memory address generation. SIB.xxx 1454 and SIB.bbb 1456 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 1362A (Bytes 7-10) - when MOD field 1442 contains 10, bytes 7-10 are the displacement field 1362A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 1362B (Byte 7) - when MOD field 1442 contains 01, byte 7 is the displacement factor field 1362B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 1362B is a reinterpretation of disp8; when using the displacement factor field 1362B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8∗N. This reduces the average instruction length (a single byte is used for the displacement but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded.
In other words, the displacement factor field 1362B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 1362B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8∗N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).

Immediate field 1372 operates as previously described.

Full Opcode Field

Figure 14B is a block diagram illustrating the fields of the specific vector friendly instruction format 1400 that make up the full opcode field 1374 according to one embodiment of the invention. Specifically, the full opcode field 1374 includes the format field 1340, the base operation field 1342, and the data element width (W) field 1364. The base operation field 1342 includes the prefix encoding field 1425, the opcode map field 1415, and the real opcode field 1430.

Register Index Field

Figure 14C is a block diagram illustrating the fields of the specific vector friendly instruction format 1400 that make up the register index field 1344 according to one embodiment of the invention. Specifically, the register index field 1344 includes the REX field 1405, the REX' field 1410, the MODR/M.reg field 1444, the MODR/M.r/m field 1446, the VVVV field 1420, the xxx field 1454, and the bbb field 1456.

Augmentation Operation Field

Figure 14D is a block diagram illustrating the fields of the specific vector friendly instruction format 1400 that make up the augmentation operation field 1350 according to one embodiment of the invention. When the class (U) field 1368 contains 0, it signifies EVEX.U0 (class A 1368A); when it contains 1, it signifies EVEX.U1 (class B 1368B).
When U=0 and the MOD field 1442 contains 11 (signifying a no memory access operation), the alpha field 1352 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 1352A. When the rs field 1352A contains a 1 (round 1352A.1), the beta field 1354 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 1354A. The round control field 1354A includes a one bit SAE field 1356 and a two bit round operation field 1358. When the rs field 1352A contains a 0 (data transform 1352A.2), the beta field 1354 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data transform field 1354B. When U=0 and the MOD field 1442 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 1352 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 1352B and the beta field 1354 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data manipulation field 1354C.

When U=1, the alpha field 1352 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 1352C. When U=1 and the MOD field 1442 contains 11 (signifying a no memory access operation), part of the beta field 1354 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 1357A; when it contains a 1 (round 1357A.1) the rest of the beta field 1354 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 1359A, while when the RL field 1357A contains a 0 (VSIZE 1357A.2) the rest of the beta field 1354 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 1359B (EVEX byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 1442 contains 00, 01, or 10 (signifying a memory access operation), the beta field 1354 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 1359B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 1357B (EVEX byte 3, bit [4] - B).

Exemplary Register Architecture

Figure 15 is a block diagram of a register architecture 1500 according to one embodiment of the invention.
In the embodiment illustrated, there are 32 vector registers 1510 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 1400 operates on this overlaid register file as illustrated in the below table.

Adjustable Vector Length | Class | Operations | Registers
Instruction templates that do not include the vector length field 1359B | A (Figure 13A; U=0) | 1310, 1315, 1325, 1330 | zmm registers (the vector length is 64 byte)
Instruction templates that do not include the vector length field 1359B | B (Figure 13B; U=1) | 1312 | zmm registers (the vector length is 64 byte)
Instruction templates that do include the vector length field 1359B | B (Figure 13B; U=1) | 1317, 1327 | zmm, ymm, or xmm registers (the vector length is 64 byte, 32 byte, or 16 byte) depending on the vector length field 1359B

In other words, the vector length field 1359B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 1359B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 1400 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.

Write mask registers 1515 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 1515 are 16 bits in size.
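The halving relationship between the vector lengths, and the zmm/ymm/xmm register overlay, can be modeled as follows. The 0/1/2 encoding order used for the vector length field here is an assumption for illustration, as is the byte-array model of a register:

```python
def vector_length_bytes(l_field):
    """Each successive vector length is half the preceding one:
    0 -> 16 bytes (xmm), 1 -> 32 bytes (ymm), 2 -> 64 bytes (zmm)."""
    return 16 << l_field

assert vector_length_bytes(0) == 16  # xmm
assert vector_length_bytes(1) == 32  # ymm
assert vector_length_bytes(2) == 64  # zmm

# The overlay: ymm is the low 256 bits of zmm, xmm the low 128 bits.
zmm = bytearray(64)
zmm[:16] = b"\xaa" * 16
assert bytes(zmm[:16]) == b"\xaa" * 16          # xmm view of the register
assert bytes(zmm[:32])[:16] == bytes(zmm[:16])  # ymm low half aliases xmm
```

Because the shorter registers alias the low-order bytes of the same storage, an instruction operating at a shorter vector length touches only a prefix of the underlying zmm register.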
As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 1525 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 1545, on which is aliased the MMX packed integer flat register file 1550 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing.
Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

Figure 16A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 16B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 16A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 16A, a processor pipeline 1600 includes a fetch stage 1602, a length decode stage 1604, a decode stage 1606, an allocation stage 1608, a renaming stage 1610, a scheduling (also known as a dispatch or issue) stage 1612, a register read/memory read stage 1614, an execute stage 1616, a write back/memory write stage 1618, an exception handling stage 1622, and a commit stage 1624.

Figure 16B shows processor core 1690 including a front end unit 1630 coupled to an execution engine unit 1650, and both are coupled to a memory unit 1670. The core 1690 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1690 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 1630 includes a branch prediction unit 1632 coupled to an instruction cache unit 1634, which is coupled to an instruction translation lookaside buffer (TLB) 1636, which is coupled to an instruction fetch unit 1638, which is coupled to a decode unit 1640. The decode unit 1640 (or decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1640 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.
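The ordering of the pipeline 1600 stages can be captured as a simple sequence. This is an illustrative sketch only; the stage names come from the description above, but the tuple itself is not part of the specification.

```python
# Illustrative sketch only: the stages of processor pipeline 1600 as an
# ordered sequence (stage names from the description; numeric references
# shown in comments).

PIPELINE_1600 = (
    "fetch",                      # 1602
    "length decode",              # 1604
    "decode",                     # 1606
    "allocation",                 # 1608
    "renaming",                   # 1610
    "scheduling",                 # 1612 (also known as dispatch or issue)
    "register read/memory read",  # 1614
    "execute",                    # 1616
    "write back/memory write",    # 1618
    "exception handling",         # 1622
    "commit",                     # 1624
)

# An in-order pipeline visits these stages strictly in order; an out-of-order
# pipeline may reorder work between renaming and commit, but instructions
# still retire (commit) in program order.
assert PIPELINE_1600.index("renaming") < PIPELINE_1600.index("execute")
assert PIPELINE_1600.index("execute") < PIPELINE_1600.index("commit")
assert len(PIPELINE_1600) == 11
```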
In one embodiment, the core 1690 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1640 or otherwise within the front end unit 1630). The decode unit 1640 is coupled to a rename/allocator unit 1652 in the execution engine unit 1650.

The execution engine unit 1650 includes the rename/allocator unit 1652 coupled to a retirement unit 1654 and a set of one or more scheduler unit(s) 1656. The scheduler unit(s) 1656 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1656 is coupled to the physical register file(s) unit(s) 1658. Each of the physical register file(s) units 1658 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1658 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1658 is overlapped by the retirement unit 1654 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1654 and the physical register file(s) unit(s) 1658 are coupled to the execution cluster(s) 1660. The execution cluster(s) 1660 includes a set of one or more execution units 1662 and a set of one or more memory access units 1664.
The execution units 1662 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1656, physical register file(s) unit(s) 1658, and execution cluster(s) 1660 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1664). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1664 is coupled to the memory unit 1670, which includes a data TLB unit 1672 coupled to a data cache unit 1674 coupled to a level 2 (L2) cache unit 1676. In one exemplary embodiment, the memory access units 1664 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1672 in the memory unit 1670. The instruction cache unit 1634 is further coupled to the level 2 (L2) cache unit 1676 in the memory unit 1670.
The L2 cache unit 1676 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1600 as follows: 1) the instruction fetch unit 1638 performs the fetch and length decode stages 1602 and 1604; 2) the decode unit 1640 performs the decode stage 1606; 3) the rename/allocator unit 1652 performs the allocation stage 1608 and renaming stage 1610; 4) the scheduler unit(s) 1656 performs the schedule stage 1612; 5) the physical register file(s) unit(s) 1658 and the memory unit 1670 perform the register read/memory read stage 1614; 6) the execution cluster 1660 performs the execute stage 1616; 7) the memory unit 1670 and the physical register file(s) unit(s) 1658 perform the write back/memory write stage 1618; 8) various units may be involved in the exception handling stage 1622; and 9) the retirement unit 1654 and the physical register file(s) unit(s) 1658 perform the commit stage 1624.

The core 1690 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 1690 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1634/1674 and a shared L2 cache unit 1676, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

Figures 17A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

Figure 17A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1702 and with its local subset of the Level 2 (L2) cache 1704, according to embodiments of the invention. In one embodiment, an instruction decoder 1700 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1706 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1708 and a vector unit 1710 use separate register sets (respectively, scalar registers 1712 and vector registers 1714) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1706, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1704 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1704. Data read by a processor core is stored in its L2 cache subset 1704 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1704 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip.
Each ring data-path is 1012 bits wide per direction.

Figure 17B is an expanded view of part of the processor core in Figure 17A according to embodiments of the invention. Figure 17B includes an L1 data cache 1706A, part of the L1 cache 1704, as well as more detail regarding the vector unit 1710 and the vector registers 1714. Specifically, the vector unit 1710 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1728), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1720, numeric conversion with numeric convert units 1722A-B, and replication with replication unit 1724 on the memory input. Write mask registers 1726 allow predicating resulting vector writes.

Figure 18 is a block diagram of a processor 1800 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 18 illustrate a processor 1800 with a single core 1802A, a system agent 1810, and a set of one or more bus controller units 1816, while the optional addition of the dashed lined boxes illustrates an alternative processor 1800 with multiple cores 1802A-N, a set of one or more integrated memory controller unit(s) 1814 in the system agent unit 1810, and special purpose logic 1808.

Thus, different implementations of the processor 1800 may include: 1) a CPU with the special purpose logic 1808 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1802A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1802A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 1802A-N being a large number of
general purpose in-order cores. Thus, the processor 1800 may be a general-purpose processor, coprocessor, or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1800 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1806, and external memory (not shown) coupled to the set of integrated memory controller units 1814. The set of shared cache units 1806 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1812 interconnects the integrated graphics logic 1808 (integrated graphics logic 1808 is an example of and is also referred to herein as special purpose logic), the set of shared cache units 1806, and the system agent unit 1810/integrated memory controller unit(s) 1814, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1806 and cores 1802A-N.

In some embodiments, one or more of the cores 1802A-N are capable of multithreading. The system agent 1810 includes those components coordinating and operating cores 1802A-N. The system agent unit 1810 may include, for example, a power control unit (PCU) and a display unit.
The PCU may be or include logic and components needed for regulating the power state of the cores 1802A-N and the integrated graphics logic 1808. The display unit is for driving one or more externally connected displays.

The cores 1802A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1802A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

Figures 19-22 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to Figure 19, shown is a block diagram of a system 1900 in accordance with one embodiment of the present invention. The system 1900 may include one or more processors 1910, 1915, which are coupled to a controller hub 1920. In one embodiment the controller hub 1920 includes a graphics memory controller hub (GMCH) 1990 and an Input/Output Hub (IOH) 1950 (which may be on separate chips); the GMCH 1990 includes memory and graphics controllers to which are coupled memory 1940 and a coprocessor 1945; the IOH 1950 couples input/output (I/O) devices 1960 to the GMCH 1990.
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1940 and the coprocessor 1945 are coupled directly to the processor 1910, and the controller hub 1920 is in a single chip with the IOH 1950.

The optional nature of additional processors 1915 is denoted in Figure 19 with broken lines. Each processor 1910, 1915 may include one or more of the processing cores described herein and may be some version of the processor 1800.

The memory 1940 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1920 communicates with the processor(s) 1910, 1915 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1995.

In one embodiment, the coprocessor 1945 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1920 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1910, 1915 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1910 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1910 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1945. Accordingly, the processor 1910 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to coprocessor 1945.
Coprocessor(s) 1945 accept and execute the received coprocessor instructions.

Referring now to Figure 20, shown is a block diagram of a first more specific exemplary system 2000 in accordance with an embodiment of the present invention. As shown in Figure 20, multiprocessor system 2000 is a point-to-point interconnect system, and includes a first processor 2070 and a second processor 2080 coupled via a point-to-point interconnect 2050. Each of processors 2070 and 2080 may be some version of the processor 1800. In one embodiment of the invention, processors 2070 and 2080 are respectively processors 1910 and 1915, while coprocessor 2038 is coprocessor 1945. In another embodiment, processors 2070 and 2080 are respectively processor 1910 and coprocessor 1945.

Processors 2070 and 2080 are shown including integrated memory controller (IMC) units 2072 and 2082, respectively. Processor 2070 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 2076 and 2078; similarly, second processor 2080 includes P-P interfaces 2086 and 2088. Processors 2070, 2080 may exchange information via a point-to-point (P-P) interface 2050 using P-P interface circuits 2078, 2088. As shown in Figure 20, IMCs 2072 and 2082 couple the processors to respective memories, namely a memory 2032 and a memory 2034, which may be portions of main memory locally attached to the respective processors.

Processors 2070, 2080 may each exchange information with a chipset 2090 via individual P-P interfaces 2052, 2054 using point to point interface circuits 2076, 2094, 2086, 2098. Chipset 2090 may optionally exchange information with the coprocessor 2038 via a high-performance interface 2092.
In one embodiment, the coprocessor 2038 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 2090 may be coupled to a first bus 2016 via an interface 2096. In one embodiment, first bus 2016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 20, various I/O devices 2014 may be coupled to first bus 2016, along with a bus bridge 2018 which couples first bus 2016 to a second bus 2020. In one embodiment, one or more additional processor(s) 2015, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 2016. In one embodiment, second bus 2020 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 2020 including, for example, a keyboard and/or mouse 2022, communication devices 2027, and a storage unit 2028 such as a disk drive or other mass storage device which may include instructions/code and data 2030, in one embodiment. Further, an audio I/O 2024 may be coupled to the second bus 2020. Note that other architectures are possible.
For example, instead of the point-to-point architecture of Figure 20, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 21, shown is a block diagram of a second more specific exemplary system 2100 in accordance with an embodiment of the present invention. Like elements in Figures 20 and 21 bear like reference numerals, and certain aspects of Figure 20 have been omitted from Figure 21 in order to avoid obscuring other aspects of Figure 21.

Figure 21 illustrates that the processors 2070, 2080 may include integrated memory and I/O control logic ("CL") 2072 and 2082, respectively. Thus, the CL 2072, 2082 include integrated memory controller units and include I/O control logic. Figure 21 illustrates that not only are the memories 2032, 2034 coupled to the CL 2072, 2082, but also that I/O devices 2114 are also coupled to the control logic 2072, 2082. Legacy I/O devices 2115 are coupled to the chipset 2090.

Referring now to Figure 22, shown is a block diagram of a SoC 2200 in accordance with an embodiment of the present invention. Similar elements in Figure 18 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 22, an interconnect unit(s) 2202 is coupled to: an application processor 2210 which includes a set of one or more cores 1802A-N, which include cache units 1804A-N, and shared cache unit(s) 1806; a system agent unit 1810; a bus controller unit(s) 1816; an integrated memory controller unit(s) 1814; a set of one or more coprocessors 2220 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 2230; a direct memory access (DMA) unit 2232; and a display unit 2240 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 2220 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 2030 illustrated in Figure 20, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof.
The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 23 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 23 shows that a program in a high level language 2302 may be compiled using an x86 compiler 2304 to generate x86 binary code 2306 that may be natively executed by a processor with at least one x86 instruction set core 2316. The processor with at least one x86 instruction set core 2316 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 2304 represents a compiler that is operable to generate x86 binary code 2306 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 2316.
Similarly, Figure 23 shows the program in the high level language 2302 may be compiled using an alternative instruction set compiler 2308 to generate alternative instruction set binary code 2310 that may be natively executed by a processor without at least one x86 instruction set core 2314 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 2312 is used to convert the x86 binary code 2306 into code that may be natively executed by the processor without an x86 instruction set core 2314. This converted code is not likely to be the same as the alternative instruction set binary code 2310 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 2312 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 2306.
Methods and apparatus are described for enabling copper-to-copper (Cu-Cu) bonding at reduced temperatures (e.g., at most 200 degrees centigrade) by significantly reducing Cu oxide formation. These techniques provide for faster cycle time and entail no extraordinary measures (e.g., forming gas). Such techniques may also enable longer queue (Q) or staging times. One example semiconductor structure (100) generally includes a semiconductor layer (102), an adhesion layer (104) disposed above the semiconductor layer (102), an anodic metal layer (106) disposed above the adhesion layer (104), and a cathodic metal layer (108) disposed above the anodic metal layer (106). An oxidation potential of the anodic metal layer (106) may be greater than an oxidation potential of the cathodic metal layer (108). Such a semiconductor structure (100) may be utilized in fabricating IC packages (300, 400) implementing 2.5D or 3D integration.
1. A semiconductor structure, characterized in that the semiconductor structure comprises: a semiconductor layer; an adhesion layer disposed above the semiconductor layer; an anode metal layer disposed above the adhesion layer; and a cathode metal layer disposed above the anode metal layer.
2. The semiconductor structure of claim 1, wherein the anode metal layer comprises magnesium (Mg).
3. The semiconductor structure of claim 1, wherein the anode metal layer includes an element selected from the group consisting of aluminum (Al), zinc (Zn), and nickel (Ni).
4. The semiconductor structure of any one of claims 1 to 3, wherein the cathode metal layer comprises copper (Cu).
5. The semiconductor structure of any one of claims 1 to 4, wherein an oxidation potential of the anode metal layer is higher than an oxidation potential of the cathode metal layer.
6. The semiconductor structure of any one of claims 1 to 5, wherein the anode metal layer includes a first metal, the cathode metal layer includes a second metal, and the first metal has a higher oxidation potential than the second metal.
7. The semiconductor structure of claim 6, wherein the first metal has a more negative Gibbs free energy of oxide formation than the second metal.
8. The semiconductor structure of any one of claims 1 to 7, wherein the anode metal layer includes a metal associated with a porous oxide, and wherein an oxidation rate of the porous oxide is a linear function of time.
9. The semiconductor structure of any one of claims 1 to 8, wherein the anode metal layer comprises a metal having an oxide-to-metal volume ratio of less than 1.0.
10. The semiconductor structure of claim 1, wherein the anode metal layer is configured to suppress growth of oxides associated with the cathode metal layer by providing cathodic protection to the cathode metal layer.
11. The semiconductor structure of any one of claims 1 to 10, wherein the adhesion layer includes titanium (Ti), and wherein the semiconductor layer includes silicon (Si).
12. The semiconductor structure of claim 1, wherein the cathode metal layer is disposed directly above the anode metal layer.
13. The semiconductor structure of any one of claims 1 to 12, wherein the cathode metal layer includes one or more pillars.
14. An integrated circuit package, characterized in that the integrated circuit package includes: a package substrate; and a plurality of dies disposed above the package substrate, wherein: at least one of the plurality of dies is electrically coupled to another die of the plurality of dies through a plurality of copper pillar microbumps; at least one of the plurality of dies includes: a cathode metal layer that forms the copper pillar microbumps; an anode metal layer disposed above the cathode metal layer; an adhesion layer disposed above the anode metal layer; and a semiconductor layer disposed over the adhesion layer; and an oxidation potential of the anode metal layer is higher than an oxidation potential of the cathode metal layer.
Interconnect method for high-density 2.5D and 3D integration
Technical field
The disclosed examples relate generally to integrated circuits, and more particularly, to integrated circuit packages using copper-copper (Cu-Cu) bonding.
Background technique
Electronic devices (e.g., computers, laptops, tablets, copiers, digital cameras, smartphones, etc.) typically employ integrated circuits (ICs, also referred to as "chips"). These integrated circuits are typically implemented as semiconductor dies packaged in integrated circuit packages. The semiconductor die may include memory, logic, and/or any of a variety of other suitable circuit types. Many integrated circuits and other semiconductor devices make use of bump arrangements, such as ball grid array (BGA) packages, for surface-mounting a package onto a circuit board, such as a printed circuit board (PCB). Any of a variety of suitable package pin structures, such as controlled collapse chip connection (C4) bumps or microbumps (e.g., as used in stacked silicon interconnect (SSI) applications), can be used to conduct electrical signals between a channel on an integrated circuit (IC) die (or other packaged device) and the circuit board on which the package is mounted.
Summary of the invention
An example disclosed by the present invention is a semiconductor structure. The semiconductor structure generally includes a semiconductor layer, an adhesion layer disposed above the semiconductor layer, an anode metal layer disposed above the adhesion layer, and a cathode metal layer disposed above the anode metal layer.
In some embodiments, the anode metal layer may include magnesium (Mg). In some embodiments, the anode metal layer may include an element selected from the group consisting of aluminum (Al), zinc (Zn), and nickel (Ni). In some embodiments, the cathode metal layer may include copper (Cu). In some embodiments, the oxidation potential of the anode metal layer may be higher than the oxidation potential of the cathode metal layer. In some embodiments, the anode metal layer may include a first metal, the cathode metal layer may include a second metal, and the first metal may have a higher oxidation potential than the second metal. In some embodiments, the first metal has a more negative Gibbs free energy of oxide formation than the second metal. In some embodiments, the anode metal layer may include a metal associated with a porous oxide, and the oxidation rate of the porous oxide may be a linear function of time. In some embodiments, the anode metal layer may include a metal having an oxide-to-metal volume ratio of less than 1.0. In some embodiments, the anode metal layer may be configured to suppress the growth of oxides associated with the cathode metal layer by providing cathodic protection to the cathode metal layer. In some embodiments, the adhesion layer may include titanium (Ti), and the semiconductor layer may include silicon (Si). In some embodiments, the cathode metal layer may be disposed directly over the anode metal layer. In some embodiments, the cathode metal layer includes one or more pillars. Another example disclosed by the present invention is a method of manufacturing a semiconductor structure. The method generally includes: disposing an adhesion layer above a semiconductor layer; disposing an anode metal layer above the adhesion layer; and disposing a cathode metal layer above the anode metal layer. Another example disclosed by the present invention is a method of manufacturing an integrated circuit package.
The method generally includes: providing a semiconductor structure having an adhesion layer disposed above a semiconductor layer, an anode metal layer disposed above the adhesion layer, and a cathode metal layer disposed above the anode metal layer; and bonding the cathode metal layer of the semiconductor structure to a metal layer of another structure at a temperature below 200 °C. In some embodiments, the method further includes forming a plurality of pillars over the cathode metal layer using photolithography and plating. The plurality of pillars may have the same composition as the cathode metal layer. In some embodiments, the method may further include etching the cathode metal layer to remove at least a portion of the cathode metal layer between the plurality of pillars, coating an upper surface of the semiconductor structure, including the plurality of pillars, with a resist, using photolithography to remove at least a portion of the resist between the plurality of pillars such that the anode metal layer is exposed, etching at least a portion of the anode metal layer and the adhesion layer between the plurality of pillars such that the semiconductor layer is exposed, and removing the resist. In some embodiments, the anode metal layer may include magnesium (Mg), and the cathode metal layer may include copper (Cu). In some embodiments, the oxidation potential of the anode metal layer may be higher than the oxidation potential of the cathode metal layer. In some embodiments, the method may further include bonding the cathode metal layer of the semiconductor structure to a metal layer of another structure at a temperature below 200 °C. Yet another example disclosed by the present invention is an integrated circuit package.
The package generally includes a package substrate and a plurality of dies disposed over the package substrate, wherein at least one of the plurality of dies is electrically coupled to another of the plurality of dies through a plurality of copper pillar microbumps; at least one of the plurality of dies includes a cathode metal layer forming the copper pillar microbumps, an anode metal layer disposed above the cathode metal layer, an adhesion layer disposed above the anode metal layer, and a semiconductor layer disposed above the adhesion layer; and the oxidation potential of the anode metal layer is higher than the oxidation potential of the cathode metal layer. These and other aspects can be understood with reference to the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to examples, some of which are illustrated in the accompanying drawings. It should be understood, however, that the drawings show only typical examples of the disclosure and therefore should not be considered limiting of its scope, as the disclosure may admit other equally effective examples.
FIG. 1 is a cross-sectional view of an exemplary semiconductor structure having an anode metal layer in galvanic series with a cathode metal layer, according to an example disclosed in the present invention.
FIG. 2 illustrates operations for forming copper pillars for copper-copper bonding based on the semiconductor structure of FIG. 1, according to an example of the present disclosure.
FIG. 3 is a cross-sectional view of an exemplary 2.5D integrated circuit (IC) package, according to one example of the present disclosure.
FIG.
4 is a cross-sectional view of an exemplary 3D IC package, according to one example of the present disclosure.
FIG. 5 is a flowchart of exemplary operations for manufacturing a semiconductor structure, according to an example disclosed in the present invention.
Detailed Description
The disclosed examples provide techniques and apparatus for Cu-Cu bonding with significantly reduced oxide formation, thereby providing adequate bonding at reduced temperatures (e.g., at most 200 °C) and faster cycle times, without any special requirements for the bonding. The disclosed examples also enable longer queue (Q) or staging times.
Example of cathodic protection for copper-copper bonding
Chip-to-chip (C2C), chip-to-wafer (C2W), and wafer-to-wafer (W2W) bonding technologies rely on interconnect technology that is rugged and robust enough to avoid connection failure when chips and/or wafers are exposed to various stresses (e.g., temperature, strain, torsion, etc.). For decades, copper (Cu) pillars with solder interconnects have been the workhorse of the industry's low-density and high-density designs. However, as density increases and pitch decreases, this Cu pillar technology encounters various problems, such as reduced solder content, brittle intermetallic compounds (IMCs), voids, and low thermal conductivity. Copper-copper (Cu-Cu) bonding is an alternative interconnect that the industry has pursued for many years, but so far there is no truly practical or high-volume manufacturing (HVM) solution. An important challenge for Cu-Cu bonding is the rapid oxide formation on the Cu surface, which inhibits satisfactory interconnection. At present, a temperature of about 400 °C is required for successful bonding. However, this high temperature may melt certain materials (such as polymers). Universities, consortia, and industry have tried various methods for years to achieve low-temperature Cu-Cu bonding, with limited success.
For example, acid immersion bonding, insertion bonding, self-assembled monolayers (SAM), and surface-activated bonding (SAB) have all attempted to address this long-standing need but have so far failed to produce an acceptable solution for HVM. The disclosed examples provide techniques for Cu-Cu metal bonding at a reduced temperature (e.g., at most 200 °C) by significantly reducing Cu oxide formation. These techniques enable faster cycle times and do not require special measures (e.g., forming gas). These techniques also enable longer queue (Q) or staging times. Deriving these techniques involves recognizing that different metals have different oxide-forming behaviors: some metals form passivating oxides, some form porous oxides, and others form very brittle oxides. The Pilling-Bedworth ratio (RPB) represents the volume ratio of oxide to metal. When RPB < 1, the oxide coating cracks and fails to provide protection (for example, magnesium (Mg): RPB = 0.81). When RPB > 2, the oxide coating spalls off and cannot provide protection (for example, iron (Fe): RPB = 2.1). When 1 ≤ RPB ≤ 2, the oxide coating is passivating (for example, aluminum (Al): RPB = 1.28 or titanium (Ti): RPB = 1.73). For Mg, the oxide is porous, so the oxidation rate expression is linear (for example, W = K1t, where W is the weight gain per unit area, K1 is a constant, and t is time). A metal having a non-porous oxide (for example, Cu) can follow parabolic or logarithmic behavior. For example, the parabolic oxidation rate can be expressed as W² = K2t + K3, where K2 and K3 are time-independent constants at a given temperature. For Al or Fe, the oxidation rate is logarithmic near ambient temperature and can be expressed as W = K4 log(K5t + K6), where K4, K5, and K6 are constants. Using the concepts described above, galvanic pairs can be formed between Cu and certain other metals to inhibit Cu oxidation. The ideal case is a Cu/Mg pair, as shown in the example semiconductor structure 100 of FIG. 1.
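The three oxide-growth laws above (linear, parabolic, logarithmic) can be sketched numerically. This is an illustrative sketch only: the K constants below are arbitrary, not material data, and the parabolic law is written in its standard form W² = K·t + C.

```python
import math

# Sketch of the three oxide-growth laws discussed above, with arbitrary
# illustrative constants (the K values are NOT measured material data).

def linear_growth(t, k1=0.5):
    """Porous oxide (e.g., Mg): weight gain per unit area grows linearly, W = K1*t."""
    return k1 * t

def parabolic_growth(t, k2=0.5, k3=0.0):
    """Non-porous oxide: standard parabolic rate law, W^2 = K2*t + K3."""
    return math.sqrt(k2 * t + k3)

def logarithmic_growth(t, k4=1.0, k5=1.0, k6=1.0):
    """Near-ambient oxidation of Al or Fe: W = K4*log(K5*t + K6)."""
    return k4 * math.log(k5 * t + k6)

# Linear growth never self-limits (the metal keeps supplying electrons),
# while parabolic and logarithmic growth slow as the oxide thickens.
rates = [(linear_growth(t), parabolic_growth(t), logarithmic_growth(t))
         for t in (1, 100)]
```

The comparison at t = 1 versus t = 100 shows why a linearly oxidizing anode metal like Mg keeps sacrificing itself while a parabolically or logarithmically oxidizing metal self-passivates.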
Mg forms a porous oxide whose growth rate is linear. The oxidation potential of magnesium (+2.37 V) is higher than that of Cu (−0.34 V), as shown in the following table: Table 1. Therefore, Mg is strongly anodic in the galvanic series with Cu. In addition, the Gibbs free energy of oxide formation for Mg (−569.43 kJ/mol) is more negative than that for Cu (−127 kJ/mol). The integration of Cu and Mg in the interconnect will inhibit, or at least reduce, the growth of Cu oxides because Mg sacrifices itself, thereby providing cathodic protection for Cu. Because Mg oxide is porous and has a linear growth rate, Mg will continue to lose electrons and form oxide without the Cu oxidizing. FIG. 1 is a cross-sectional view of an exemplary semiconductor structure 100, according to one example of the present disclosure. The semiconductor structure 100 may represent a wafer or a single die (e.g., after singulation from a wafer). The semiconductor structure 100 includes a wafer layer 102 (or substrate layer), an adhesion layer 104 disposed above the wafer layer 102, an anode metal layer 106 disposed above the adhesion layer 104, and a cathode metal layer 108 disposed above the anode metal layer 106. The wafer layer 102 may include any suitable semiconductor material, such as silicon (Si). The adhesion layer 104 may include any of a variety of suitable metal materials (e.g., titanium (Ti), tantalum (Ta), or chromium (Cr)) that adhere well to the wafer layer 102. The cathode metal layer 108 may include Cu, so that Cu-Cu bonding may form interconnections between chips and/or wafers. As shown in FIG. 1, the anode metal layer 106 may be composed of Mg. However, the anode metal layer 106 may include any of a variety of other suitable metals as a substitute for Mg.
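The two selection criteria above (galvanic position relative to Cu, and the Pilling-Bedworth classification) can be sketched as simple predicates. The potential values are the ones quoted in the text; the function names are illustrative, not terms from the original.

```python
# Sketch of the anode-selection criteria described above. A metal protects Cu
# cathodically when its oxidation potential is higher (more anodic) than Cu's.
# Values below are those quoted in the text (standard oxidation potentials, V).

OXIDATION_POTENTIAL_V = {"Mg": 2.37, "Cu": -0.34}

def is_sacrificial_anode(anode, cathode, potentials=OXIDATION_POTENTIAL_V):
    """True if 'anode' oxidizes preferentially, cathodically protecting 'cathode'."""
    return potentials[anode] > potentials[cathode]

def pilling_bedworth_class(rpb):
    """Classify oxide behavior from the Pilling-Bedworth ratio (RPB)."""
    if rpb < 1:
        return "cracked (non-protective)"   # e.g., Mg: RPB = 0.81
    if rpb <= 2:
        return "passivating"                # e.g., Al: 1.28, Ti: 1.73
    return "spalling (non-protective)"      # e.g., Fe: RPB = 2.1
```

Mg satisfies both criteria for this application: it is anodic to Cu, and its RPB of 0.81 means its oxide stays non-protective, so it keeps supplying electrons instead of passivating.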
A suitable metal for the anode metal layer 106 has an oxidation potential higher than that of the cathode metal layer 108, making the metal more anodic than Cu and therefore providing cathodic protection when forming a galvanic couple with Cu. For example, the anode metal layer 106 may include Al, zinc (Zn), or nickel (Ni). However, some of these metals do not follow linear oxide growth rates, so their oxidation may become diffusion-controlled over time, limiting the electrons supplied to oxygen (O). FIG. 2 illustrates exemplary operations 200 for forming copper pillars for copper-copper bonding based on the semiconductor structure 100 of FIG. 1, according to an example of the present disclosure. The structure resulting from operations 200 can be used for C2C, C2W, or W2W bonding at a temperature of 200 °C or less. Starting from the wafer layer 102 of Si or another suitable semiconductor layer, the adhesion layer 104, the anode metal layer 106, and the cathode metal layer 108 may be sequentially disposed over the wafer layer 102. The layers 104, 106, and 108 may be disposed over the wafer layer 102 using any of a variety of suitable techniques (e.g., physical vapor deposition (PVD)) to form the semiconductor structure 100. According to a lithographic mask, a plurality of pillars 202 (e.g., copper (Cu) pillars) may be formed over the semiconductor structure 100 in designated areas using photolithography and plating. In this manner, the cathode metal layer 108 may be considered to include the pillars 202. Next, a portion of the cathode metal layer 108 may be removed (e.g., by etching) in the regions 204 between the pillars 202. Therefore, in this process, the cathode metal layer 108 may be considered a seed layer for the electroplated pillars, where a portion of the seed layer is subsequently removed and the remaining portion of the seed layer forms a part of each pillar.
After the seed layer is etched in the regions 204, the upper surface of the structure may be coated with a resist 206. Photolithography may be used to remove a portion of the resist in the desired regions 208 between the pillars 202. Then, a portion of the anode metal layer 106 (and, in some cases, the adhesion layer 104) may be removed, as shown in FIG. 2. The resist 206 may also be removed. The structure resulting from FIG. 2 does not quickly form copper oxide and is therefore suitable for Cu-Cu bonding with another structure (such as a chip or wafer) to form a satisfactory interconnect at a temperature no higher than 200 °C. For some examples, after or during the formation of the pillars 202, sidewalls of an anode metal (e.g., Mg) may be formed on the side surfaces of the pillars and may surround the pillars. These anode sidewalls may have the same height as, or be lower than, the pillars 202. These sidewalls may remain through the remaining operations shown in FIG. 2.
Exemplary Integrated Circuit Packages
An integrated circuit (IC) die (also referred to as a "chip") is typically provided in a package for electrical connection with a circuit board (e.g., a printed circuit board (PCB)). The package protects the integrated circuit die from potential physical damage and corrosion from moisture. The disclosed examples can be used for chip-to-chip (C2C), chip-to-wafer (C2W), or wafer-to-wafer (W2W) bonding to form such IC packages. According to the disclosed examples, Cu-Cu bonding may be performed at temperatures below 200 °C to achieve C2C, C2W, or W2W integration. Many different types of IC dies may benefit from the disclosed examples and be included in IC packages. One exemplary type of IC die is a programmable IC die, such as a field programmable gate array (FPGA) die. FPGAs typically include an array of programmable tiles.
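The pillar-formation flow described across FIG. 2 can be summarized as an ordered sequence of steps. This is a descriptive sketch of the flow only; the step labels are illustrative, not terms from the original.

```python
# Sketch of the pillar-formation flow of FIG. 2 as an ordered step list
# (labels are descriptive, not terminology from the original disclosure).

PILLAR_PROCESS = [
    ("deposit", "adhesion layer (e.g., Ti) over wafer layer"),
    ("deposit", "anode metal layer (e.g., Mg) over adhesion layer"),
    ("deposit", "cathode metal seed layer (Cu) over anode layer"),
    ("pattern", "photolithography + plating to form Cu pillars"),
    ("etch",    "remove seed layer in regions between pillars"),
    ("coat",    "apply resist over pillars and field"),
    ("pattern", "photolithography to open resist between pillars"),
    ("etch",    "remove exposed anode and adhesion layers"),
    ("strip",   "remove resist"),
]

def simulate(steps):
    """Walk the flow, returning the sequence of operation types performed."""
    return [op for op, _ in steps]
```

The two etch steps reflect the two removals in the text: first the seed layer between the pillars, then the exposed anode and adhesion layers.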
These programmable tiles may include, for example, input/output blocks (IOBs), configurable logic blocks (CLBs), dedicated random access memory blocks (BRAMs), multipliers, digital signal processing blocks (DSPs), processors, clock managers, delay locked loops (DLLs), and so on. Another type of programmable IC die is a complex programmable logic device (CPLD) die. A CPLD includes two or more "function blocks" and input/output (I/O) resources connected together by an interconnect switch matrix. Each function block of a CPLD includes a two-level AND/OR structure, similar to the structure used in programmable logic array (PLA) and programmable array logic (PAL) devices. Other programmable ICs are programmed by applying a processing layer, such as a metal layer, that programmably interconnects the various elements on the device. These programmable ICs are known as mask programmable devices. The phrase "programmable IC" may also include devices that are only partially programmable, such as application specific integrated circuits (ASICs). As the demand for small electronic devices with enhanced functionality has increased, IC packaging technology has expanded beyond traditional two-dimensional (2D) structures to increase integration. A conventional 2D structure involves multiple IC dies disposed directly over a substrate (e.g., a system-in-package (SiP) substrate) and on the same plane. However, IC packages with 2.5D and 3D integration have been, and are being, further developed. Examples of 2.5D and 3D integration are provided below. FIG. 3 is a cross-sectional view of an exemplary 2.5D IC package 300 using stacked silicon interconnect (SSI) technology, according to an example disclosed in the present invention. The main difference between 2.5D and traditional 2D IC packages is that 2.5D packages include an interposer with through-silicon vias (TSVs), with the IC dies disposed on the interposer.
For example, the IC package 300 includes a first die 302-1 (labeled "die #1") and a second die 302-2 (labeled "die #2") (collectively, "dies 302"). The dies 302 may include any of a variety of suitable dies, including highly manufacturable FPGA die slices, called super logic regions (SLRs). Although only two dies 302 are shown in FIG. 3 for convenience of explaining the concept, it should be understood that a 2.5D IC package may include more than two dies. Each die 302 may include a chip substrate 304, a device layer 306, and a metal layer 308. The dies 302 may be flip-chip dies as shown in the figure, connected to the interposer 311 through microbumps 310. The microbumps 310 may be implemented as copper pillar microbumps (also referred to as copper pillar bumps or copper pillars), which may be formed similarly to the pillars 202 of FIG. 2. The microbumps 310 achieve a finer pitch than conventional solder bumps. The interconnect formed between the dies 302 and the interposer 311 using the copper pillar microbumps is an example of a Cu-Cu bond with reduced oxide formation, which may benefit from the examples disclosed in the present invention. SSI technology allows different types of dies 302 or silicon processes to be interconnected on the interposer 311. The interposer 311 serves as an interconnect carrier on which the IC dies 302 are arranged side by side and interconnected. For example, the interposer 311 may be a passive silicon interposer. Although only one interposer 311 is shown in FIG. 3, for some examples, IC packages may be implemented with multiple interposers. The interposer 311 may include an interposer substrate 316, a top-side metal layer 312 disposed above the substrate 316, and a bottom-side metal layer 318 disposed below the substrate 316.
For some examples, the interposer 311 may further include a plurality of interconnect lines (not shown), which may provide high-bandwidth, low-latency connections through the interposer. The interposer 311 may also include TSVs 314 to route connections between the dies 302 and a plurality of eutectic bumps 320 (e.g., controlled collapse chip connection (C4) bumps) disposed between the interposer 311 and the package substrate 322. The TSVs 314 can provide connections between the dies 302 and the package substrate 322 for parallel and serial I/O, power/ground, clocks, configuration signals, and so on. The plurality of eutectic bumps 320 electrically connect the interposer 311 to the package substrate 322, and more specifically, to conductive elements on the surface of the package substrate 322 and through-holes in the package substrate 322. The IC package 300 also has a plurality of solder balls 324 disposed under the package substrate 322. For example, the solder balls 324 may be arranged in an array of rows and columns for making electrical contact with a matching arrangement of conductive pads provided on a surface of a circuit board 326 (e.g., a PCB). FIG. 4 is a cross-sectional view of an exemplary 3D IC package 400, according to an example of the present disclosure. 3D IC packaging involves stacking at least one IC die on top of another IC die (e.g., without intermediate components such as interposers or other passive dies), where these active dies can be directly bonded to each other. The lower die may employ TSVs to allow the upper die to communicate with the lower die and the package substrate. For example, the 3D IC package 400 involves a first die 402-1 (labeled "die #1") mounted above a second die 402-2 (labeled "die #2") (collectively, "dies 402"). Although only two dies 402 are shown in FIG. 4, the reader will understand that more than two dies can be stacked.
Further, although the two dies 402 are shown as having the same dimensions, it should be understood that the dies may have different dimensions. For example, die #2 may be wider than die #1, in which case another die (not shown) may be disposed above die #2, on the same plane as die #1. As shown in FIG. 4, die #2 may include a backside metal layer 309 provided on the back surface of the chip substrate 304 for connection with the microbumps 310, so that die #2 may be electrically connected to die #1. Die #2 may also include TSVs 414 so that die #1 may be directly electrically connected to the package substrate 322.
Exemplary operations for manufacturing packages
FIG. 5 is a flowchart of exemplary operations 500 for manufacturing a semiconductor structure and/or a package (e.g., an IC package as described above) including the semiconductor structure, according to an example disclosed in the present invention. For example, at least a portion of the operations 500 may be performed by a system for manufacturing the semiconductor structure, which may include a semiconductor processing chamber. The operations 500 begin at block 502 by disposing an adhesion layer over a semiconductor layer. At block 504, an anode metal layer may be disposed over the adhesion layer. At block 506, a cathode metal layer may be disposed over the anode metal layer. According to some examples, at least one of disposing the adhesion layer at block 502, disposing the anode metal layer at block 504, or disposing the cathode metal layer at block 506 involves using physical vapor deposition (PVD). According to some examples, the operations 500 further involve forming a plurality of pillars over the cathode metal layer using photolithography and plating. The plurality of pillars may have the same composition as the cathode metal layer.
For some examples, the operations 500 further involve etching the cathode metal layer to remove at least a portion of the cathode metal layer between the plurality of pillars. For some examples, the operations 500 further include coating an upper surface of the semiconductor structure, including the plurality of pillars, with a resist. For some examples, the operations 500 further involve using photolithography to remove at least a portion of the resist between the plurality of pillars to expose the anode metal layer. For some examples, the operations 500 further involve etching at least a portion of the anode metal layer and the adhesion layer between the plurality of pillars such that the semiconductor layer is exposed, and removing the resist. According to some examples, the anode metal layer includes magnesium (Mg). According to some examples, the anode metal layer includes an element selected from the group consisting of aluminum (Al), zinc (Zn), and nickel (Ni). According to some examples, the cathode metal layer includes copper (Cu). According to some examples, the oxidation potential of the anode metal layer is higher than the oxidation potential of the cathode metal layer. According to some examples, the anode metal layer includes a metal associated with a porous oxide.
In this case, the oxidation rate of the porous oxide may be a linear function of time. According to some examples, the anode metal layer includes a metal having an oxide-to-metal volume ratio of less than 1.0. According to some examples, the anode metal layer is configured to suppress the growth of oxides associated with the cathode metal layer by providing cathodic protection to the cathode metal layer. According to some examples, the operations 500 further include, at optional block 508, bonding the cathode metal layer of the semiconductor structure to a metal layer of another structure at a temperature below 200 °C. The disclosed examples provide techniques for Cu-Cu bonding with reduced oxide formation, thereby providing adequate bonding at reduced temperatures and faster cycle times, without special requirements for the bonding. The disclosed examples also enable longer queue (Q) or staging times. As used herein (including in the claims that follow), a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of x, y, and z" is intended to cover: x, y, z, x-y, x-z, y-z, x-y-z, and any combination thereof (e.g., x-y-y and x-x-y-z). While the foregoing is directed to the examples disclosed by the present invention, other and further examples of the present disclosure may be devised without departing from the basic scope thereof, which is determined by the claims that follow.
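The "at least one of" construction above covers every non-empty combination of the listed items. Ignoring repeats, the distinct coverage can be enumerated as the non-empty subsets; this is an illustrative sketch of that claim-interpretation rule, not language from the original.

```python
from itertools import combinations

# Sketch of the "at least one of x, y, and z" coverage described above:
# every distinct non-empty subset of the listed items (repeats such as
# x-y-y add no new distinct members and are omitted here).

def at_least_one_of(items):
    """All non-empty subsets of the item list, as sets."""
    return [set(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

subsets = at_least_one_of(["x", "y", "z"])
```

For three items this yields seven subsets, matching the enumeration in the text: x, y, z, x-y, x-z, y-z, and x-y-z.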
An apparatus is described. The apparatus can include non-volatile memory, an embedded processor, and a memory controller. The memory controller can access data from the byte addressable non-volatile memory using at least one of: a first addressing scheme or a second addressing scheme. The memory controller can provide the data to a host system over a first interface when the data is accessed using the first addressing scheme. The memory controller can provide the data to the embedded processor over a second interface when the data is accessed using the second addressing scheme.
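The dual-addressing idea in the abstract above can be sketched as a controller serving one byte-addressable medium through two interfaces: whole blocks (LBA) toward the host, and byte-granular reads toward the embedded processor. The class name, method names, and the 512-byte block size are illustrative assumptions, not details from the original.

```python
# Minimal sketch of the dual-addressing memory controller described above.
# BLOCK_SIZE and all names are illustrative assumptions.

BLOCK_SIZE = 512

class MemoryController:
    def __init__(self, capacity_bytes):
        # Stand-in for byte addressable non-volatile memory.
        self.media = bytearray(capacity_bytes)

    def read_block(self, lba):
        """First addressing scheme (LBA): whole blocks over the host interface."""
        start = lba * BLOCK_SIZE
        return bytes(self.media[start:start + BLOCK_SIZE])

    def write_block(self, lba, data):
        assert len(data) == BLOCK_SIZE
        self.media[lba * BLOCK_SIZE:(lba + 1) * BLOCK_SIZE] = data

    def read_bytes(self, addr, length):
        """Second addressing scheme: byte-granular access for the embedded
        processor, e.g., localized searching without host data transfers."""
        return bytes(self.media[addr:addr + length])

ctrl = MemoryController(4 * BLOCK_SIZE)
ctrl.write_block(1, bytes([7]) * BLOCK_SIZE)
# The same data is reachable via either scheme.
assert ctrl.read_bytes(1 * BLOCK_SIZE, 4) == ctrl.read_block(1)[:4]
```

The final assertion illustrates the mapping between the schemes: byte address = LBA × block size, so data written through one interface is accessible through the other.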
CLAIMS
What is claimed is:
1. An apparatus, comprising: a byte addressable non-volatile memory; an embedded processor; and a memory controller comprising logic to: access data from the byte addressable non-volatile memory using at least one of: a first addressing scheme or a second addressing scheme; provide the data to a host system over a first interface when the data is accessed using the first addressing scheme; and provide the data to the embedded processor over a second interface when the data is accessed using the second addressing scheme.
2. The apparatus of claim 1, wherein the first addressing scheme is a logical block addressing (LBA) scheme and the first interface is a block storage interface.
3. The apparatus of claim 1, wherein the second addressing scheme is a memory addressing scheme and the second interface is a memory mode interface, wherein the memory addressing scheme includes a byte addressing scheme.
4. The apparatus of claim 1, wherein the memory controller further comprises logic to access same data from the byte addressable non-volatile memory using the first addressing scheme or the second addressing scheme.
5. The apparatus of claim 1, wherein the memory controller further comprises logic to provide the data over the second interface to the embedded processor to enable the embedded processor to perform at least one of: localized searching, localized error correction, expression pattern matching, or data copying on the data without transferring the data to the host system.
6. The apparatus of claim 1, wherein the memory controller further comprises logic to maintain an address map to write data to the byte addressable non-volatile memory using the first addressing scheme and access the data from the byte addressable non-volatile memory using the second addressing scheme.
7.
The apparatus of claim 1, wherein the memory controller further comprises logic to maintain an address mapping to store data to the byte addressable non-volatile memory using the second addressing scheme and read the data from the byteaddressable non-volatile memory using the first addressing scheme.8. The apparatus of claim 1, wherein the second addressing scheme enables the memory controller to access the data from the byte addressable non-volatile memory at a more granular level as compared to the first addressing scheme.9. The apparatus of claim 1, wherein the second addressing scheme enables the memory controller to access the data from the byte addressable non-volatile memory in a reduced period of time as compared to the first addressing scheme.10. The apparatus of claim 1, wherein the second addressing scheme enables the memory controller to access the data from the byte addressable non-volatile memory with a reduced level of power consumption as compared to the first addressing scheme.11. The apparatus of claim 1, wherein the memory controller further comprises logic to perform media management in the byte addressable non-volatile memory and the media management is abstracted from the data that is accessed by the memory controller from the byte addressable non-volatile memory using the second addressing scheme.12. The apparatus of claim 1, wherein the apparatus is a solid state drive (SSD).13. A computing system, comprising:a host system; anda memory device comprising:a memory controller;an embedded processor; and non-volatile memory comprising:data that is accessible to the host system via the memory controller over a first interface that uses a first addressing scheme; anddata that is accessible to the embedded processor via the memory controller over a second interface that uses a second addressing scheme.14. 
The computing system of claim 13, wherein the first addressing scheme is a logical block addressing (LBA) scheme and the first interface is a block storage interface.15. The computing system of claim 13, wherein the second addressing scheme is a memory addressing scheme and the second interface is a memory mode interface, wherein the memory addressing scheme includes a byte addressing scheme.16. The computing system of claim 13, wherein the embedded processor comprises logic to:access the data from the non-volatile memory via the memory controller over the second interface; andperform localized searching or localized error correction on the data without transferring the data to the host system.17. The computing system of claim 13, wherein the data that is accessible to the host system via the memory controller further traverses the embedded processor enroute to the host system.18. The computing system of claim 13, wherein the memory controller further comprises logic to access same data from the non-volatile memory using the first addressing scheme or the second addressing scheme.19. The computing system of claim 13, wherein the memory device is a solid-state drive (SSD).20. The computing system of claim 13, further comprising one or more of:a display communicatively coupled to the host system;a network interface communicatively coupled to the host system; ora battery coupled to the host system.21. 
A method for accessing data from non-volatile memory, the method comprising: receiving, at a memory controller of a memory device, a first command to access data from a non-volatile memory of the memory device, wherein the first command is received from a host system;accessing, at the memory controller of the memory device, the data from the non-volatile memory using a first addressing scheme, wherein the data is provided from the memory controller to the host system over a first interface in response to the first command;receiving, at the memory controller of the memory device, a second command to access data from the non-volatile memory of the memory device, wherein the second command is received from an embedded processor; andaccessing, at the memory controller of the memory device, the data from the non-volatile memory using a second addressing scheme, wherein the data is provided from the memory controller to the embedded processor over a second interface in response to the second command.22. The method of claim 21, wherein the first addressing scheme is a logical block addressing (LBA) scheme and the first interface is a block storage interface.23. The method of claim 21, wherein the second addressing scheme is a memory addressing scheme and the second interface is a memory mode interface, wherein the memory addressing scheme includes a byte addressing scheme.24. The method of claim 21, further comprising providing the data over the second interface to the embedded processor to enable the embedded processor to perform localized searching or localized error correction on the data without transferring the data to the host system.25. The method of claim 21, further comprising maintaining an address map to write data to the non-volatile memory using the first addressing scheme and access the data from the non-volatile memory using the second addressing scheme.26. 
The method of claim 21, further comprising maintaining an address map to store data to the non-volatile memory using the second addressing scheme and read the data from the non-volatile memory using the first addressing scheme.
METHOD AND APPARATUS TO PROVIDE BOTH STORAGE MODE AND MEMORY MODE ACCESS TO NON-VOLATILE MEMORY WITHIN A SOLID STATE DRIVE

BACKGROUND

[0001] Hard Disk Drives (HDDs) are often used in computer systems for persistent data storage. The data in an HDD is stored on rotating magnetic media and accessed on a block basis. For example, a standard HDD data block size may be a 512 byte sector. Because of the block-based nature of HDDs, related interfaces, storage software, operating systems, and other software are written and designed to allow or employ a block-based access technique or scheme.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] Features and advantages of embodiments will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, which together illustrate, by way of example, embodiment features; and, wherein:

[0003] FIG. 1 illustrates a solid state drive (SSD) with a memory controller that is operable to access non-volatile memory via a block storage mode or a memory mode in accordance with an example embodiment;

[0004] FIG. 2 illustrates an SSD with a memory controller that is operable to access non-volatile memory via a memory mode in accordance with an example embodiment;

[0005] FIG. 3 is a diagram of an apparatus in accordance with an example embodiment;

[0006] FIG. 4 is a diagram of a computing system in accordance with an example embodiment;

[0007] FIG. 5 depicts a flowchart of a method for accessing data from non-volatile memory in accordance with an example embodiment; and

[0008] FIG. 6 illustrates a computing system that includes a data storage device in accordance with an example embodiment.

[0009] Reference will now be made to the exemplary embodiments illustrated, and specific language will be used herein to describe the same.
It will nevertheless be understood that no limitation on disclosure scope is thereby intended.

DESCRIPTION OF EMBODIMENTS

[0010] Before the disclosed embodiments are described, it is to be understood that this disclosure is not limited to the particular structures, process steps, or materials disclosed herein, but is extended to equivalents thereof as would be recognized by those ordinarily skilled in the relevant arts. It should also be understood that terminology employed herein is used for the purpose of describing particular examples or embodiments only and is not intended to be limiting. The same reference numerals in different drawings represent the same element. Numbers provided in flow charts and processes are provided for clarity in illustrating steps and operations and do not necessarily indicate a particular order or sequence.

[0011] Furthermore, the described features, structures, or characteristics can be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of layouts, distances, network examples, etc., to provide a thorough understanding of various embodiments. One skilled in the relevant art will recognize, however, that such detailed embodiments do not limit the overall concepts articulated herein, but are merely representative thereof.

[0012] As used in this written description, the singular forms "a," "an" and "the" include express support for plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a bit line" includes support for a plurality of such bit lines.

[0013] Reference throughout this specification to "an example" means that a particular feature, structure, or characteristic described in connection with the example is included in at least one embodiment.
Thus, appearances of the phrases "in an example" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.

[0014] As used herein, a plurality of items, structural elements, compositional elements, and/or materials can be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such list should be construed as a de facto equivalent of any other member of the same list solely based on their presentation in a common group without indications to the contrary. In addition, various embodiments and examples can be referred to herein along with alternatives for the various components thereof. It is understood that such embodiments, examples, and alternatives are not to be construed as de facto equivalents of one another, but are to be considered as separate and autonomous representations under the present disclosure.

[0015] Furthermore, the described features, structures, or characteristics can be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of layouts, distances, network examples, etc., to provide a thorough understanding of various embodiments. One skilled in the relevant art will recognize, however, that the technology can be practiced without one or more of the specific details, or with other methods, components, layouts, etc. In other instances, well-known structures, materials, or operations may not be shown or described in detail to avoid obscuring aspects of the disclosure.

[0016] As used herein, "comprises," "comprising," "containing" and "having" and the like can have the meaning ascribed to them in U.S. Patent law and can mean, "includes," "including," and the like, and are generally interpreted to be open ended terms.
The terms "consisting of" or "consists of" are closed terms, and include only the components, structures, steps, or the like specifically listed in conjunction with such terms, as well as that which is in accordance with U.S. Patent law. "Consisting essentially of" or "consists essentially of" have the meaning generally ascribed to them by U.S. Patent law. In particular, such terms are generally closed terms, with the exception of allowing inclusion of additional items, materials, components, steps, or elements, that do not materially affect the basic and novel characteristics or function of the item(s) used in connection therewith. For example, trace elements present in a composition, but not affecting the composition's nature or characteristics would be permissible if present under the "consisting essentially of" language, even though not expressly recited in a list of items following such terminology. When using an open ended term in this written description, like "comprising" or "including," it is understood that express support should be afforded also to "consisting essentially of" language as well as "consisting of" language as if stated explicitly and vice versa.

[0017] The terms "first," "second," "third," "fourth," and the like in the description and in the claims, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that any terms so used are interchangeable under appropriate circumstances such that the embodiments described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
Similarly, if a method is described herein as comprising a series of steps, the order of such steps as presented herein is not necessarily the only order in which such steps may be performed, and certain of the stated steps may possibly be omitted and/or certain other steps not described herein may possibly be added to the method.

[0018] As used herein, comparative terms such as "increased," "decreased," "better," "worse," "higher," "lower," "enhanced," and the like refer to a property of a device, component, or activity that is measurably different from other devices, components, or activities in a surrounding or adjacent area, in a single device or in multiple comparable devices, in a group or class, in multiple groups or classes, or as compared to the known state of the art. For example, a data region that has an "increased" risk of corruption can refer to a region of a memory device which is more likely to have write errors to it than other regions in the same memory device. A number of factors can cause such increased risk, including location, fabrication process, number of program pulses applied to the region, etc.

[0019] As used herein, the term "substantially" refers to the complete or nearly complete extent or degree of an action, characteristic, property, state, structure, item, or result. For example, an object that is "substantially" enclosed would mean that the object is either completely enclosed or nearly completely enclosed. The exact allowable degree of deviation from absolute completeness may in some cases depend on the specific context. However, generally speaking the nearness of completion will be so as to have the same overall result as if absolute and total completion were obtained. The use of "substantially" is equally applicable when used in a negative connotation to refer to the complete or near complete lack of an action, characteristic, property, state, structure, item, or result.
For example, a composition that is "substantially free of" particles would either completely lack particles, or so nearly completely lack particles that the effect would be the same as if it completely lacked particles. In other words, a composition that is "substantially free of" an ingredient or element may still actually contain such item as long as there is no measurable effect thereof.

[0020] As used herein, the term "about" is used to provide flexibility to a numerical range endpoint by providing that a given value may be "a little above" or "a little below" the endpoint. However, it is to be understood that even when the term "about" is used in the present specification in connection with a specific numerical value, that support for the exact numerical value recited apart from the "about" terminology is also provided.

[0021] Numerical amounts and data may be expressed or presented herein in a range format. It is to be understood that such a range format is used merely for convenience and brevity and thus should be interpreted flexibly to include not only the numerical values explicitly recited as the limits of the range, but also to include all the individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range is explicitly recited. As an illustration, a numerical range of "about 1 to about 5" should be interpreted to include not only the explicitly recited values of about 1 to about 5, but also include individual values and sub-ranges within the indicated range. Thus, included in this numerical range are individual values such as 2, 3, and 4 and sub-ranges such as from 1-3, from 2-4, and from 3-5, etc., as well as 1, 1.5, 2, 2.3, 3, 3.8, 4, 4.6, 5, and 5.1 individually.

[0022] This same principle applies to ranges reciting only one numerical value as a minimum or a maximum.
Furthermore, such an interpretation should apply regardless of the breadth of the range or the characteristics being described.

[0023] An initial overview of technology embodiments is provided below and then specific technology embodiments are described in further detail later. This initial summary is intended to aid readers in understanding the technology embodiments more quickly, but is not intended to identify key or essential technological features nor is it intended to limit the scope of the claimed subject matter. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.

[0024] Solid state drives (SSDs) use non-volatile memory to persistently store data. SSDs can use host interfaces compatible with traditional block input/output (I/O) hard disk drives. In other words, non-volatile memory on the SSDs can be accessed as block storage, in which blocks of data are programmed (or written), read, or erased from the non-volatile memory. A host system can access data in the non-volatile memory in the solid state drive (SSD) via a host interface. The host interface can establish an interface for accessing the data stored in the non-volatile memory. The host interface can be configured to utilize any suitable communication protocol to facilitate communications with the non-volatile memory depending on a type of SSD. For example, the host interface can be configured to communicate with the host system using Serial Advanced Technology Attachment (SATA), Peripheral Component Interconnect express (PCIe), Serial Attached SCSI (SAS), Universal Serial Bus (USB), and/or other communication protocol and/or technology.

[0025] A request to access data stored in the non-volatile memory can use a logical block addressing (LBA) scheme, which can be used to determine the physical location of the blocks of data stored in the non-volatile memory.
For example, each block of data can be a 512 byte sector of data. Block storage access to the non-volatile memory can be compatible with operating systems (OS) and applications.

[0026] In one example, SSDs can include various types of non-volatile memory (NVM), such as phase change memory (PCM), three dimensional (3D) crosspoint memory, resistive memory, nanowire memory, ferro-electric transistor random access memory (FeTRAM), flash memory such as NAND and NOR memory, magnetoresistive random access memory (MRAM) that incorporates memristor technology, spin transfer torque (STT)-MRAM, and/or write in place non-volatile MRAM (NVMRAM).

[0027] Storage devices such as hard disk drives (HDDs) and solid state drives (SSDs) are accessed at a block level. As a result, operating systems are generally designed to access storage devices as block addressable devices. Dynamic random-access memory (DRAM) and other types of volatile memory are typically accessed at a byte level. Byte addressable write-in-place non-volatile memory, such as 3D crosspoint memory, can be both block addressable and byte addressable. Such non-volatile memory can be compatible with a memory addressing scheme (such as a byte addressing scheme), in which individual bytes of data can be stored or accessed from the non-volatile memory. The ability to access the data in non-volatile memory in terms of bytes (as opposed to blocks of data) can be referred to as a "memory mode".

[0028] In some embodiments, data stored in the non-volatile memory (e.g., byte addressable write-in-place non-volatile memory) can be accessed by a non-volatile memory controller via a block storage mode or a memory mode. The block storage mode is in line with the traditional mechanism used by a non-volatile memory controller in the SSD to access data stored on the non-volatile memory.
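To make the two access granularities concrete, the sketch below contrasts block-level access under an LBA scheme with byte-level access under a memory addressing scheme. The 512 byte sector size follows the example above; the function names and the use of a `bytearray` standing in for the non-volatile media are illustrative assumptions, not taken from any particular controller.

```python
SECTOR_SIZE = 512  # block size from the 512 byte sector example above

def lba_to_byte_offset(lba: int) -> int:
    """Map a logical block address to the byte offset of that block."""
    return lba * SECTOR_SIZE

def block_read(media: bytearray, lba: int) -> bytes:
    """Block storage mode: the smallest addressable unit is a whole sector."""
    start = lba_to_byte_offset(lba)
    return bytes(media[start:start + SECTOR_SIZE])

def byte_read(media: bytearray, addr: int, length: int = 1) -> bytes:
    """Memory mode: individual bytes can be addressed directly."""
    return bytes(media[addr:addr + length])
```

Under this sketch, reading one byte in block storage mode forces a full 512 byte transfer, whereas memory mode can return exactly the bytes requested.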
In the block storage mode, the memory controller can use a block addressing scheme (or block storage semantics), such as an LBA scheme, to access the data stored in the non-volatile memory via the block host interface. On the other hand, in the memory mode, the memory controller can use a memory addressing scheme (or memory address semantics) to access the data stored in the non-volatile memory. In one specific example, the memory addressing scheme can be a byte addressable scheme.

[0029] In some embodiments, with respect to the block storage mode that uses the block addressing scheme, the data can be accessible to an external host system via a block storage interface between the memory controller and the external host system. In other words, the host system can be external to the SSD. With respect to the memory mode that uses the memory addressing scheme, the data can be accessible from non-volatile memory internally to an embedded processor in the SSD via a memory mode interface between the memory controller and the embedded processor. The embedded processor can access the data in the non-volatile memory using load/store operations, which can reduce access time to the data as compared to the block storage interface used by the external host system. Therefore, the memory controller can access the same data on the same media (i.e., non-volatile memory (NVM) of the SSD) when operating in either the block storage mode or the memory mode.

[0030] In one example, the capability of the SSD to support both the block storage mode and the memory mode permits the host system to access data stored in the non-volatile memory using the traditional block storage interface, while the same data is also accessible to the embedded processor within the SSD over the memory mode interface. The incorporation of the memory mode interface in the SSD, while preserving the traditional block storage interface, avoids changes to the host system, operating system (OS), applications, etc.
that use the traditional block storage interface to access data stored in the non-volatile memory of the SSD. In addition, the incorporation of the memory mode interface enables the embedded processor to access the same data that is available to the host system via the block storage interface.

[0031] FIG. 1 illustrates an example of a solid state drive (SSD) 100 with a memory controller 110 that is operable to access data stored on non-volatile memory 120 via a block storage mode or a memory mode. In the block storage mode, the memory controller 110 can access data in the non-volatile memory 120 using a block addressing scheme (block storage semantics), such as a logical block addressing (LBA) scheme. The memory controller 110 can access the data in the non-volatile memory 120 over a media interface between the memory controller 110 and the non-volatile memory 120. The memory controller 110 can provide the data to a host system 130 via a block storage interface using a standard block storage communication protocol between the memory controller 110 and the host system 130. The host system 130 can be external to the SSD 100. The host system 130 can include processor(s) 132, memory 134 and a storage interface 136. In the memory mode, the memory controller 110 can access data in the non-volatile memory 120 using a memory addressing scheme (memory address semantics), such as a byte addressing scheme, and provide the data to an embedded processor 140 in the SSD 100 via a memory mode interface between the memory controller 110 and the embedded processor 140. Therefore, the memory controller 110 can access the same data in the non-volatile memory using the block storage mode or the memory mode.

[0032] In one example, the memory controller 110 can perform read and write operations with data stored in the non-volatile memory 120 based on commands received from the host system 130 via the block storage interface.
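A minimal sketch of this dual-mode arrangement is given below, assuming a single backing store for the media and illustrative method names (`read_block`/`write_block` for the block storage interface, `load`/`store` for the memory mode interface); it is a simplification for exposition, not the actual controller design.

```python
SECTOR_SIZE = 512  # assumed block size

class MemoryController:
    """Sketch of a controller exposing the same media through both modes."""

    def __init__(self, media: bytearray):
        self.media = media  # stands in for the non-volatile memory

    # Block storage mode: used over the block storage interface to the host.
    def read_block(self, lba: int) -> bytes:
        start = lba * SECTOR_SIZE
        return bytes(self.media[start:start + SECTOR_SIZE])

    def write_block(self, lba: int, block: bytes) -> None:
        assert len(block) == SECTOR_SIZE
        start = lba * SECTOR_SIZE
        self.media[start:start + SECTOR_SIZE] = block

    # Memory mode: used over the memory mode interface to the embedded processor.
    def load(self, addr: int, length: int) -> bytes:
        return bytes(self.media[addr:addr + length])

    def store(self, addr: int, data: bytes) -> None:
        self.media[addr:addr + len(data)] = data
```

Because both interfaces operate on the same backing media, a block written by the host is immediately visible to a memory-mode load, illustrating the "same data on the same media" point above.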
For example, the memory controller 110 can read data in the non-volatile memory 120 using the LBA scheme and provide the data to the host system 130 via the block storage interface. In another example, the memory controller 110 can write data to the non-volatile memory 120 using the LBA scheme based on commands received from the host system 130 via the block storage interface.

[0033] In one example, the memory controller 110 can store or access data in the non-volatile memory 120. For example, the data can be stored or accessed based on instructions stored in memory in the SSD that is to be executed by the embedded processor 140. The memory controller 110 can access data in the non-volatile memory 120 using the memory addressing scheme, and the data can be provided from the memory controller 110 to the embedded processor 140 via the memory mode interface. In another example, the memory controller 110 can store data in the non-volatile memory 120 using the memory addressing scheme.

[0034] As previously described, the host system 130 can access data stored in the non-volatile memory 120 over the block storage interface via the memory controller 110. The block storage interface is a standard block storage communication protocol that is used by the host system 130 to access blocks of data stored in the non-volatile memory 120. The SSD 100 can use the block storage interface to avoid changes to the host system 130, operating system (OS), applications, etc. that use the traditional block storage interface to access the data in the non-volatile memory 120. In addition, the same data in the non-volatile memory 120 can be accessed internally by the embedded processor 140 in the SSD 100. The embedded processor 140 can access the data via the memory controller 110 over the memory mode interface.
In addition, the embedded processor 140 can access the data via the memory controller 110 over the block storage interface.

[0035] In one example, the embedded processor 140 can internally perform various functionalities with the data accessed from the non-volatile memory 120. For example, the embedded processor 140 can perform localized search and/or replace functions on data stored in the non-volatile memory 120. As another example, the embedded processor 140 can perform localized data scrubbing or error correction on the data stored in the non-volatile memory 120. In addition, the embedded processor 140 can perform regular expression pattern matching and data copying. Such functionalities can be performed with reduced execution times and reduced power levels since the functions are performed with the data stored internally in the SSD 100, as opposed to the functions being performed only after the data is transferred to the host system 130.

[0036] In one example, the memory controller 110 can function to perform media management of the non-volatile memory 120. For example, the memory controller 110 can correct errors on read operations to the non-volatile memory 120, move data on write operations to the non-volatile memory 120, and refresh data on read operations, on write operations, and over a period of time. Another example of media management can include wear leveling, which can prolong a service life of the non-volatile memory 120.
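Of the media management tasks listed above, wear leveling can be sketched as follows. The write-count policy, the method names, and the omission of the physical data copy that a real remap would entail are all simplifying assumptions for illustration.

```python
class WearLeveler:
    """Minimal wear-leveling sketch: spread writes across physical blocks
    by remapping a frequently written logical block onto the least-worn
    physical block. (A real controller would also copy the block data.)"""

    def __init__(self, num_blocks: int):
        self.logical_to_physical = list(range(num_blocks))
        self.write_counts = [0] * num_blocks  # wear per physical block

    def physical_block(self, logical: int) -> int:
        return self.logical_to_physical[logical]

    def record_write(self, logical: int) -> None:
        self.write_counts[self.logical_to_physical[logical]] += 1

    def rebalance(self, logical: int) -> None:
        """Remap a hot logical block onto the least-worn physical block."""
        hot = self.logical_to_physical[logical]
        cold = min(range(len(self.write_counts)),
                   key=self.write_counts.__getitem__)
        if self.write_counts[hot] > self.write_counts[cold]:
            other = self.logical_to_physical.index(cold)
            self.logical_to_physical[logical] = cold
            self.logical_to_physical[other] = hot
```

Because this bookkeeping lives entirely in the controller, the remapping stays abstracted from both the host system and the embedded processor, in line with the architectural separation described below.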
In addition, the media management performed by the memory controller 110 can be architecturally separated from the embedded processor 140, such that data accessed from the non-volatile memory 120 by the embedded processor 140 does not interfere with the media management performed at the memory controller 110.

[0037] As previously described, the memory controller 110 can provide the embedded processor 140 with access to data in the non-volatile memory 120 through the memory mode interface, which is separate from the block storage interface to the host system 130. In one example, the memory mode interface can enable the embedded processor 140 to access the data from the non-volatile memory 120 at a more granular level as compared to the block storage interface. For example, while the block storage interface may enable the host system 130 to access 512 byte blocks of data from the non-volatile memory 120, the memory mode interface can enable the embedded processor 140 to access a smaller size of data, such as a 128 byte block of data. In another example, the memory mode interface can enable the embedded processor 140 to access the data from the non-volatile memory 120 in a reduced period of time as compared to the block storage interface. For example, data in the non-volatile memory 120 can be accessed via the memory mode interface using efficient load and store operations, which can be completed in hundreds of nanoseconds, as opposed to slower read and write block storage operations which can take five or more microseconds.

[0038] In one configuration, the host system 130 can read and write data to the non-volatile memory 120 using the block addressing scheme, and that same data can be manipulated by the embedded processor 140 using the memory addressing scheme.
The memory controller 110 can maintain an address map (or other type of mathematical relationship or mathematical mapping) between the two address spaces, and the address map can enable the memory controller 110 to switch between the block storage mode and the memory mode when performing data operations. As an example, the memory controller 110 can use the address map when writing data to the non-volatile memory 120 using the block addressing scheme, and then access the data from the non-volatile memory 120 using the memory addressing scheme. As another example, the memory controller 110 can use the address map when storing data to the non-volatile memory 120 using the memory addressing scheme, and then read the data from the non-volatile memory 120 using the block addressing scheme. In this example, the embedded processor 140 can retrieve data from the non-volatile memory 120, generate modified data, and store the modified data in the non-volatile memory 120, and the modified data can be accessible to the host system 130.

[0039] FIG. 2 illustrates an example of a solid state drive (SSD) 200 with a memory controller 210 that is operable to access non-volatile memory 220 via a memory mode. In the memory mode, the memory controller 210 can access data stored in the non-volatile memory 220 using a memory addressing scheme (or memory semantics). One example of the memory addressing scheme is a byte addressing scheme. The memory controller 210 can access the data in the non-volatile memory 220 over a media interface between the memory controller 210 and the non-volatile memory 220. The memory controller 210 can provide the data to an embedded processor 240 in the SSD 200 via a memory mode interface between the memory controller 210 and the embedded processor 240.
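The address map between the two address spaces, described above in connection with the memory controller 110, might be sketched as below. A direct linear relationship and a 512 byte sector are assumed purely for illustration, since the description notes that any mathematical mapping could serve.

```python
SECTOR_SIZE = 512  # assumed sector size, per the block storage examples

class AddressMap:
    """Illustrative linear mapping between the block (LBA) address space
    and the memory (byte) address space; a real controller could instead
    use a lookup table or any other mathematical relationship."""

    def block_to_memory(self, lba: int, offset_in_block: int = 0) -> int:
        """Locate a byte within a logical block in the memory address space."""
        return lba * SECTOR_SIZE + offset_in_block

    def memory_to_block(self, addr: int) -> tuple:
        """Locate a byte address as (lba, offset within that block)."""
        return divmod(addr, SECTOR_SIZE)
```

Under this sketch, data the host writes at LBA 3 is reachable by the embedded processor at byte address 1536, and a byte the embedded processor stores at address 2000 lands in block 3 at offset 464, so modified data remains visible to the host.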
The embedded processor 240 can internally perform various functionalities with the data accessed from the non-volatile memory 220, such as local search operations and/or local data scrubbing (or error correction) operations.

[0040] In one example, a host system 230 external to the SSD 200 can also access the data stored in the non-volatile memory 220 of the SSD 200. The host system 230 can provide read/write commands to the embedded processor 240 over a block storage interface between the host system 230 and the embedded processor 240. The host system 230 can provide the read/write commands using a block addressing scheme, such as a logical block addressing (LBA) scheme. The embedded processor 240 can translate the read/write commands using the block addressing scheme to load/store commands using the memory addressing scheme. The embedded processor 240 can translate the commands based on known relationships between the block addressing scheme and the memory addressing scheme. At this point, the embedded processor 240 can store/access data from the non-volatile memory 220 via the memory mode interface between the embedded processor 240 and the memory controller 210. When the host system 230 provides a command to access data, the embedded processor 240 can access the data from the non-volatile memory 220 and provide the data to the host system 230. Therefore, in this example, data transfers between the host system 230 and the non-volatile memory 220 can traverse the embedded processor 240.

[0041] As shown in FIG. 2, the memory controller 210 can access data from the non-volatile memory using the memory addressing scheme (as opposed to the block addressing scheme). The host system 230 can continue to use the block addressing scheme. The embedded processor 240 can serve as an intermediary between the host system 230 and the memory controller 210.
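The command translation performed by the embedded processor 240 might be sketched as follows. The `load` and `store` callables stand in for the memory mode interface to the memory controller 210, and the 128 byte chunk size echoes the granularity example given earlier; both are illustrative assumptions rather than details of the actual design.

```python
SECTOR_SIZE = 512  # assumed block size for the LBA scheme
CHUNK = 128        # assumed memory mode access size, per the earlier example

def translate_block_read(lba: int, load) -> bytes:
    """Translate a host block read command (block addressing scheme) into
    a series of memory mode loads (memory addressing scheme)."""
    base = lba * SECTOR_SIZE
    return b"".join(load(base + i, CHUNK)
                    for i in range(0, SECTOR_SIZE, CHUNK))

def translate_block_write(lba: int, block: bytes, store) -> None:
    """Translate a host block write command into memory mode stores."""
    base = lba * SECTOR_SIZE
    for i in range(0, SECTOR_SIZE, CHUNK):
        store(base + i, block[i:i + CHUNK])
```

In this sketch every host transfer passes through the translation layer, which is consistent with the observation below that such traffic traverses the embedded processor and can increase its power consumption.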
In other words, the embedded processor 240 can maintain the block storage interface with the host system 230, and the embedded processor 240 can maintain the memory mode interface with the memory controller 210. However, in this example, power consumption at the embedded processor 240 can be increased since the data transfers between the host system 230 and the non-volatile memory 220 traverse the embedded processor 240.

[0042] FIG. 3 is an exemplary diagram of an apparatus 300. The apparatus 300 can include non-volatile memory 310, an embedded processor 320, and a memory controller 330. The memory controller 330 can comprise logic to: access data from the non-volatile memory 310 using at least one of: a first addressing scheme or a second addressing scheme. The memory controller 330 can comprise logic to: provide the data to an external host system over a first interface when the data is accessed using the first addressing scheme. The memory controller 330 can comprise logic to: provide the data to the embedded processor 320 over a second interface when the data is accessed using the second addressing scheme.

[0043] FIG. 4 is an exemplary diagram of a computing system 400. The computing system 400 can include a host system 410 and a memory device 420. The memory device 420 can include a memory controller 422, an embedded processor 424 and non-volatile memory 426. The non-volatile memory 426 can include data that is accessible to the host system 410 via the memory controller 422 over a first interface that uses a first addressing scheme. The non-volatile memory 426 can include data that is accessible to the embedded processor 424 via the memory controller 422 over a second interface that uses a second addressing scheme.

[0044] Another example provides a method 500 for accessing data from non-volatile memory, as shown in the flow chart in FIG. 5.
The method can be executed as instructions on a machine, where the instructions are included on at least one computer readable medium or one non-transitory machine readable storage medium. The method can include the operation of: receiving, at a memory controller of a memory device, a first command to access data from a non-volatile memory of the memory device, wherein the first command is received from a host system, as in block 510. The method can include the operation of: accessing, at the memory controller of the memory device, the data from the non-volatile memory using a first addressing scheme, wherein the data is provided from the memory controller to the host system over a first interface in response to the first command, as in block 520. The method can include the operation of: receiving, at the memory controller of the memory device, a second command to access data from the non-volatile memory of the memory device, wherein the second command is received from an embedded processor, as in block 530. The method can include the operation of: accessing, at the memory controller of the memory device, the data from the non-volatile memory using a second addressing scheme, wherein the data is provided from the memory controller to the embedded processor over a second interface in response to the second command, as in block 540.

[0045] FIG. 6 illustrates a general computing system 600 that can be employed in embodiments of the present technology. The computing system 600 can be connected to a solid state drive (SSD) 616. The SSD 616 can be located outside the computing system 600, or alternatively, the SSD 616 can be located within the computing system 600. The computing system 600 can include a processor 602 in communication with a memory 604. The memory 604 can include any device, combination of devices, circuitry, and the like that is capable of storing, accessing, organizing and/or retrieving data.
Non-limiting examples include volatile or non-volatile RAM, phase change memory, optical media, hard-drive type media, and the like, including combinations thereof.

[0046] The computing system 600 additionally includes a local communication interface 606 for connectivity between the various components of the system. For example, the local communication interface 606 can be a local data bus and/or any related address or control busses as may be desired.

[0047] The computing system 600 can also include an I/O (input/output) interface 608 for controlling the I/O functions of the system, as well as for I/O connectivity to devices outside or inside of the computing system 600. A network interface 610 can also be included for network connectivity. The network interface 610 can control network communications both within the system and outside of the system. The network interface can include a wired interface, a wireless interface, a Bluetooth interface, an optical interface, and the like, including appropriate combinations thereof. Furthermore, the computing system 600 can additionally include a user interface 612, a display device 614, as well as various other components that would be beneficial for such a system.

[0048] The processor 602 can be a single or multiple processors, and the memory 604 can be a single or multiple memories.
The local communication interface 606 can be used as a pathway to facilitate communication between any of a single processor, multiple processors, a single memory, multiple memories, the various interfaces, and the like, in any useful combination.

[0049] Various techniques, or certain aspects or portions thereof, can take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, non-transitory computer readable storage media, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the various techniques. Circuitry can include hardware, firmware, program code, executable code, computer instructions, and/or software. A non-transitory computer readable storage medium can be a computer readable storage medium that does not include a signal. In the case of program code execution on programmable computers, the computing device can include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The volatile and non-volatile memory and/or storage elements can be a RAM, EPROM, flash drive, optical drive, magnetic hard drive, solid state drive, or other medium for storing electronic data. The node and wireless device can also include a transceiver module, a counter module, a processing module, and/or a clock module or timer module. One or more programs that can implement or utilize the various techniques described herein can use an application programming interface (API), reusable controls, and the like. Such programs can be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired.
In any case, the language can be a compiled or interpreted language, and combined with hardware implementations. Exemplary systems or devices can include, without limitation, laptop computers, tablet computers, desktop computers, smart phones, computer terminals and servers, storage databases, and other electronics which utilize circuitry and programmable memory, such as household appliances, smart televisions, digital video disc (DVD) players, heating, ventilating, and air conditioning (HVAC) controllers, light switches, and the like.

Examples

[0050] The following examples pertain to specific embodiments and point out specific features, elements, or steps that can be used or otherwise combined in achieving such embodiments.

[0051] In one example there is provided an apparatus comprising: a byte addressable non-volatile memory; an embedded processor; and a memory controller comprising logic to: access data from the byte addressable non-volatile memory using at least one of: a first addressing scheme or a second addressing scheme; provide the data to a host system over a first interface when the data is accessed using the first addressing scheme; and provide the data to the embedded processor over a second interface when the data is accessed using the second addressing scheme.
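The controller logic recited in the example above — choose an addressing scheme per requester, then return the data over the matching interface — can be sketched as a simple dispatch. The requester labels, the interface names, and the in-memory `media` dictionary standing in for the non-volatile memory are all hypothetical illustration, not part of the example itself.

```python
# Minimal sketch of the dispatch described by example [0051]: commands
# from the host use the first addressing scheme/interface; commands from
# the embedded processor use the second. All names are assumptions.

def handle_command(command: dict, media: dict) -> tuple[str, bytes]:
    """Return (interface, data) for a command, per the two access paths."""
    if command["source"] == "host":
        key = ("block", command["lba"])        # first addressing scheme
        return ("first_interface", media[key])
    if command["source"] == "embedded":
        key = ("byte", command["address"])     # second addressing scheme
        return ("second_interface", media[key])
    raise ValueError("unknown requester")

media = {("block", 7): b"block-data", ("byte", 3584): b"B"}
assert handle_command({"source": "host", "lba": 7}, media) == ("first_interface", b"block-data")
assert handle_command({"source": "embedded", "address": 3584}, media) == ("second_interface", b"B")
```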
[0052] In one example of an apparatus, the first addressing scheme is a logical block addressing (LBA) scheme and the first interface is a block storage interface.

[0053] In one example of an apparatus, the second addressing scheme is a memory addressing scheme and the second interface is a memory mode interface, wherein the memory addressing scheme includes a byte addressing scheme.

[0054] In one example of an apparatus, the memory controller further comprises logic to access the same data from the byte addressable non-volatile memory using the first addressing scheme or the second addressing scheme.

[0055] In one example of an apparatus, the memory controller further comprises logic to provide the data over the second interface to the embedded processor to enable the embedded processor to perform localized searching, localized error correction, expression pattern matching or data copying on the data without transferring the data to the host system.

[0056] In one example of an apparatus, the memory controller further comprises logic to maintain an address map to write data to the byte addressable non-volatile memory using the first addressing scheme and access the data from the byte addressable non-volatile memory using the second addressing scheme.

[0057] In one example of an apparatus, the memory controller further comprises logic to maintain an address mapping to store data to the byte addressable non-volatile memory using the second addressing scheme and read the data from the byte addressable non-volatile memory using the first addressing scheme.

[0058] In one example of an apparatus, the second addressing scheme enables the memory controller to access the data from the byte addressable non-volatile memory at a more granular level as compared to the first addressing scheme.

[0059] In one example of an apparatus, the second addressing scheme enables the memory controller to access the data from the byte addressable non-volatile memory in a reduced period of time as
compared to the first addressing scheme.

[0060] In one example of an apparatus, the second addressing scheme enables the memory controller to access the data from the byte addressable non-volatile memory with a reduced level of power consumption as compared to the first addressing scheme.

[0061] In one example of an apparatus, the memory controller further comprises logic to perform media management in the byte addressable non-volatile memory, and the media management is abstracted from the data that is accessed by the memory controller from the byte addressable non-volatile memory using the second addressing scheme.

[0062] In one example of an apparatus, the apparatus is a solid state drive (SSD).

[0063] In one example there is provided a computing system comprising: a host system; and a memory device comprising: a memory controller; an embedded processor; and non-volatile memory comprising: data that is accessible to the host system via the memory controller over a first interface that uses a first addressing scheme; and data that is accessible to the embedded processor via the memory controller over a second interface that uses a second addressing scheme.

[0064] In one example of a computing system, the first addressing scheme is a logical block addressing (LBA) scheme and the first interface is a block storage interface.

[0065] In one example of a computing system, the second addressing scheme is a memory addressing scheme and the second interface is a memory mode interface, wherein the memory addressing scheme includes a byte addressing scheme.

[0066] In one example of a computing system, the embedded processor comprises logic to: access the data from the non-volatile memory via the memory controller over the memory mode interface; and perform localized searching or localized error correction on the data without transferring the data to the host system.

[0067] In one example of a computing system, the data that is accessible to the host system via the memory
controller further traverses the embedded processor en route to the host system.

[0068] In one example of a computing system, the memory controller further comprises logic to access the same data from the non-volatile memory using the first addressing scheme or the second addressing scheme.

[0069] In one example of a computing system, the memory device is a solid-state drive (SSD).

[0070] In one example of a computing system, the computing system further comprises one or more of: a display communicatively coupled to the host system; a network interface communicatively coupled to the host system; or a battery coupled to the host system.

[0071] In one example there is provided a method for accessing data from non-volatile memory, the method comprising: receiving, at a memory controller of a memory device, a first command to access data from a non-volatile memory of the memory device, wherein the first command is received from a host system; accessing, at the memory controller of the memory device, the data from the non-volatile memory using a first addressing scheme, wherein the data is provided from the memory controller to the host system over a first interface in response to the first command; receiving, at the memory controller of the memory device, a second command to access data from the non-volatile memory of the memory device, wherein the second command is received from an embedded processor; and accessing, at the memory controller of the memory device, the data from the non-volatile memory using a second addressing scheme, wherein the data is provided from the memory controller to the embedded processor over a second interface in response to the second command.

[0072] In one example of a method for accessing data from non-volatile memory, the first addressing scheme is a logical block addressing (LBA) scheme and the first interface is a block storage interface.

[0073] In one example of a method for accessing data from non-volatile memory, the second addressing scheme
is a memory addressing scheme and the second interface is a memory mode interface, wherein the memory addressing scheme includes a byte addressing scheme.

[0074] In one example of a method for accessing data from non-volatile memory, the method further comprises providing the data over the second interface to the embedded processor to enable the embedded processor to perform localized searching or localized error correction on the data without transferring the data to the host system.

[0075] In one example of a method for accessing data from non-volatile memory, the method further comprises maintaining an address map to write data to the non-volatile memory using the first addressing scheme and access the data from the non-volatile memory using the second addressing scheme.

[0076] In one example of a method for accessing data from non-volatile memory, the method further comprises maintaining an address map to store data to the non-volatile memory using the second addressing scheme and read the data from the non-volatile memory using the first addressing scheme.

[0077] While the foregoing examples are illustrative of the principles of various embodiments in one or more particular applications, it will be apparent to those of ordinary skill in the art that numerous modifications in form, usage and details of implementation can be made without the exercise of inventive faculty, and without departing from the principles and concepts of the disclosure.
The invention includes methods of forming pluralities of capacitors. In one implementation, a method of forming a plurality of capacitors includes providing a plurality of capacitor electrodes within a capacitor array area over a substrate. The capacitor electrodes comprise outer lateral sidewalls. The plurality of capacitor electrodes is supported at least in part with a retaining structure which engages the outer lateral sidewalls. The retaining structure is formed at least in part by etching a layer of material which is not masked anywhere within the capacitor array area to form said retaining structure. The plurality of capacitor electrodes is incorporated into a plurality of capacitors.
1. A method of forming a plurality of capacitors, comprising:
providing a plurality of capacitor electrodes within a capacitor array area over a substrate, the capacitor electrodes comprising outer lateral sidewalls;
forming a retaining structure which engages the outer lateral sidewalls of the capacitor electrodes, the retaining structure being formed at least in part by etching a layer of material which is not masked anywhere within the capacitor array area to form said retaining structure; and after forming the retaining structure, etching the retaining structure to reduce its size, the etched retaining structure at least in part supporting the plurality of capacitor electrodes; and
incorporating the plurality of capacitor electrodes into a plurality of capacitors.

2. The method of claim 1, wherein the substrate comprises a peripheral circuitry area about the capacitor array area, the layer of material extending onto the peripheral circuitry area during the etching of the layer of material and being at least partially masked within the peripheral circuitry area during the etching of the layer of material.

3. The method of claim 1, wherein the layer of material is not masked anywhere over the substrate during the etching of the layer of material.

4. The method of claim 1, wherein the retaining structure remains as part of a finished integrated circuit construction incorporating the plurality of capacitors.

5. The method of claim 1, wherein the material is electrically insulative.

6. The method of claim 1, wherein the material is electrically conductive.

7. The method of claim 1, wherein the material is semiconductive.

8. The method of claim 1, wherein the etching of the retaining structure to reduce its size comprises facet etching.

9. The method of claim 1, wherein the etching of the retaining structure to reduce its size comprises wet etching.

10. A method of forming a plurality of capacitors, comprising:
providing a plurality of capacitor electrodes over a substrate, the capacitor electrodes comprising outer lateral sidewalls;
forming
a retaining structure which engages the outer lateral sidewalls of the capacitor electrodes, the retaining structure being formed at least in part by etching a layer of material which is not masked anywhere over the substrate to form said retaining structure; and after forming the retaining structure, etching the retaining structure to reduce its size, the etched retaining structure at least in part supporting the plurality of capacitor electrodes; and
incorporating the plurality of capacitor electrodes into a plurality of capacitors.

11. The method of claim 10, wherein the retaining structure remains as part of a finished integrated circuit construction incorporating the plurality of capacitors.

12. The method of claim 10, wherein the material is electrically insulative.

13. The method of claim 10, wherein the material is electrically conductive.

14. The method of claim 10, wherein the material is semiconductive.

15. A method of forming a plurality of capacitors, comprising:
forming first, second and third materials of different compositions over a capacitor electrode forming material, the first, second and third materials being received at least in part at a common elevation over the capacitor electrode forming material, the second material comprising an anisotropically etched retaining structure;
etching the first material selectively relative to the second and third materials, and then etching the capacitor electrode forming material selectively relative to the second and third materials effective to form a plurality of capacitor electrode openings;
forming individual capacitor electrodes within individual of the capacitor electrode openings;
etching the third material selectively relative to the second material and selectively relative to the capacitor electrodes effective to expose the capacitor electrode forming material beneath the third material being etched, and then etching the capacitor electrode forming material selectively relative to the second material and selectively relative to
the capacitor electrodes effective to expose outer lateral sidewalls of the capacitor electrodes and leave at least some of the retaining structure supporting the capacitor electrodes; and
incorporating the plurality of capacitor electrodes into a plurality of capacitors.

16. A method of forming a plurality of capacitors, comprising:
forming a plurality of spaced masking blocks over a capacitor electrode forming material, the masking blocks defining respective capacitor electrode opening outlines;
forming an interconnected insulative retaining structure over the capacitor electrode forming material and against sidewalls of the masking blocks;
etching the masking blocks and then etching the capacitor electrode forming material which was beneath the masking blocks to form capacitor electrode openings within the capacitor electrode forming material;
forming container-shaped capacitor electrodes within individual of the capacitor electrode openings and against the interconnected retaining structure, the capacitor electrodes comprising outer lateral sidewalls;
etching at least some of the capacitor electrode forming material to expose at least some of the outer lateral sidewalls of the capacitor electrodes; and
after exposing the outer lateral sidewalls of the capacitor electrodes, depositing a capacitor dielectric material and a capacitor electrode material over at least some of the outer lateral sidewalls beneath the retaining structure.

17. The method of claim 16, wherein the masking blocks are of different composition from that of the capacitor electrode forming material.

18. The method of claim 16, wherein the masking blocks are of the same composition as the capacitor electrode forming material.

19. A method of forming a plurality of capacitors, comprising:
forming a plurality of capacitor electrodes within a capacitor array area over a substrate, the capacitor electrodes comprising outer lateral sidewalls;
supporting the plurality of capacitor electrodes at least in part with a
retaining structure which engages the outer lateral sidewalls, the retaining structure being formed before forming the plurality of capacitor electrodes, the retaining structure at least prior to the forming of the plurality of electrodes having a plurality of openings therethrough, individual of the openings being received between diagonal neighbors of the capacitor electrodes to be formed; and
incorporating the plurality of capacitor electrodes into a plurality of capacitors.

20. A method of forming a plurality of capacitors, comprising:
forming a plurality of capacitor electrodes within a capacitor array area over a substrate, the capacitor electrodes comprising outer lateral sidewalls;
supporting the plurality of capacitor electrodes at least in part with a retaining structure which engages the outer lateral sidewalls, the retaining structure being formed before forming the plurality of capacitor electrodes, the retaining structure at least prior to the forming of the plurality of electrodes having a plurality of openings therethrough, the forming of the retaining structure comprising enlarging the openings after initially forming the openings; and
incorporating the plurality of capacitor electrodes into a plurality of capacitors.
Method of forming multiple capacitors

Information about divisional application

This application is a divisional application of Chinese invention patent application No. 200680008606.3, which entered the Chinese national phase from PCT application No. PCT/US2006/006806, filed February 27, 2006, and entitled "Method of Forming Multiple Capacitors".

Technical field

The present invention relates to methods of forming a plurality of capacitors.

Background

Capacitors are a type of component commonly used in the fabrication of integrated circuits, for example in DRAM circuitry. A typical capacitor comprises two conductive electrodes separated by a non-conductive dielectric region. As the density of integrated circuits increases, there is a continuing challenge to maintain sufficiently high storage capacitance despite the typically shrinking capacitor area. The increase in density of integrated circuits has typically resulted in greater reduction in the horizontal dimension of capacitors as compared to the vertical dimension; in many instances, the vertical dimension of capacitors has increased.

One manner of forming capacitors is to initially form an insulative material within which the capacitor storage node electrodes are formed. For example, an array of capacitor electrode openings for individual capacitors is typically fabricated in such insulative capacitor electrode forming material, with a typical insulative electrode forming material being silicon dioxide doped with one or both of phosphorus and boron. The capacitor electrode openings are typically formed by etching. However, it can be difficult to etch the capacitor electrode openings within the insulative material, particularly where the openings are deep.

Further, and regardless, it is often desirable to etch away most, if not all, of the capacitor electrode forming material after the individual capacitor electrodes have been formed within the openings.
This enables the outer sidewall surfaces of the electrodes to provide increased area, and thereby increased capacitance, for the capacitors being formed. However, capacitor electrodes formed in deep openings are typically correspondingly much taller than they are wide. This can lead to toppling of the capacitor electrodes during the etch to expose the outer sidewall surfaces, during transport of the substrate, and/or during deposition of the capacitor dielectric layer or outer capacitor electrode layer. Our U.S. Patent No. 6,667,502 teaches provision of a bracing or retaining structure intended to alleviate such toppling.

While the invention was motivated in addressing the issues identified above, it is in no way so limited. The invention is limited only by the appended claims as literally worded, without interpretative or other limiting reference to the specification, and in accordance with the doctrine of equivalents.

Summary of the invention

The invention includes methods of forming a plurality of capacitors. In one implementation, a method of forming a plurality of capacitors includes providing a plurality of capacitor electrodes within a capacitor array area over a substrate. The capacitor electrodes comprise outer lateral sidewalls. The plurality of capacitor electrodes is supported at least in part with a retaining structure which engages the outer lateral sidewalls. The retaining structure is formed at least in part by etching a layer of material which is not masked anywhere within the capacitor array area to form said retaining structure. The plurality of capacitor electrodes is incorporated into a plurality of capacitors.

In one implementation, a method of forming a plurality of capacitors includes forming first, second and third materials of different compositions over a capacitor electrode forming material. The first, second and third materials are received at least in part at a common elevation over the capacitor electrode forming material.
The second material comprises an anisotropically etched retaining structure. The first material is etched substantially selectively relative to the second and third materials, and then the capacitor electrode forming material is etched substantially selectively relative to the second and third materials effective to form a plurality of capacitor electrode openings. Individual capacitor electrodes are formed within individual of the capacitor electrode openings. The third material is etched substantially selectively relative to the second material and substantially selectively relative to the capacitor electrodes effective to expose the capacitor electrode forming material beneath the third material being etched. Thereafter, the capacitor electrode forming material is etched substantially selectively relative to the second material and substantially selectively relative to the capacitor electrodes effective to expose outer lateral sidewalls of the capacitor electrodes and leave at least some of the retaining structure supporting the capacitor electrodes. The plurality of capacitor electrodes is incorporated into a plurality of capacitors.

Other aspects and implementations are contemplated.

Brief description of the drawings

Preferred embodiments of the invention are described below with reference to the following accompanying drawings.

FIG. 1 is a diagrammatic fragmentary view of a semiconductor wafer fragment in process in accordance with an aspect of the invention.

FIG. 2 is a view of an alternative embodiment to the semiconductor wafer fragment depicted by FIG. 1.

FIG. 3 is a top view of the left portion of FIG. 1 at a processing step subsequent to that of FIG. 1.

FIG. 4 is a view of the FIG. 3 substrate, with the left portion of FIG. 4 being taken through line 4-4 in FIG. 3.

FIG. 5 is a view of the FIG. 3 substrate, with the left portion of FIG. 5 being taken through line 5-5 in FIG. 3.

FIG. 6 is a view of the substrate of FIG.
4 at a processing step subsequent to that shown by FIG. 4.

FIG. 7 is a view of the FIG. 5 substrate at a processing step subsequent to that shown by FIG. 5, and corresponding in sequence to that of FIG. 6.

FIG. 8 is a top view of the FIG. 3 substrate at a processing step subsequent to that of FIG. 3, and subsequent to that of FIGS. 6 and 7.

FIG. 9 is a view of the FIG. 7 substrate at a processing step subsequent to that shown by FIG. 7, and corresponding in sequence to that of FIG. 8, with the left portion of FIG. 9 being taken through line 9-9 in FIG. 8.

FIG. 10 is a view of the FIG. 6 substrate at a processing step subsequent to that shown by FIG. 6, and corresponding in sequence to that of FIG. 8, with the left portion of FIG. 10 being taken through line 10-10 in FIG. 8.

FIG. 11 is a top view of the FIG. 8 substrate at a processing step subsequent to that of FIG. 8.

FIG. 12 is a view of the FIG. 9 substrate at a processing step subsequent to that shown by FIG. 9, and corresponding in sequence to that of FIG. 11, with the left portion of FIG. 12 being taken through line 12-12 in FIG. 11.

FIG. 13 is a view of the FIG. 10 substrate at a processing step subsequent to that shown by FIG. 10, and corresponding in sequence to that of FIG. 11, with the left portion of FIG. 13 being taken through line 13-13 in FIG. 11.

FIG. 14 is a view of the FIG. 13 substrate at a processing step subsequent to that shown by FIG. 13.

FIG. 15 is a view of the FIG. 12 substrate at a processing step subsequent to that shown by FIG. 12, and corresponding in sequence to that of FIG. 14.

FIG. 16 is a plan view of the FIG. 11 substrate at a processing step subsequent to that of FIG. 11, and corresponding in sequence to that of FIGS. 14 and 15.

FIG. 17 is a view of the FIG. 14 substrate at a processing step subsequent to that shown by FIG. 14, and corresponding in sequence to that of FIG. 16, with the left portion of FIG. 17 being taken through line 17-17 in FIG. 16.

FIG. 18 is a view of the substrate of FIG.
15 at a processing step subsequent to that shown by FIG. 15, and corresponding in sequence to that of FIG. 16, with the left portion of FIG. 18 being taken through line 18-18 in FIG. 16.

FIG. 19 is a top view of the FIG. 16 substrate at a processing step subsequent to that of FIG. 16.

FIG. 20 is a view of the FIG. 18 substrate at a processing step subsequent to that shown by FIG. 18, and corresponding in sequence to that of FIG. 19, with the left portion of FIG. 20 being taken through line 20-20 in FIG. 19.

FIG. 21 is a view of the FIG. 17 substrate at a processing step subsequent to that shown by FIG. 17, and subsequent to that shown by FIGS. 19 and 20.

FIG. 22 is a view of the FIG. 20 substrate at a processing step subsequent to that shown by FIG. 20, and corresponding in sequence to that of FIG. 21.

FIG. 23 is a view of the left portion of the FIG. 21 substrate at a processing step subsequent to that shown by FIG. 21.

FIG. 24 is a view of the left portion of the FIG. 22 substrate at a processing step subsequent to that shown by FIG. 22, and corresponding in sequence to that of FIG. 23.

FIG. 25 is a top view of an alternative embodiment.

Detailed description

This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8).

Referring to FIG. 1, a semiconductor substrate in process in accordance with an aspect of the invention is indicated generally with reference numeral 10. In one exemplary embodiment, this comprises a substrate comprising a semiconductive substrate which includes, for example, bulk monocrystalline silicon or other material.
In the context of this document, the term "semiconductor substrate" or "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon) and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. Also in the context of this document, the term "layer" encompasses both the singular and the plural unless otherwise indicated.

The discussion proceeds in a preferred-embodiment method of forming an array of capacitors, for example an array of capacitors as might be utilized in DRAM or other memory circuitry. Substrate segment 10 can be considered as comprising regions 14 and 16. In but one embodiment, region 14 of a preferred embodiment comprises a capacitor array region, and region 16 comprises a circuitry region peripheral to capacitor array region 14. Further, and by way of example only, substrate segment 10 is depicted as comprising an insulative layer 18 within which a plurality of conductive contact plugs 19 and 21 has been formed for electrical connection with respect to capacitor electrodes of a plurality of capacitors, as will be readily understood from the discussion that follows. Insulative material 18 would overlie other substrate material (not shown), for example bulk monocrystalline silicon, semiconductor-on-insulator circuitry, or other existing or yet-to-be-developed substrate material. An exemplary preferred insulative material 18 comprises silicon dioxide doped with at least one of phosphorus and boron, for example borophosphosilicate glass (BPSG). Conductive plugs 19 and 21 would comprise one or more conductive materials, which may include, for example, conductively doped semiconductive material.
The substrate 18/19/21 is exemplary only, and any conceivable substrate, whether existing or yet to be developed, is contemplated.

A first material 20 has been formed over the substrate 18/19/21. An exemplary preferred material is BPSG, and an exemplary preferred thickness range is from 1,000 Angstroms to 20,000 Angstroms. As will be readily understood from the following discussion, capacitor electrodes will be formed within material 20, and material 20 may therefore be regarded as a capacitor electrode forming material. First material 20 may be electrically insulative, conductive, or semiconductive, with electrically insulative being most preferred. Capacitor electrode forming material 20 may comprise a single homogeneous layer as depicted in FIG. 1, may be non-homogeneous (e.g., two or more BPSG layers having different doping levels), and may alternatively comprise, by way of example only, multiple discrete layers. For example, and by way of example only, FIG. 2 depicts an alternative-embodiment substrate segment 10a. Like numerals from the first-described embodiment are utilized where appropriate, with differences being indicated with the suffix "a" or with different numerals. FIG. 2 depicts capacitor electrode forming material/first material 20a as comprising at least two layers 22 and 24. By way of example only, layer 22 might comprise an etch stop layer (eg, silicon nitride, aluminum oxide, etc.), with layer 24 comprising BPSG.

Referring to FIGS. 3-5, a plurality of spaced masking blocks 25, 26, 27, 28, 29, 30, 31, 32, and 33 has been formed over first material 20, defining respective capacitor electrode opening outlines 25b, 26b, 27b, 28b, 29b, 30b, 31b, 32b, and 33b. By way of example only, one preferred manner of forming the depicted masking blocks with their corresponding outlines is by photolithographic patterning and etching.
Masking blocks 25-33 may be of the same or different composition from that of the first material, with different compositions being more preferred. Where of the same composition, for example, an exemplary manner of forming masking block 28 relative to underlying material 20 is through an opening formed in a mask (for example, an opening formed in a photomask over the first material). By way of example only, an etch stop layer might be received intermediate masking blocks 25-33 and the underlying first material. For example, and by way of example only, and referring to the FIG. 2 embodiment, layer 22 might be provided to constitute an etch stop layer received intermediate masking blocks 25-33 (not shown) and the underlying first material. The array pattern depicted by masking blocks 25-33 is exemplary only, and essentially any other existing or yet-to-be-developed array pattern is also contemplated. In the depicted exemplary embodiment, and by way of example only, an exemplary spacing between immediately adjacent masking blocks in a row (ie, between the right edge of masking block 28 and the left edge of masking block 29) is 500 Angstroms. An exemplary spacing between immediately adjacent masking blocks in a column (ie, between the lower edge of masking block 26 and the upper edge of masking block 29 in FIG. 3) is 500 Angstroms. An exemplary analogous diagonal spacing between diagonally adjacent masking blocks (ie, between blocks 31 and 29) is 750 Angstroms.

Referring to FIGS. 6 and 7, a second material layer 36 has been deposited over masking blocks 25-33 and over the first material received between masking blocks 25-33. In one aspect, second material 36 is of different composition from that of masking blocks 25-33. By way of example only, where material 20 is BPSG and masking blocks 25-33 are BPSG or undoped silicon dioxide, exemplary preferred materials for layer 36 include silicon nitride, aluminum oxide, and hafnium oxide.
Of course, other insulative, and even conductive and semiconductive, materials might be utilized for material 36. Exemplary semiconductive materials include polysilicon. Exemplary conductive materials include titanium nitride, tantalum nitride, and tungsten. An exemplary deposition thickness for layer 36 is from 250 Angstroms to 300 Angstroms.

Referring to FIGS. 8-10, second material layer 36 has been anisotropically etched effective to expose masking blocks 25-33 and to form an interconnected retaining structure 40 against the depicted sidewalls of masking blocks 25-33. Further, in the depicted exemplary embodiment, interconnected retaining structure 40 leaves exposed some of first material 20 received between the depicted masking blocks. By way of example only, the exposed first material 20 is located between diagonally adjacent masking blocks; openings at other locations are of course also contemplated, which may depend upon the array patterning of the masking blocks. Further, in the depicted preferred embodiment, retaining structure 40 directly contacts the depicted sidewalls of the masking blocks. In the depicted and most preferred embodiment, retaining structure 40 is formed at least in part by etching the layer of material 36, wherein such layer is not masked anywhere within capacitor array region 14 in forming such retaining structure 40. Further, in an exemplary preferred embodiment, structure 40 may be formed in a manner in which no portion of material layer 36 is masked anywhere over the substrate in forming such retaining structure. For example, and by way of example only, FIGS. 9 and 10 depict that no masking has occurred within peripheral circuitry area 16, such that all material 36 has been removed therefrom.
Of course, and as an alternative, material layer 36 might be at least partially masked where it extends over peripheral area 16 during the anisotropic etch (not shown), such that at least some portion thereof remains after such etch.

Referring to FIGS. 11-13, a third material 44 has been utilized to mask the exposed first material 20 received between masking blocks 25-33. In one aspect, third material 44 is of different composition from those of first material 20, masking blocks 25-33, and second material 36. Where material 20 is BPSG, where masking blocks 25-33 comprise doped or undoped silicon dioxide, and where material layer 36 comprises silicon nitride, an exemplary material 44 is polysilicon. Regardless, an exemplary preferred technique for forming the FIGS. 11-13 construction is by deposition of material 44 followed by chemical mechanical polishing thereof effective to expose masking blocks 25-33. FIGS. 12 and 13 depict some remnant material 44 remaining within peripheral circuitry area 16, although of course material 44 might be completely removed from peripheral area 16 at this preferred point in the process. Further, in one exemplary embodiment, at least one of the material of blocks 25-33, material 36, and material 44 comprises amorphous carbon, and in another embodiment polysilicon. Of course, in one aspect, at least one of the material of blocks 25-33, material 36, and material 44 might comprise amorphous carbon, with at least another of such materials comprising polysilicon.

Referring to FIGS. 14 and 15, masking blocks 25-33 have been etched, and first material 20 thereunder has been selectively etched (anisotropically) relative to second material 36 and third material 44, effective to form capacitor electrode openings 25c, 26c, 27c, 28c, 29c, 30c, 31c, 32c, and 33c.
(Openings 25c, 26c, 27c, 30c, 32c, and 33c are not so designated in FIGS. 14 and 15, but are shown and so designated in subsequent figures.) In the context of this document, a selective etch generally requires a removal rate of the material being removed of at least 15:1 relative to the other stated material(s). In the depicted example, where third material 44 remains masking material 20 within peripheral circuitry area 16, material 20 remains within such peripheral area. If masking material 44 were not received over material 20 in such area, it would be possible to remove all such material 20 within the peripheral circuitry area at this point in the process.

Referring to FIGS. 16-18, individual capacitor electrodes 25d, 26d, 27d, 28d, 29d, 30d, 31d, 32d, and 33d have been formed within the respective individual capacitor electrode openings and against interconnected retaining structure 40. By way of example only, an exemplary preferred manner of forming the capacitor electrodes is by deposition of a titanium nitride layer of suitable thickness followed by chemical mechanical polishing thereof. In the depicted preferred and exemplary embodiment, the layer from which the capacitor electrodes are formed is deposited to less than a thickness that completely fills the individual capacitor electrode openings, such that the resultant individual capacitor electrodes are container-shaped. Of course, other electrode shapes are also contemplated, including, by way of example only, electrodes in which the conductive material from which the capacitor electrodes are formed completely plugs the capacitor electrode openings.

Referring to FIGS. 19 and 20, third material 44 (not shown) has been etched substantially selectively relative to second material 36 and substantially selectively relative to capacitor electrodes 25d-33d, effective to expose the first material 20 beneath the third material being etched.

Referring to FIGS.
21 and 22, after such etching of third material 44, at least some of the exposed first material 20 has been etched at least substantially selectively relative to capacitor electrodes 25d-33d and substantially selectively relative to second material 36, effective to expose outer lateral sidewalls of capacitor electrodes 25d-33d and to leave at least some of the second material 36 of interconnected retaining structure 40 at least partially supporting capacitor electrodes 25d-33d. In the depicted and preferred embodiment, substantially all of first material 20 has been etched such that the outer lateral sidewalls of the capacitor electrodes are substantially completely exposed.

One embodiment of the invention encompasses at least some etching of the material 36 of retaining structure 40 prior to depositing third material 44. Referring to FIG. 25, such an embodiment is shown, by way of example only, in conjunction with an alternative-embodiment substrate segment 10g. Like numerals from the first-described embodiment have been utilized where appropriate, with differences being indicated with the suffix "g". FIG. 25 depicts some etching having occurred relative to the retaining structure, whereby retaining structure 40g has been etched effective to widen the spaces within which material 20 is exposed. For example, the depicted dashed lines show the original openings as depicted in the first embodiment, while the adjacent solid lines depict such increased width resulting from a suitable exemplary facet etch or a suitable exemplary wet etch of material 36g. By way of example only, where material 36g comprises silicon nitride, an exemplary chemistry for producing a wet etch of the FIG. 25 construction includes phosphoric acid.
An exemplary facet etch technique would include from 100 W to 1000 W RF power and an argon plasma at from 25°C to 100°C.

Referring to FIGS. 23 and 24, a capacitor dielectric material 50 and a capacitor electrode material 60 have been deposited over at least some of the outer lateral sidewalls of the capacitor electrodes, at least below retaining structure 40, as shown. Of course, any suitable existing or yet-to-be-developed materials are contemplated. In the depicted exemplary embodiment, capacitor electrode material 60 is shown as constituting a capacitor electrode common to the plurality of capacitors. Of course, and alternatively by way of example only, such might be patterned or otherwise formed to constitute a separate capacitor electrode for each capacitor or group of capacitors. In the depicted preferred embodiment, retaining structure 40 is retained as part of the finished integrated circuitry construction incorporating the plurality of capacitors.

In one aspect, an embodiment of the invention can be considered as a method of forming a plurality of capacitors that includes forming first, second, and third materials of different compositions over a capacitor electrode forming material. By way of example only, the material of masking blocks 25-33 constitutes an exemplary first material, material 36 constitutes an exemplary second material, and material 44 constitutes an exemplary third material, all received over the exemplary capacitor electrode forming material 20. The first, second, and third materials are at least partially received at a common height over the capacitor electrode forming material. By way of example only, FIG. 12 depicts an exemplary such height "H".
The second material comprises an anisotropically etched retaining structure. The first material is etched substantially selectively relative to the second and third materials, and then the capacitor electrode forming material is etched substantially selectively relative to the second and third materials, effective to form a plurality of capacitor electrode openings. By way of example only, the processing described above with respect to the figures is but one exemplary technique. Individual capacitor electrodes are formed within the individual capacitor electrode openings.

Thereafter, the third material is etched substantially selectively relative to the second material and substantially selectively relative to the capacitor electrodes, effective to expose the capacitor electrode forming material beneath the third material being etched. The capacitor electrode forming material is then etched substantially selectively relative to the second material and substantially selectively relative to the capacitor electrodes, effective to expose outer lateral sidewalls of the capacitor electrodes. Only some, or all, of the capacitor electrode forming material might be so etched. Regardless, such etching is also effective to leave at least some of the retaining structure at least partially supporting the plurality of capacitor electrodes. The plurality of capacitor electrodes is incorporated into a plurality of capacitors.

An embodiment of an aspect of the invention encompasses a method of forming a plurality of capacitors whereby a plurality of capacitor electrodes is provided within a capacitor array area over a substrate, the capacitor electrodes comprising outer lateral sidewalls. Such method includes at least partially supporting the plurality of capacitor electrodes with a retaining structure that engages the outer lateral sidewalls. The retaining structure is formed at least in part by etching a material layer that is not masked anywhere within the capacitor array area in forming such retaining structure.
The above-described preferred processing for providing a plurality of capacitor electrodes, and for supporting the plurality of capacitor electrodes with the retaining structure as just described, is but one exemplary example of this just-described embodiment. For example, and as described above by way of example only, the plurality of capacitor electrodes is incorporated into a plurality of capacitors. In the above-described exemplary embodiment, such etching to form the retaining structure occurs before forming the plurality of capacitor electrodes. However, an aspect of the invention contemplates such etching to form the retaining structure occurring after forming the plurality of capacitor electrodes.

In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents.
A method includes determining, by a first component of a memory subsystem controller, a first temperature value of the memory subsystem controller. The method may further include determining, by a second component of a non-volatile memory device, a second temperature value of the non-volatile memory device coupled to the memory subsystem controller. The method may further include modifying a data parameter in response to at least one of the first temperature value or the second temperature value exceeding a threshold temperature value.
1. A method comprising:
determining, by a first component of a memory subsystem controller, a first temperature value of the memory subsystem controller;
determining, by a second component of a non-volatile memory device, a second temperature value of the non-volatile memory device coupled to the memory subsystem controller; and
modifying a data parameter in response to at least one of the first temperature value or the second temperature value exceeding a threshold temperature value.
2. The method of claim 1, wherein modifying the data parameter comprises modifying a transfer speed of data associated with transferring the data from a volatile memory device to the memory subsystem controller and from the memory subsystem controller to the non-volatile memory device.
3. The method of claim 2, wherein modifying the transfer speed of the data comprises increasing a delay between transmitted portions of the data.
4. The method of claim 2, wherein the transferred data is a complete set of data associated with a particular write operation.
5. The method of any one of claims 1-4, wherein modifying the data parameter comprises limiting an amount of data transferred to the non-volatile memory device.
6. The method of claim 5, wherein the amount of the data is limited based on at least one of the first temperature value and the second temperature value.
7. The method of claim 5, wherein the amount of the data is limited based on a number of user inputs.
8. The method of claim 7, wherein the user inputs include a minimum value, a maximum value, a location in memory where a save operation begins, and a location in the non-volatile memory device where the save operation ends.
9.
The method of any one of claims 1-4, wherein the data parameter is an amount of data to be transferred, and the amount of data is decreased until the at least one of the first temperature value and the second temperature value falls below the threshold temperature value.
10. A system comprising:
a first memory device including a first temperature component;
a second memory device; and
a memory subsystem controller coupled to the first memory device and the second memory device and including a second temperature component, the memory subsystem controller to perform operations comprising:
determining, by the first temperature component, a first temperature value of the first memory device;
determining, by the second temperature component, a second temperature value of the memory subsystem controller; and
modifying a data transfer rate in response to at least one of the first temperature value or the second temperature value exceeding a threshold temperature value.
11. The system of claim 10, wherein the memory subsystem controller is to perform operations comprising reducing the data transfer rate.
12. The system of claim 11, wherein the memory subsystem controller is to perform operations comprising:
determining a subsequent first temperature value for the first memory device and a subsequent second temperature value for the second memory device; and
decreasing the data transfer rate until at least one of the subsequent first temperature value and the subsequent second temperature value is below the threshold temperature value.
13. The system of any one of claims 10-12, wherein the memory subsystem controller is to perform operations comprising adding a time delay between portions of data transferred between the first memory device and the memory subsystem controller.
14.
The system of claim 13, wherein the memory subsystem controller is to perform operations comprising:
determining a subsequent first temperature value for the first memory device and a subsequent second temperature value for the second memory device; and
increasing the time delay between the transmitted data portions until at least one of the subsequent first temperature value or the subsequent second temperature value is below the threshold temperature value.
15. The system of any one of claims 10-14, wherein one of the first memory device or the second memory device comprises a volatile memory device, and wherein the other of the first memory device or the second memory device comprises a non-volatile memory device.
16. The system of any one of claims 10-14, wherein the memory subsystem controller is configured to perform operations comprising:
generating an indication in response to at least one of the first temperature value or the second temperature value exceeding the threshold temperature value.
17. A system comprising:
a memory subsystem controller for a non-volatile dual in-line memory module (NVDIMM);
a first memory device of the NVDIMM coupled to the memory subsystem controller and including a first temperature component; and
a second memory device of the NVDIMM coupled to the memory subsystem controller;
wherein the memory subsystem controller includes a second temperature component and is configured to:
determine, by the first temperature component, a first temperature value of the first memory device;
determine, by the second temperature component, a second temperature value of the memory subsystem controller; and
set an amount of data to be transferred in response to at least one of the first temperature value or the second temperature value exceeding a threshold temperature value.
18.
The system of claim 17, wherein the memory subsystem controller is configured to perform one of a full save operation or a partial save operation using the set amount of data to transfer.
19. The system of claim 18, wherein the memory subsystem controller is configured to:
perform the full save operation in response to the at least one of the first temperature value or the second temperature value being below the threshold temperature value; and
perform the partial save operation in response to the at least one of the first temperature value or the second temperature value exceeding the threshold temperature value.
20. The system of any one of claims 17-19, wherein the memory subsystem controller, the first memory device, and the second memory device are part of an application-specific integrated circuit or a field-programmable gate array.
Memory Subsystem Temperature Regulation

Technical Field

Embodiments of the present disclosure relate generally to memory subsystems, and more particularly, to memory subsystem temperature regulation.

Background

The memory subsystem may include one or more memory devices that store data. The memory devices may be, for example, non-volatile memory components and volatile memory devices. In general, a host system may utilize a memory subsystem to store data at, and retrieve data from, the memory devices.

Brief Description of the Drawings

The present disclosure will be understood more fully from the detailed description provided below and from the accompanying drawings of various embodiments of the disclosure.

FIG. 1 illustrates an example computing system including a memory subsystem, according to some embodiments of the present disclosure.
FIG. 2 illustrates an example of a memory subsystem controller and a temperature component in accordance with some embodiments of the present disclosure.
FIG. 3 illustrates another example of a memory subsystem controller and temperature component in accordance with some embodiments of the present disclosure.
FIG. 4A illustrates a flow diagram of memory subsystem operations corresponding to regulating temperature, according to some embodiments of the present disclosure.
FIG. 4B illustrates a flow diagram of memory subsystem operations corresponding to regulating temperature, according to some embodiments of the present disclosure.
FIG. 5 is a flow diagram corresponding to a method for performing memory subsystem operations to regulate temperature, according to some embodiments of the present disclosure.
FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

Detailed Description

Aspects of the present disclosure are directed to temperature regulation associated with memory subsystems, and in particular to memory subsystems that include temperature components.
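At a high level, the regulation described in this disclosure is a threshold comparison followed by an adjustment of a data parameter. The following C sketch illustrates that decision; the type and function names, the units (degrees Celsius), and the fixed linear back-off step are illustrative assumptions, not details taken from the disclosure.

```c
#include <stdbool.h>

/* Illustrative state: the two monitored temperature values and the
 * threshold temperature value described in this disclosure. */
typedef struct {
    int controller_temp_c; /* first temperature value (controller)        */
    int nvm_temp_c;        /* second temperature value (non-volatile mem) */
    int threshold_c;       /* threshold temperature value                 */
} temp_state;

/* An adjustment is triggered when EITHER monitored temperature value
 * exceeds the threshold temperature value. */
static bool regulation_needed(const temp_state *t)
{
    return t->controller_temp_c > t->threshold_c ||
           t->nvm_temp_c > t->threshold_c;
}

/* One back-off step: decrease the amount of data to be transferred,
 * down to a floor, mirroring the decrease-until-below-threshold
 * behavior in the claims. The fixed step size is an assumption. */
static int throttle_amount(int amount, int step, int floor)
{
    int next = amount - step;
    return next > floor ? next : floor;
}
```

In use, firmware would call `regulation_needed` on each temperature poll and apply `throttle_amount` (or, equivalently, an added inter-chunk delay) until both readings fall back below the threshold.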
The memory subsystem may be a storage device, a memory module, or a hybrid of a storage device and memory module. An example of a memory subsystem is a memory module such as a non-volatile dual in-line memory module (NVDIMM). Examples of memory devices and memory modules are described below in conjunction with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more components, such as memory devices, that store data. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.

A non-volatile dual in-line memory module (NVDIMM) is a type of random-access memory that has volatile memory for normal operation and non-volatile memory in which the contents of the volatile memory are stored, using an on-board backup power supply, in the event of a power failure. With respect to a memory unit, a host may be structured as one or more processors that control the flow of data to and from the memory unit in response to instructions (eg, applications, programs, etc.) executed by the host. In the event of a power failure, the NVDIMM can copy all the data from its volatile memory (eg, a DRAM or set of DRAMs) to its persistent flash storage, and it can copy all the data back to the volatile memory when power is restored. The transfer of the state of all the DRAM data to persistent data on the persistent flash storage may be performed on a power cycle. Although the examples described above relate to persistent flash storage, embodiments are not so limited. For example, some embodiments may include persistent storage that is non-flash persistent storage. The NVDIMM has its own battery backup power, or access to a dedicated power source, to allow the NVDIMM to complete the save.

NVDIMMs can be of many different types (-N, -P, -X, -F). An NVDIMM-N is a dual in-line memory module (DIMM) that typically has flash memory and traditional dynamic random access memory (DRAM) on the same module.
The host processing unit has direct access to the traditional DRAM. An NVDIMM-P may contain persistent main memory and may share the DDR4 or DDR5 DIMM interconnect with DRAM DIMMs. An NVDIMM-X can include a DDR4 DIMM with NAND flash storage and volatile DRAM on the same module. An NVDIMM-F may include an NVDIMM with flash storage.

In various embodiments, a set of control registers in the NVDIMM may be implemented to provide for a portion of the memory in the NVDIMM being saved to non-volatile memory, where "memory" refers to the main memory of the NVDIMM. The main memory is volatile memory, such as DRAM, which stores user data. The set of control registers may provide a mechanism for a partial save by including a starting offset into a portion of the volatile memory, identifying where in the volatile memory the save operation begins, and by including the amount of volatile memory to be saved. The host can populate the set of control registers in the NVDIMM with the identification of the start of the save operation and the amount of content for the save operation. This set of control registers may also control the inverse operation of restoring a partial save back to the NVDIMM's volatile memory. This structure for maintaining data stored on the NVDIMM provides the host with increased flexibility in handling user data relative to the applications being processed by the host. The host is provided access to perform full or partial content saves at any offset. This provides the host with the ability to have better and finer control over what is saved and restored.

Whether a partial save/restore or a full save/restore is performed can affect the temperature of the NVDIMM's controller and/or the NVDIMM's non-volatile memory. The temperature of the system (eg, controller, non-volatile memory, etc.) can be important due to cooling costs, system usage or placement, the application of the DIMM, the potential that increased temperature or temperature exceeding a maximum value will cause DIMM
damage, and the use of NVDIMMs in non-power-failure events (eg, when operating times are less critical). The temperature of the controller and/or the non-volatile memory can be monitored, and a maximum or threshold temperature for each of the controller and the non-volatile memory can be determined or predetermined. In response to the temperature value of the controller or of the non-volatile memory exceeding the threshold temperature value, an operation to regulate the temperature may be performed to bring the temperature value below the threshold temperature value.

Embodiments herein may allow temperature components associated with the NVDIMM device to be used to control the system temperature by exploiting the NVDIMM's save and/or partial-save capabilities or by dynamically changing the data transfer speed. For example, a temperature value of a memory device (eg, volatile memory and/or non-volatile memory) associated with the NVDIMM may be sensed or monitored, using the operations described herein to regulate temperature, in order to maintain the temperature value below a threshold temperature value or within a range of temperature values.

As described in greater detail herein, the operation of regulating temperature may be carried out using a temperature component residing on the NVDIMM. In some embodiments, the temperature component may reside on a controller (eg, a memory subsystem controller) associated with the NVDIMM, or it may reside on a non-volatile memory device such as a flash memory device, a cross-point memory device, etc. As used herein, the term "residing on" means that something is physically located on a particular component. For example, a temperature component "residing on a controller" refers to the condition in which the temperature component is physically located on the controller.
The term "residing on" may be used interchangeably herein with other terms such as "deployed on" or "located on".

In some prior methods, a save operation performed on data stored in a volatile memory device may include saving an entire portion of the data to a non-volatile memory device, and it may not be possible to dynamically adjust the size of the saved data in order to prevent the temperature value from exceeding a threshold temperature value. Similarly, the transfer rate of the data cannot be dynamically adjusted in order to maintain the temperature of the system within a certain range or below a certain threshold. A full save or restore of data may cause the temperature of the controller and/or non-volatile memory to increase beyond user or server system limits.

Aspects of the present disclosure address the above and other deficiencies by adjusting the size of the transferred data or the transfer speed of the data to control the temperature of the controller (e.g., the memory subsystem controller) and/or the non-volatile memory. Advantages of the present disclosure include dynamically controlling these temperatures by adjusting characteristics of the data transfer, such as the data size or the data transfer speed. Embodiments described herein include a temperature component that resides on the memory subsystem or memory subsystem controller to make it possible to perform operations (e.g., data size and/or data transfer speed adjustments) to adjust the temperature of the controller and/or the temperature of the volatile memory device. For example, because a temperature component may be provided that resides on the memory subsystem and/or the memory subsystem controller, embodiments described herein may allow the temperature component to monitor the temperature of the controller and adjust the data size (e.g., for saving data to non-volatile memory) in order to reduce the temperature of the controller.
Similarly, for example, embodiments described herein may allow a temperature component to monitor the temperature of the controller and adjust the speed of the data transfer in order to reduce the temperature of the controller. In addition, as described below, the temperature of the non-volatile memory and the temperature of the system (e.g., a field programmable gate array (FPGA)) can be continuously monitored, and the monitored values and maximum values can be stored in registers of the FPGA.

FIG. 1 illustrates an example computing system 100 including a memory subsystem 110 in accordance with some embodiments of the present disclosure. Memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination thereof.

Memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and various types of non-volatile dual in-line memory modules (NVDIMMs). Examples of storage devices include solid state drives (SSDs), flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, secure digital (SD) cards, and hard disk drives (HDDs).

Computing system 100 may be a computing device such as a desktop computer, laptop computer, web server, mobile device, vehicle (e.g., airplane, drone, train, car, or other vehicle), Internet of Things (IoT) enabled device, embedded computer (e.g., a computer included in a vehicle, industrial equipment, or a networked commercial device), or any such computing device that includes memory and a processing device.

Computing system 100 may include a host system 120 coupled to one or more memory subsystems 110. In some embodiments, host system 120 is coupled to different types of memory subsystems 110. FIG.
1 illustrates one example of a host system 120 coupled to a memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which may be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical connections, optical connections, magnetic connections, and the like.

Host system 120 may include a processor chipset and a software stack executed by the processor chipset. The processor chipset may include one or more cores, one or more caches, a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a PCIe controller, a SATA controller). Host system 120 uses memory subsystem 110, for example, to write data to memory subsystem 110 and read data from memory subsystem 110.

Host system 120 may be coupled to memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a Double Data Rate (DDR) memory bus, Small Computer System Interface (SCSI), a Dual In-line Memory Module (DIMM) interface (e.g., a DIMM socket interface that supports Double Data Rate (DDR)), and the like. The physical host interface may be used to transfer data between host system 120 and memory subsystem 110. When memory subsystem 110 is coupled with host system 120 through a PCIe interface, host system 120 may further utilize an NVM Express (NVMe) interface to access components (e.g., memory device 130). The physical host interface may provide an interface for passing control, address, data, and other signals between memory subsystem 110 and host system 120. FIG. 1 illustrates memory subsystem 110 as an example.
In general, host system 120 may access multiple memory subsystems via the same communication connection, multiple separate communication connections, and/or a combination of communication connections.

The memory devices 130, 140 may include any combination of different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) may be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

Some examples of non-volatile memory devices, such as memory device 130, include NAND-type flash memory and write-in-place memory, such as a three-dimensional cross-point ("3D cross-point") memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).

Each of memory devices 130 may include one or more arrays of memory cells. One type of memory cell, for example, a single level cell (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple-level cells (TLCs), and quad-level cells (QLCs), can store multiple bits per cell. In some embodiments, each of memory devices 130 may include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such arrays. In some embodiments, a particular memory device may include an SLC portion, and an MLC portion, a TLC portion, or a QLC portion of memory cells.
The memory cells of memory device 130 may be grouped into pages, which may refer to logical units of the memory device used to store data. With some types of memory (e.g., NAND), pages may be grouped to form blocks.

Although non-volatile memory components such as 3D cross-point arrays of non-volatile memory cells and NAND-type flash memory (e.g., 2D NAND, 3D NAND) are described, memory device 130 may be based on any other type of non-volatile memory or storage device, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magnetic random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM).

Memory subsystem controller 115 (controller 115, for simplicity) may communicate with memory device 130 to perform operations such as reading data, writing data, or erasing data at memory device 130 and other such operations. Memory subsystem controller 115 may include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware may include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. Memory subsystem controller 115 may be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

Memory subsystem controller 115 may include a processor 117 (processing device) configured to execute instructions stored in a local memory 119.
In the example shown, the local memory 119 of the memory subsystem controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between memory subsystem 110 and host system 120.

In some embodiments, local memory 119 may include memory registers storing memory pointers, fetched data, and the like. Local memory 119 may also include read-only memory (ROM) for storing microcode. Although the example memory subsystem 110 in FIG. 1 has been illustrated as including the memory subsystem controller 115, in another embodiment of the present disclosure, a memory subsystem 110 does not include a memory subsystem controller 115, and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, memory subsystem controller 115 may receive commands or operations from host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to memory device 130 and/or memory device 140. Memory subsystem controller 115 may be responsible for other operations associated with memory device 130, such as wear leveling operations, garbage collection operations, error detection and error correction code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., a physical block address). Memory subsystem controller 115 may further include host interface circuitry to communicate with host system 120 via the physical host interface.
The host interface circuitry may convert commands received from the host system into command instructions to access memory device 130 and/or memory device 140, and convert responses associated with memory device 130 and/or memory device 140 into information for host system 120.

Memory subsystem 110 may also include additional circuitry or components that are not illustrated. In some embodiments, memory subsystem 110 may include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that may receive an address from memory subsystem controller 115 and decode the address to access memory device 130 and/or memory device 140.

In some embodiments, memory device 130 includes a local media controller 135 that operates in conjunction with memory subsystem controller 115 to execute operations on one or more memory cells of memory device 130. An external controller (e.g., memory subsystem controller 115) may externally manage memory device 130 (e.g., perform media management operations on memory device 130). In some embodiments, memory device 130 is a managed memory device, which is a raw memory device combined with a local controller (e.g., local controller 135) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device.

Memory subsystem 110 includes a temperature component 113, which may be configured to schedule and/or perform operations to regulate temperature and communicate temperature data across various components, data paths, and/or interfaces of memory subsystem 110. Although not shown in FIG. 1 so as not to obscure the drawing, temperature component 113 may include various circuitry to facilitate performing the operations described herein.
For example, temperature component 113 may include dedicated circuitry in the form of ASICs, FPGAs, state machines and/or various components that may allow temperature component 113 to schedule and/or perform temperature operations and communicate results to memory subsystem 110 , data paths and/or other logic circuitry of the interface.The memory subsystem controller 115 includes a temperature component 113 that may be configured to schedule and/or perform operations to regulate the temperature on various components, data components, and/or interfaces of the memory subsystem 110 . For example, temperature component 113 may sense and/or monitor a temperature value indicative of the temperature of memory subsystem controller 115 . Although not shown in FIG. 1 so as not to obscure the drawing, temperature component 113 may include various circuitry to facilitate the ranking and assignment of sets of memory cells. For example, temperature component 113 may include dedicated circuitry in the form of ASICs, FPGAs, state machines, and/or may allow temperature component 113 to schedule and/or perform operations to regulate various components, data elements of memory subsystem 110 and/or other logic circuitry that interfaces and communicates the temperature values to various other components of the memory subsystem 110 . The temperature component 113 may sense and/or monitor multiple temperature values within a time period or at different time intervals, and feed back the temperature values as a feedback loop to dynamically monitor and adjust the temperature values.As described in more detail in conjunction with FIGS. 2 and 3 , temperature component 113 may be communicatively coupled to memory device 130 and may access memory device 130 , memory device 140 , an internal data path of memory subsystem 110 and/or an internal data path of memory subsystem 110 . 
interface to perform the operations described herein and/or communicate temperature value data to additional elements of the memory subsystem 110 . In some embodiments, the operations performed by temperature component 113 may be performed during an initialization or pre-initialization phase of data transfer within memory subsystem 110 and/or memory subsystem controller 115 . Accordingly, in some embodiments, the temperature component 113 may perform the operations described herein prior to data transfer in order to determine the size of the data to be transferred or the speed of the data transfer to be initially performed. During the initial data transfer, additional temperature values can be obtained, and the size of the transferred data or the data transfer speed can be adjusted in order to adjust the temperature values.The memory device 130 includes a temperature component 131 that may be configured to schedule and/or perform operations to regulate the temperature on the memory device 130 . For example, temperature component 131 may sense and/or monitor a temperature value indicative of the temperature of memory device 130 . Although not shown in FIG. 1 so as not to obscure the drawing, temperature component 131 may include various circuitry to facilitate ranking and assignment of sets of memory cells. For example, temperature component 131 may include dedicated circuitry in the form of ASICs, FPGAs, state machines, and/or may allow temperature component 131 to schedule and/or perform operations to regulate various components, data elements and and/or other logic circuitry that interfaces and communicates the temperature values to various other components of the memory subsystem 110 . 
Temperature component 131 may sense and/or monitor multiple temperature values within a time period or at different time intervals, and feed the temperature values back as a feedback loop to dynamically monitor and adjust the temperature values of memory device 130.

As described in more detail in conjunction with FIGS. 2 and 3, temperature component 131 may be communicatively coupled to memory subsystem controller 115 and may access memory subsystem controller 115, memory device 140, an internal data path of memory subsystem 110, and/or an interface of memory subsystem 110 to perform the operations described herein and/or communicate temperature value data to additional elements of memory subsystem 110. In some embodiments, the operations performed by temperature component 131 may be performed during an initialization or pre-initialization phase of a data transfer to or from memory device 130 and/or within memory subsystem 110. Accordingly, in some embodiments, temperature component 131 may perform the operations described herein prior to a data transfer in order to determine the size of the data to be transferred or the speed of the data transfer to be performed initially. During the initial data transfer, additional temperature values can be obtained, and the size of the transferred data or the data transfer speed can be adjusted in order to regulate the temperature values.

Data generated by temperature component 113 or 131 may be injected into the data path between memory subsystem controller 115 and memory device 140 or memory device 130. The data may be a number of bits corresponding to a particular bit pattern. For example, the data may be an Altera PHY Interface (AFI) bit pattern, a user control bit pattern, a bidirectional (DQ) pin control data pattern, or any other suitable bit pattern that may be written to and read from the first memory device (or a memory device other than the first memory device).
In some embodiments, the data may contain a particular recurring set of alphanumeric characters, such as a string of alternating ones and zeros, or a certain number of ones (or zeros) followed by a certain number of zeros (or ones). It will be appreciated that embodiments are not limited to these enumerated examples, and that the data may include any pattern of bits and/or data that may be written to and read from one of the memory devices.

The temperature component 113 may be configured to cause data to be injected into the data path such that the data is written to the second memory device using a particular data size or a particular data transfer speed based on the temperature value determined by temperature component 113. Similarly, temperature component 131 may be configured to cause data to be injected into the data path using a particular data size or a particular data transfer speed based on the temperature value determined by temperature component 131. As described above, data may be written to the second memory device as part of an operation to save data to the second memory device, or as part of an operation to perform a partial save using the second memory device.

In some embodiments, temperature component 113 may generate a first indication in response to a determination that the temperature value of memory subsystem controller 115 is approaching a threshold temperature value. The first indication may indicate that the size of the data being transferred or the data transfer speed should be reduced in order to maintain or reduce the temperature value. The reduction may be based on how close the temperature value is to the threshold temperature value, or on the margin between the determined temperature value and the threshold temperature value. Similarly, a second indication to maintain the data size or data transfer speed may be generated.
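The recurring bit patterns mentioned above (a string of alternating ones and zeros, or a run of ones followed by a run of zeros) can be generated with short helpers. This is an illustrative sketch only; the AFI and DQ pin control patterns are interface-specific and are not reproduced here, and the function names are hypothetical.

```python
def alternating_pattern(n_bits, start=1):
    """Return a string of n_bits alternating ones and zeros,
    beginning with `start` (1 or 0)."""
    return "".join(str((start + i) % 2) for i in range(n_bits))

def run_pattern(n_ones, n_zeros):
    """Return a pattern of n_ones ones followed by n_zeros zeros."""
    return "1" * n_ones + "0" * n_zeros
```

Either pattern could then be written to a memory device and read back as part of the data injection described above.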
Additionally, a third indication to increase the data size or data transfer speed may be generated. The third indication may be generated in response to the temperature value being below the threshold temperature value by a certain margin. In this example, the temperature value may be allowed to increase by increasing the data size or increasing the data transfer speed in order to increase data transfer efficiency, reduce the amount of time it takes to transfer the data, and the like.

FIG. 2 illustrates an example of a memory subsystem controller 215 and a temperature component 213 in accordance with some embodiments of the present disclosure. Memory subsystem controller 215 may be similar to memory subsystem controller 115 shown in FIG. 1, and temperature component 213 may be similar to temperature component 113 shown in FIG. 1. Furthermore, processor 217 may be similar to processor 117 shown in FIG. 1, memory device 230 may be similar to memory device 130 shown in FIG. 1, and memory device 240 may be similar to memory device 140 shown in FIG. 1. In addition to temperature component 213, processor 217, memory device 230, and memory device 240, memory subsystem controller 215 may include a clock component 218, a system interconnect 212, volatile memory control infrastructure 214, which may include a volatile memory controller 219, and non-volatile memory control infrastructure 216.

Clock component 218 may provide timing signals to memory subsystem controller 215 to facilitate execution of memory operations scheduled by memory subsystem controller 215. In some embodiments, clock component 218 may be a register clock driver that may be configured to buffer and/or re-drive commands and/or addresses to memory device 230 and/or memory device 240 during operation of memory subsystem controller 215.

System interconnect 212 may be a communication subsystem that may allow commands, signals, instructions, etc.
to pass between processor 217, clock component 218, volatile memory control infrastructure 214, and non-volatile memory control infrastructure 216. System interconnect 212 may be a crossbar switch ("XBAR"), a network-on-chip, or another communication subsystem that provides interconnection and interoperability between processor 217, clock component 218, volatile memory control infrastructure 214, and non-volatile memory control infrastructure 216. For example, system interconnect 212 may provide visibility between processor 217, clock component 218, volatile memory control infrastructure 214, and non-volatile memory control infrastructure 216 to facilitate communication therebetween. In some embodiments, communication between processor 217, clock component 218, volatile memory control infrastructure 214, and non-volatile memory control infrastructure 216 via system interconnect 212 may be provided via respective data paths (shown by arrows) that connect system interconnect 212 to the other components of memory subsystem controller 215. These data paths may be used to share indications or commands to increase or reduce the data transfer size or data transfer speed in response to changing temperature values obtained by temperature component 213 (corresponding to the temperature of memory subsystem controller 215) or obtained by temperature component 231 of memory device 230 (indicating the temperature value of memory device 230).

Volatile memory control infrastructure 214 may include circuitry to control data transfers between memory device 240 and a host, such as host system 120 shown in FIG. 1. For example, volatile memory control infrastructure 214 may include various interfaces, direct media access components, registers, and/or buffers.

In the embodiment illustrated in FIG. 2, temperature component 213 resides on memory subsystem controller 215 and temperature component 231 resides on memory device 230.
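The first, second, and third indications described earlier (reduce, maintain, or increase the data size or transfer speed), which may be shared over data paths such as these, can be sketched as a simple decision rule. The function name and the two margins below are hypothetical tunable parameters for illustration, not values from the disclosure.

```python
def transfer_indication(temp, threshold, approach_margin=5.0, headroom_margin=15.0):
    """Map a sensed temperature value to one of three hypothetical indications.

    - 'reduce'   : temperature is approaching (within approach_margin of) the threshold
    - 'increase' : temperature is below the threshold by at least headroom_margin
    - 'maintain' : otherwise, keep the current data size / transfer speed
    """
    if temp >= threshold - approach_margin:
        return "reduce"      # first indication: shrink the data size or slow the transfer
    if temp <= threshold - headroom_margin:
        return "increase"    # third indication: headroom available, transfer more/faster
    return "maintain"        # second indication
```

For a threshold of 85 degrees, a reading of 83 would yield the first indication, 75 the second, and 60 the third.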
As described above, the temperature components 213, 231 may be configured to facilitate execution of operations by memory subsystem controller 215 and/or the memory subsystem in which memory subsystem controller 215 is deployed (e.g., memory subsystem 110 shown in FIG. 1). For example, temperature components 213, 231 may be configured to adjust the size of the data to be transferred or the speed of the data transfer for a particular data transfer (e.g., for performing a full or partial restore or save).

The temperature components 213, 231 may be further configured to affect timing information from clock component 218 (e.g., generate an indication to increase or decrease the clock cycle signal associated with the data transfer frequency), and to perform operations to compare the frequency of the timing signal generated by the clock component with an expected timing signal and, based on the comparison, determine whether the frequency of the timing signal and the expected timing signal are substantially equivalent. As used herein, the term "substantially" means that a characteristic need not be absolute, but rather close enough to achieve the advantage of the characteristic. For example, "substantially equivalent" is not limited to absolute equivalence, and may include minor variations of equivalence attributable to manufacturing limitations and/or operating characteristics of memory subsystem controller 215.

Non-volatile memory control infrastructure 216 may include circuitry to control data transfers between memory device 230 and a host, such as host system 120 shown in FIG. 1. For example, non-volatile memory control infrastructure 216 may include various interfaces, direct media access components, registers, and/or buffers.

FIG. 3 illustrates another example of a memory subsystem controller 315 and a temperature component 313 in accordance with some embodiments of the present disclosure.
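The "substantially equivalent" comparison of timing-signal frequencies described above can be sketched as a relative-tolerance check. The 1% default tolerance is an assumed figure standing in for manufacturing limitations and operating variation; it is not a value from the disclosure.

```python
def substantially_equivalent(measured_hz, expected_hz, rel_tol=0.01):
    """Return True if the measured frequency is within rel_tol
    (a fraction of the expected value) of the expected frequency."""
    return abs(measured_hz - expected_hz) <= rel_tol * expected_hz
```

A measured 995 kHz signal would be substantially equivalent to an expected 1 MHz signal under this assumed tolerance, while a 900 kHz signal would not.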
Memory subsystem controller 315 may be similar to memory subsystem controller 215 shown in FIG. 2, and temperature component 313 may be similar to temperature component 213 shown in FIG. 2. Additionally, processor 317, system interconnect 312, volatile memory control infrastructure 314, volatile memory controller 319, and non-volatile memory control infrastructure 316 may be similar to processor 217, system interconnect 212, volatile memory control infrastructure 214, volatile memory controller 219, and non-volatile memory control infrastructure 216 shown in FIG. 2.

As shown in FIG. 3, the memory subsystem controller may further include a memory subsystem core 342, which may include processor 317 and a data and/or instruction cache 344. Additionally, volatile memory control infrastructure 314 may include a volatile memory interface 347 and volatile memory controller 319, which may include temperature component 313. Additionally, non-volatile memory control infrastructure 316 may include a non-volatile memory direct memory access (DMA) component 348 and a non-volatile memory controller 349.

Memory subsystem core 342 may be coupled to system interconnect 312 via data paths 336, which may allow commands, signals, data, and other information to be transferred between memory subsystem core 342, volatile memory control infrastructure 314, and non-volatile memory control infrastructure 316. Memory subsystem core 342 may be a reduced instruction set computing (RISC) device, such as a RISC-V device.
In some embodiments, memory subsystem core 342 may be a MicroBlaze soft processor core or another suitable processing core.

Volatile memory control infrastructure 314 may include volatile memory controller 319, which may include temperature component 313, and/or volatile memory interface 347.

FIGS. 4A-5 each illustrate a flow diagram corresponding to methods 450, 460, 570 for performing memory subsystem operations to regulate temperature in accordance with some embodiments of the present disclosure. Each respective method 450, 460, 570 may be performed by processing logic that may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, each method 450, 460, 570 is performed by temperature component 113 or temperature component 131 of FIG. 1, temperature component 213 or temperature component 231 of FIG. 2, and/or temperature component 313 of FIG. 3. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes may be modified. Thus, it is to be understood that the illustrated embodiments are examples only, that the illustrated processes may be performed in a different order, and that some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

FIG. 4A illustrates a flowchart 450 corresponding to memory subsystem operations for regulating temperature, according to some embodiments of the present disclosure. At operation 451, a temperature component (e.g., temperature components 113, 131, 213, 231, 313, 331) may perform an initial temperature reading.
The initial temperature reading may indicate the temperature of a controller (e.g., memory subsystem controller 115, 215, 315) obtained by a first temperature component (e.g., temperature component 113), or may indicate the temperature of a non-volatile memory device (e.g., memory device 130, 230) obtained by a second temperature component (e.g., temperature component 131). At operation 452, a data transfer rate may be set. In some embodiments, the data transfer rate may be set based on the initial temperature reading of the controller and/or the non-volatile memory device. In some embodiments, the data transfer rate may be set based on a predetermined initial data transfer rate.

At operation 453, the data transfer may begin. For example, data may be transferred from a volatile memory device (e.g., memory device 140, 240) to a non-volatile memory device (e.g., memory device 130, 230). The data transfer can be a save operation or a partial save operation. The data transfer may alternatively be a restore operation to restore data from the non-volatile memory device to the volatile memory device.

At operation 454, the temperature value of the controller and/or the non-volatile memory device may be checked by its corresponding temperature component and a subsequent temperature value may be obtained. At operation 455, the data transfer rate may be reset based on the checked temperature value. For example, starting the data transfer may cause the temperature value of the controller to increase and move closer to the threshold temperature value. In response to this increase, the data transfer rate can be decreased in order to decrease the temperature value of the controller.
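Operations 451 through 455 above, iterated until the transfer completes, can be sketched as a simple rate controller. The thermal model (temperature rising in proportion to the current rate and relaxing toward ambient), the halving back-off, and all parameter values are invented for illustration; in a real system the temperature values would come from the temperature components.

```python
def transfer_with_rate_feedback(total_bytes, threshold, initial_rate,
                                ambient=40.0, min_rate=1):
    """Simulate a save operation whose transfer rate is reset each iteration
    based on a checked temperature value (a sketch of operations 451-455).

    The thermal model below is a toy: temperature rises with the current
    rate and relaxes toward ambient between checks."""
    temp = ambient                      # operation 451: initial temperature reading
    rate = initial_rate                 # operation 452: set the data transfer rate
    transferred = 0
    history = []
    while transferred < total_bytes:    # operation 453: data transfer in progress
        transferred += rate
        temp = ambient + 0.8 * (temp - ambient) + 0.05 * rate  # toy thermal model
        # operations 454-455: check the temperature, reset the rate accordingly
        if temp >= threshold:
            rate = max(min_rate, rate // 2)   # back off to let the part cool
        history.append((transferred, rate, temp))
    return temp, history                 # transfer complete
```

Running this with an aggressive initial rate shows the rate being halved whenever the toy temperature crosses the threshold, after which the temperature settles back below it while the transfer still finishes.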
Similarly, the temperature value of the non-volatile memory device may increase, and the data transfer rate can be decreased in order to decrease the temperature value corresponding to the non-volatile memory device.

As indicated by arrow 457, a feedback loop or mechanism may be used to dynamically adjust the transfer rate as data is transferred, by resetting the data transfer rate and checking subsequent temperature values of the controller or the non-volatile memory device in an iterative or repeated process until the data transfer is completed. In this way, the temperature value may be maintained below the threshold temperature value or within a range of temperature values. At operation 456, the data transfer may end or complete. In some embodiments, temperature value sensing or monitoring by the temperature component may be suspended until a subsequent data transfer is requested.

In some embodiments, the temperature values of the controller and the non-volatile memory device can be used as a feedback mechanism to throttle and limit data save/restore operations that pass data with a dynamic data transfer rate from the volatile memory device to the controller and then to the non-volatile memory device. In one example, the data transfer rate can be dynamically adjusted by increasing the delay between smaller units of data (e.g., pages, blocks, banks, DRAM, etc.) or by reducing the communication speed between the controller and the volatile memory device, or between the controller and the non-volatile memory device. In this way, a save/restore of the full data set to non-volatile memory can still be performed, the host can remain unaware of any changes to the data transfer speed, and memory space need not be a concern. Because this may result in a variable operating time, the approach is suited to non-power-failure operations, where operating time is less critical. FIG.
4B illustrates a flowchart 460 of memory subsystem operations corresponding to regulating temperature in accordance with some embodiments of the present disclosure. At operation 461, a temperature component (e.g., temperature components 113, 131, 213, 231, 313, 331) may perform an initial temperature reading. The initial temperature reading may indicate the temperature of a controller (e.g., memory subsystem controller 115, 215, 315) obtained by a first temperature component (e.g., temperature component 113), or may indicate the temperature of the non-volatile memory device (e.g., memory devices 130, 230) obtained by a second temperature component (e.g., temperature component 131). At operation 462, the amount of data to be transferred may be set. In some embodiments, the amount or size of data may be set based on the initial temperature reading of the controller and/or the non-volatile memory device. In some embodiments, the data amount may be set based on a predetermined initial data transfer amount.

At operation 463, data transfer may begin. For example, data may be transferred from a volatile memory device (e.g., memory device 140, 240) to a non-volatile memory device (e.g., memory device 130, 230). The data transfer can be a save operation or a partial save operation. The data transfer may include a restore operation to restore data from the non-volatile memory device to the volatile memory device.

At operation 464, the temperature value of the controller and/or the non-volatile memory device may be checked by its corresponding temperature component and a subsequent temperature value may be obtained. At operation 465, the data transfer amount may be reset based on the checked temperature value. For example, starting a data transfer may cause the temperature value of the controller to increase and approach the threshold temperature value.
In response to this increase, the amount of data transferred can be decreased in order to decrease the temperature value of the controller. Similarly, the temperature value of the non-volatile memory device can increase, and the amount of data transferred can be decreased in order to decrease the temperature value corresponding to the non-volatile memory device.

As indicated by arrow 467, a feedback loop or mechanism may be used to dynamically adjust the amount of data transferred as data is transferred, by resetting the data transfer amount and checking subsequent temperature values for the controller or the non-volatile memory device in an iterative or repeated process until the data transfer is complete. In this way, the temperature value may be maintained below the threshold temperature value or within a range of temperature values. At operation 466, the data transfer may end or complete. In some embodiments, temperature value sensing or monitoring by the temperature component may be suspended until a subsequent data transfer is requested.

In some embodiments, the temperature values of the controller and the non-volatile memory device can be used as a feedback mechanism to throttle data save/restore operations with dynamic data transfer rates as data passes from the volatile memory device to the controller and then to the non-volatile memory device. In one example, the amount of data transferred may include a dynamic memory capacity for save/restore operations on the data. Instead of a static amount of data being transferred to the non-volatile memory device (e.g., all or part of the capacity of the volatile memory space), the amount of data can be dynamic, determined by the controller's current temperature value, the non-volatile memory device's current temperature value, and/or user input such as a minimum value, a maximum value, the beginning of the memory space to save, or the end of the memory space to save.
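The temperature-driven feedback loops described above for FIGS. 4A and 4B can be sketched in Python. This is a minimal illustrative sketch, not the disclosed implementation; all names (`regulate_transfer`, `read_temps`, `rate_steps`) and the hysteresis margin are assumptions introduced here for illustration only.

```python
# Illustrative sketch of the feedback loop of FIGS. 4A/4B: each chunk
# transfer re-checks the controller/NVM temperatures and steps the data
# transfer rate down (or back up) to hold the temperature near or below
# the threshold. All names and the margin are assumptions, not from the
# disclosure.

def regulate_transfer(chunks, read_temps, threshold, rate_steps, margin=5):
    rate_idx = len(rate_steps) - 1          # start at the fastest rate
    log = []
    for chunk in chunks:
        ctrl_t, nvm_t = read_temps()        # operations 454/464: check temps
        hottest = max(ctrl_t, nvm_t)
        if hottest >= threshold and rate_idx > 0:
            rate_idx -= 1                   # operations 455/465: throttle down
        elif hottest < threshold - margin and rate_idx < len(rate_steps) - 1:
            rate_idx += 1                   # cooled off: restore speed
        log.append((chunk, rate_steps[rate_idx]))
    return log                              # operation 456/466: complete
```

The same structure would cover the data-amount loop of FIG. 4B by stepping through chunk sizes rather than transfer rates.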
In this way, a predictable and user-determined maximum operating time can be achieved. This method can be used when the host is only interested in performing partial save/restore operations and a minimum size is set, or when the host implements a priority memory space.

In some embodiments, a combination of adjusting both the data transfer rate and the amount of data transferred may be used in order to dynamically affect the temperature of both the controller and the non-volatile memory device. For example, the data transfer rate can be reduced while also reducing the amount of data to be transferred in order to reduce the temperature value of at least one of the controller and the non-volatile memory device, and vice versa.

FIG. 5 is a flowchart corresponding to a method 570 for performing memory subsystem operations to regulate temperature, according to some embodiments of the present disclosure. The method 570 may be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. At operation 571, method 570 may include determining, by a first component of the memory subsystem controller, a first temperature value of the memory subsystem controller. The component may be similar to temperature component 113 of FIG. 1, temperature component 213 of FIG. 2, and/or temperature component 313 of FIG. 3, and the memory subsystem controller may be similar to memory subsystem controller 115 of FIG. 1, memory subsystem controller 215 of FIG. 2, and/or memory subsystem controller 315 of FIG. 3.
As described above, temperature values may be sensed and/or monitored by the components, allowing the components to adjust data parameters.

At operation 573, method 570 may include determining, by a second component of the non-volatile memory device, a second temperature value of the non-volatile memory device coupled to the memory subsystem controller. The memory device may be similar to memory device 130 and/or memory device 140 of FIG. 1, and/or memory device 230 and/or memory device 240 of FIG. 2.

At operation 575, method 570 may include modifying a data parameter in response to at least one of the first temperature value and the second temperature value exceeding a threshold temperature value. The data parameters may include the size of the data to be transferred, the speed of data transfer, and the like. As described above, data may be written to a memory device as part of a save operation performed by a memory subsystem, such as memory subsystem 110 shown in FIG. 1. In some embodiments, data paths between memory components, which may be coupled to a memory subsystem controller, may be reserved for dynamically switching between a host and a non-volatile memory device before a data transfer occurs or while data is being transferred between them, as described above.

In some embodiments, the memory device can be a volatile memory device, and a memory device different from the memory device can be a non-volatile memory device (or vice versa).
For example, the memory device may be a system memory device, such as a DRAM (e.g., dual-port RAM) memory device, and the memory device other than the memory device may be a storage device, such as a NAND memory device, a three-dimensional cross-point memory device, or another non-volatile memory device.

Method 570 may further include modifying data transfer speeds associated with transferring data from the volatile memory device to the memory subsystem controller and from the controller to the non-volatile memory device. The volatile memory device may be coupled to the memory subsystem controller. Method 570 may further include increasing a delay between transmitted portions of data. The transmitted data may be a complete set of data or may be a partial set of data. The data parameters can be modified by limiting the amount of data transferred to the non-volatile memory device. The amount of data may be limited based on one of the first temperature value and the second temperature value. The amount of data can also be limited based on one of a number of user inputs. The user inputs may include a minimum value, a maximum value, the beginning of the memory space to save, and the end of the memory space to save. Method 570 may further include determining a second temperature value for the NAND memory device.

FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, computer system 600 may correspond to a host system (e.g., host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1), or may be used to perform the operations of a controller (e.g., to execute an operating system to perform operations associated with FIG. 1).
In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a network appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is described, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630.

Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets.
Processing device 602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. Processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. Computer system 600 can further include a network interface device 608 to communicate over a network 620. In some embodiments, main memory 604 or data storage system 618 may be, for example, an NVDIMM as described in association with FIGS. 2-3.

Data storage system 618 can include a machine-readable storage medium 624 (also known as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. Instructions 626 can also reside, completely or at least partially, within main memory 604 and/or within processing device 602 during execution thereof by computer system 600, main memory 604 and processing device 602 also constituting machine-readable storage media. Machine-readable storage medium 624, data storage system 618, and/or main memory 604 can correspond to memory subsystem 110 of FIG. 1.

In one embodiment, instructions 626 include instructions to implement functionality corresponding to a temperature component (e.g., temperature component 113 of FIG. 1). While machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
Accordingly, the term "machine-readable storage medium" shall be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the actions and processes of a computer system, or a similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the described methods. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium such as a read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices, etc.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof.
It will be apparent that various modifications may be made to the present disclosure without departing from the broader spirit and scope of the embodiments of the disclosure as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Methods are described for characterizing floating body delay effects in SOI wafers, comprising providing a pulse edge to a floating body chain and a tied body chain in the wafer, storing tied body chain data according to one or more of the floating body devices, and characterizing the floating body delay effects according to the stored tied body chain data. Test apparatus are also described, comprising a floating body chain including a plurality of series connected floating body inverters or NAND gates fabricated in the wafer, and a tied body chain comprising a plurality of series connected tied body devices in the wafer. Storage devices are coupled with the tied body devices and with one or more of the floating body devices, and operate to store tied body chain data from the tied body devices according to one or more signals from the floating body chain.
1. Test apparatus for use in characterizing floating body delay effects in an SOI wafer, comprising: a floating body chain comprising a plurality of floating body devices fabricated in series with one another in the wafer; a tied body chain comprising a plurality of tied body devices fabricated in series with one another in the wafer; and a plurality of storage devices individually coupled with the plurality of tied body devices and with at least one of the floating body devices, the plurality of storage devices being adapted to store tied body chain data from the plurality of tied body devices according to the at least one of the floating body devices. 2. The apparatus of claim 1, wherein the floating body chain comprises one of a plurality of series coupled floating body inverter devices and a plurality of series coupled floating body NAND gate devices, wherein the floating body devices individually comprise floating body MOS transistors fabricated in the wafer. 3. The apparatus of claim 2, wherein the tied body chain comprises a plurality of series coupled tied body inverter devices and wherein the tied body inverter devices individually comprise tied body MOS transistors fabricated in the wafer. 4. The apparatus of claim 3, wherein the floating body chain comprises a plurality of series coupled floating body inverter devices. 5. The apparatus of claim 2, wherein the floating body chain comprises a plurality of series coupled floating body inverter devices. 6. The apparatus of claim 1, further comprising a ring oscillator circuit operatively coupled with the tied body chain and adapted to selectively couple an input of a first tied body device with an output of a last tied body device in the tied body chain such that the tied body devices are series connected in a ring, wherein the tied body chain operates as an oscillator when the ring oscillator circuit couples the first and last tied body devices. 7.
The apparatus of claim 6, wherein the ring oscillator circuit comprises a frequency counter receiving an output of one of the tied body devices in the tied body chain and a buffer receiving a divided count of transitions on the output of the tied body device from the frequency counter.8. The apparatus of claim 1:wherein the floating body chain comprises: a first floating body device having an input for receiving an input pulse edge from a pulse generator; a last floating body device; and a plurality of intermediate floating body devices serially connected between the first and last floating body devices; wherein the plurality of storage devices comprises: a first set of storage devices individually coupled with odd numbered ones of the plurality of tied body devices and adapted to store tied body chain data from the odd numbered ones of the plurality of tied body devices according to a first control signal from a first one of the intermediate floating body devices in the floating body chain; and a second set of storage devices individually coupled with even numbered ones of the plurality of tied body devices and adapted to store tied body chain data from the even numbered ones of the plurality of tied body devices according to a second control signal from a second one of the intermediate floating body devices in the floating body chain; and wherein the first one of the intermediate floating body devices is nearer to the first floating body device than is the second one of the intermediate floating body devices in the floating body chain. 9. 
The apparatus of claim 8, comprising: a data interface formed in the wafer and being coupled with the plurality of storage devices, and operable to receive the tied body chain data from the plurality of storage devices and to provide access to the tied body chain data to an external test device; a pulse input pad formed in the wafer and being coupled with an input of a first one of the plurality of floating body devices and coupled with an input of a first one of the plurality of tied body devices to provide a pulse edge from a pulse generator to the first ones of the floating body devices and the tied body devices; and a pair of power connection pads coupled with power terminals of the devices in the wafer, and operable to provide electrical power to the devices in the wafer from an external power source. 10. The apparatus of claim 8, wherein the floating body chain comprises one of a plurality of series coupled floating body inverter devices and a plurality of series coupled floating body NAND gate devices, wherein the floating body devices individually comprise floating body MOS transistors fabricated in the wafer. 11. The apparatus of claim 10, wherein the tied body chain comprises a plurality of series coupled tied body inverter devices and wherein the tied body inverter devices individually comprise tied body MOS transistors fabricated in the wafer. 12.
A test system for characterizing floating body delay effects in an SOI wafer, comprising: a floating body chain comprising a plurality of floating body devices fabricated in series with one another in the wafer; a tied body chain comprising a plurality of tied body devices fabricated in series with one another in the wafer; a plurality of storage devices individually coupled with the plurality of tied body devices and with at least one of the floating body devices, the plurality of storage devices being adapted to store tied body chain data from the plurality of tied body devices according to at least one of the floating body devices in the floating body chain; and a tester comprising: a pulse generator coupleable to the floating body chain and to the tied body chain to provide a pulse edge to first devices in the floating body chain and in the tied body chain; a processor coupleable to the plurality of storage devices to receive stored tied body chain data therefrom; and a power source coupleable to power terminals of devices in the wafer to provide electrical power thereto; wherein the processor controls the pulse generator to selectively provide one or more pulse edges to the floating body and tied body chains and wherein the processor determines at least one floating body delay according to the tied body chain data from the plurality of storage devices. 13.
A method of fabricating an SOI wafer, comprising:providing a plurality of series connected floating body devices in the wafer to form a floating body chain; providing a plurality of series connected tied body devices in the wafer to form a tied body chain; providing at least one pulse input pad in the wafer, the pulse input pad being coupled with a first one of the floating body devices and with a first one of the tied body devices; providing a plurality of storage devices in the wafer, the storage devices being individually coupled with the tied body devices and with at least one of the floating body devices, wherein the plurality of storage devices store tied body chain data from the tied body devices according to the at least one of the series connected floating body devices in the floating body chain; and providing an interface coupled with the plurality of storage devices in the wafer to provide external access to the tied body chain data. 14. A test method for characterizing floating body delay effects in an SOI wafer, the method comprising:providing a pulse edge to a floating body chain comprising a plurality of series connected floating body devices in the SOI wafer and to a tied body chain comprising a plurality of series connected tied body devices in the SOI wafer; storing tied body chain data from the plurality of series connected tied body devices according to at least one of the floating body devices; and characterizing floating body delay effects in an SOI wafer according to stored tied body chain data from the plurality of series connected tied body devices. 15. The method of claim 14, wherein characterizing floating body delay effects in an SOI wafer comprises determining a floating body delay value according to stored tied body chain data.16. 
The method of claim 15, wherein storing the tied body chain data comprises:storing first tied body chain data according to a first floating body device in the floating body chain; and storing second tied body chain data according to a second floating body device in the floating body chain after storing the first tied body chain data. 17. The method of claim 16, wherein storing the first tied body chain data comprises storing data states from the tied body devices when the pulse edge propagates through the floating body chain to the first floating body device thereof, and wherein storing the second tied body chain data comprises storing data states from the tied body devices when the pulse edge propagates through the floating body chain to the second floating body device.18. The method of claim 17, wherein determining the floating body delay value according to stored tied body chain data comprises:determining a first value representing a number of tied body devices in the tied body chain to which the pulse edge has propagated in the first tied body chain data; determining a second value representing a number of tied body devices in the tied body chain to which the pulse edge has propagated in the second tied body chain data; and determining the floating body delay value according to the first and second values. 19. The method of claim 18, wherein storing the first tied body chain data comprises storing data states from odd numbered tied body devices in the tied body chain, and wherein storing the second tied body chain data comprises storing data states from even numbered tied body devices in the tied body chain.20. 
The method of claim 19, further comprising:coupling first and last tied body devices in the tied body chain to form a tied body chain ring oscillator; measuring a tied body device propagation delay value using the tied body chain ring oscillator; and decoupling the first and last tied body devices from one another in the tied body chain; wherein determining the floating body delay value comprises determining the floating body delay value according to the first and second values and according to the tied body device propagation delay value. 21. The method of claim 19, further comprising providing at least one preconditioning pulse to the floating body chain and to the tied body chain before providing the pulse edge.22. The method of claim 16, wherein storing the first tied body chain data comprises storing data states from odd numbered tied body devices in the tied body chain, and wherein storing the second tied body chain data comprises storing data states from even numbered tied body devices in the tied body chain.23. The method of claim 16, further comprising:coupling first and last tied body devices in the tied body chain to form a tied body chain ring oscillator; measuring a tied body device propagation delay value using the tied body chain ring oscillator; and decoupling the first and last tied body devices from one another in the tied body chain; wherein determining the floating body delay value comprises determining the floating body delay value according to stored tied body chain data and according to the tied body device propagation delay value. 24. The method of claim 16, further comprising providing at least one preconditioning pulse to the floating body chain and to the tied body chain before providing the pulse edge.25. The method of claim 14, further comprising providing at least one preconditioning pulse to the floating body chain and to the tied body chain before providing the pulse edge.
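Claims 6, 7, 20, and 23 calibrate the per-stage delay of the tied body chain with a ring oscillator: a ring of N inverting stages oscillates with a period equal to two traversals of the chain, so the per-stage delay follows directly from the counted frequency. A minimal sketch of that arithmetic (the function name and parameters are illustrative assumptions, not taken from the claims):

```python
def tied_body_stage_delay(osc_freq_hz, num_stages):
    """Per-stage propagation delay of an N-stage inverting ring oscillator.
    One oscillation period spans two traversals of the chain, i.e.
    T = 2 * N * t_stage, so t_stage = 1 / (2 * N * f)."""
    return 1.0 / (2.0 * num_stages * osc_freq_hz)
```

For example, a 50-stage tied body ring counted at 100 MHz would imply a per-stage delay of 100 ps under this model.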
FIELD OF INVENTION

The present invention relates generally to semiconductor device processing, and more particularly to apparatus and methods for determining or characterizing floating body effects such as hysteretic propagation delays in SOI devices.

BACKGROUND OF THE INVENTION

A continuing trend in the semiconductor manufacturing industry is toward smaller and faster transistor devices which consume less power. Toward that end, device scaling is a continuing design goal, wherein device feature sizes and spacings are reduced. However, performance limits are reached in technologies where scaled transistors and other electrical devices are formed directly in a wafer substrate, such as silicon. These are sometimes referred to as bulk devices. To surpass the performance limitations of bulk devices, recent scaling efforts have included the use of silicon over insulator (SOI) wafers, in which a silicon layer overlies an insulator layer above a silicon substrate. SOI wafers may be fabricated according to known SOI wafer manufacturing techniques, such as SIMOX, bond-and-etch-back, and smart-cut technology.

In SOI wafers, the active semiconductor regions of the wafer are formed in the silicon on top of the oxide insulator, whereby these active regions are electrically isolated from one another. This technique achieves certain design advantages, such as a significant reduction in the parasitic capacitances that exist in non-SOI (bulk) devices, as well as enhanced resistance to radiation damage. Partially depleted SOI devices are produced using one type of SOI process in which the transistors are formed in a deposited semiconductor layer which is thick enough that the channel region will not be fully depleted through its full thickness when the device is in operation.
The transistor design and operation in partially depleted SOI processes are similar to those of bulk CMOS devices.

Although SOI designs provide certain advantages over bulk designs, SOI devices suffer from certain effects related to the isolation of the active devices from the substrate material underlying the oxide layer, which are sometimes referred to as floating-substrate or floating body effects. In bulk transistors, the transistor body may be electrically connected through the substrate. In this case, the transistor body is at a relatively fixed potential, and consequently, the transistor threshold voltage is stable relative to the drain-to-source voltage. In many SOI transistors, however, the body (e.g., the undepleted silicon under the gate) is electrically floating with respect to the substrate because of the intervening oxide insulator layer. Thus, when sufficient drain-to-body bias is applied to the transistor, impact ionization can generate electron-hole pairs near the drain. These electron-hole pairs cause a voltage differential to build up between the body node and the source of the transistor, because the majority carriers travel to the body while the minority carriers travel to the drain. The resulting voltage differential lowers the effective threshold voltage, thereby increasing the drain current.

The isolated body creates capacitive coupling between the body and the gate, between the body and the source, and between the body and the drain, in addition to diode couplings between the body and the source and between the body and the drain. These effects bias the body, creating a variation in the transistor threshold voltage during switching which is dependent upon the current and past states of the transistor. During switching, these effects bias the body through two mechanisms: capacitive coupling between the body and the gate, source, and drain, as well as charging and discharging between the body and the source and drain through diode coupling.
This history dependent operation, sometimes referred to as hysteretic behavior, results from potentially large uncertainties in the floating body potential and, thus, uncertainties in the threshold voltage of devices due to unknown switching history.

These floating body effects can contribute to undesirable performance shifts in the transistor relative to design, as well as to increased instability of the transistor operating characteristics. In order to address these SOI floating body issues, some designs provide for electrical connection of the body or the source of an SOI transistor to the substrate. Transistors formed in this manner in an SOI wafer are sometimes referred to as tied body transistors. Although this technique serves to prevent body charging by creating a direct contact to the substrate, implementation of this approach complicates the device manufacturing process and also increases area overhead, because tied body devices consume a larger area than floating body devices. Thus, most SOI designs must take these floating body effects into account.

Because these and other floating body issues affect end-product device performance, monitoring the hysteretic behavior of SOI devices is needed to refine and monitor the SOI manufacturing process. Thus, it is desirable to measure floating body effects in wafers at various points in a manufacturing process flow. One measure of the veracity of an SOI process is the propagation delay in switching a floating body transistor from one state to another. The threshold voltage of such floating body devices is dependent upon the body potential. The body potential, in turn, is dependent upon the current and past states of the transistor (e.g., the voltages at the various terminals of the device). Thus, the propagation delays are often measured at various voltages with switching signals of varying amounts of preconditioning, to obtain a curve of average propagation delay vs.
time.

Typically, these measurements are obtained manually on a test bench, using oscilloscopes and high frequency probes to monitor floating body transistor switching delays under various conditions. Pulse generators are connected to the inputs of inverters or other floating body devices, which are formed of floating body MOS transistors, and the device outputs are monitored using the oscilloscope. Such testing is time consuming, and ill suited to testing every wafer in a high throughput production setting. Thus, there is a need for improved apparatus and methods for measuring hysteretic propagation delay in SOI devices, which are amenable to automation using readily available, inexpensive test equipment.

SUMMARY OF THE INVENTION

The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended neither to identify key or critical elements of the invention nor to delineate the scope of the invention. Rather, the primary purpose of this summary is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later. The invention relates to apparatus and methodologies which may be employed to facilitate automated wafer testing to characterize hysteretic propagation delay and other floating body effects in SOI devices.

According to one aspect of the invention, test apparatus is provided, which comprises a floating body chain including a plurality of series connected floating body devices, such as inverters or NAND gates fabricated in a silicon over insulator (SOI) wafer, and a reference delay chain comprising reference delay elements, such as tied body devices, connected in series in the wafer. The floating body devices comprise MOS transistors fabricated in the SOI wafer.
The reference delay elements may be devices whose delay properties are switching-history independent, such as tied body inverters fashioned from MOS transistors having body regions or source regions electrically tied to the substrate. Storage elements such as edge-triggered registers or level-sensitive latches are formed in the wafer and coupled with the reference delay elements and with one or more of the floating body devices, where the storage elements operate to store reference delay chain data from the reference delay elements according to one or more signals from the floating body chain.

The storage elements may be divided into groups associated with odd and even reference delay elements, wherein the first group stores first reference delay data values from the odd numbered reference delay elements according to a first signal from the floating body chain and the second group stores second reference delay data from the even numbered reference delay elements according to a second signal. A pulse edge is applied to the floating body and reference delay chains, wherein a first clocking signal to the first group of the storage elements is provided when the pulse edge propagates through the floating body chain to a first floating body device, and a second clocking signal to the second group of the storage elements is provided when the pulse edge propagates through the floating body chain to a second (e.g., downstream) floating body device. A test system, such as a PC-based tester, may then retrieve the first and second stored reference data, such as through a data interface in the wafer, and determine a first value representing a number of reference delay elements in the reference delay chain to which the pulse edge had propagated when the first group was clocked, as well as a second value representing a number of reference delay elements to which the pulse edge had propagated when the second group was clocked.
A floating body delay value may then be determined according to the first and second values.

If the reference delay elements are implemented using tied body devices, prior to or following the provision of the pulse edge and the tied body data loading, an odd number of the tied body devices may be selectively connected in a loop to operate as a ring oscillator for measuring a reference propagation delay value for use in evaluating the stored data. In one example, a frequency divider is provided, which receives an output from one of the tied body devices in the tied body chain ring oscillator, along with a buffer receiving a divided count of transitions on the output of the tied body device from the frequency divider. This allows measurement of a tied body device propagation delay value. The loop is then decoupled, wherein the test pulse edge may then be applied to the tied body and floating body chains. The floating body delay value may then be determined according to the first and second values and according to the tied body device propagation delay value.

Another aspect of the invention provides test systems for characterizing floating body delay effects in an SOI wafer. The system comprises a floating body chain and a tied body chain, as well as a plurality of latches coupled with the tied body chain and with the floating body chain, wherein the latches are adapted to latch tied body chain data according to at least one of the floating body devices. The system further provides a tester comprising a pulse generator coupleable to the floating body and tied body chains so as to provide a pulse edge to first devices thereof. A processor is provided, which is coupleable to the latches to receive latched tied body chain data therefrom, and a power source is provided to power the devices in the wafer.
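The per-stage reference delay recovered from such a ring-oscillator measurement reduces to simple arithmetic. The sketch below illustrates the calculation; the stage count, divider ratio, gate time, and counted value are illustrative assumptions, not values from this disclosure:

```python
# Sketch of the ring-oscillator arithmetic for the tied body reference
# delay. All numeric values below are hypothetical examples.

def stage_delay(count, gate_time_s, divider, num_stages):
    """Per-stage propagation delay of an inverter ring oscillator.

    count       -- transitions counted at the divider output over gate_time_s
    divider     -- frequency divider ratio ahead of the counter/buffer
    num_stages  -- odd number of inverters connected in the loop
    """
    # Undo the divider to get the oscillation frequency at the tap.
    f_osc = count * divider / gate_time_s
    # One oscillation period traverses the loop twice (once per edge
    # polarity), so T = 2 * num_stages * t_stage.
    return 1.0 / (2.0 * num_stages * f_osc)

# Hypothetical example: a 101-stage tied body loop, a divide-by-1024
# counter, and 483 counts accumulated over a 1 ms gate time.
t_tb = stage_delay(483, 1e-3, 1024, 101)  # about 10 ps per stage
```

Once the per-stage reference delay is known, the latched chain data can be converted from stage counts into time, as described below.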
The processor may control the pulse generator to selectively provide one or more pulse edges to the floating body and tied body chains, and may further determine at least one floating body delay value according to the tied body chain data from the latches.

Yet another aspect of the invention provides methods for fabricating an SOI wafer, comprising providing a plurality of series connected floating body devices in the wafer to form a floating body chain, providing a plurality of series connected tied body devices in the wafer to form a tied body chain, and providing a plurality of latches in the SOI wafer, where the latches are individually coupled with the tied body devices and with one or more of the floating body devices. The latches latch tied body chain data from the tied body devices according to the floating body device or devices in the floating body chain. The method further comprises providing one or more pulse input pads in the wafer, which are coupled with a first one of the floating body devices and with a first one of the tied body devices, as well as providing an interface coupled with the latches in the wafer to provide external access to the tied body chain data.

According to still another aspect of the invention, methods are provided for measuring or characterizing hysteretic propagation delay or other floating body delay effects in SOI devices. The methods comprise providing a pulse edge to a floating body chain and a tied body chain in an SOI wafer, storing tied body chain data according to one or more of the floating body devices, and characterizing the floating body delay effects, such as by determining one or more floating body delay values, according to the latched tied body chain data.
The tied body chain data storing may comprise storing first tied body chain data according to a first floating body device and storing second tied body chain data according to a second floating body device after storing the first tied body chain data.

In one implementation, first data states are latched from the tied body devices when the pulse edge propagates through the floating body chain to the first floating body device, and second data states are latched when the pulse edge propagates to the second floating body device, wherein the first states may represent odd numbered tied body device states and the second states may represent even numbered tied body device states. First and second values may be determined from the latched data, which represent the number of tied body devices to which the pulse edge has propagated in the chain at the points in time when the pulse edge reaches the first and second floating body devices, respectively. These values are then used to determine a floating body delay value.

The method may further comprise coupling first and last tied body devices in the tied body chain to form a tied body chain ring oscillator, measuring a tied body device propagation delay value using the tied body chain ring oscillator, and decoupling the first and last tied body devices from one another in the tied body chain. In this instance, the floating body delay value may be determined according to the first and second values and according to the tied body device propagation delay value.
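In this scheme, the floating body delay value follows from the two latched counts by simple arithmetic. The following is a minimal sketch; the function name, tap positions, and counts are hypothetical values for illustration:

```python
def floating_body_delay(n1, n2, t_tb, fb_stages_between):
    """Average per-stage floating body delay.

    n1, n2 -- numbers of tied body stages the pulse edge had reached
              when it arrived at the first and second floating body taps
    t_tb   -- per-stage tied body (reference) delay, e.g. from the
              ring oscillator measurement
    fb_stages_between -- floating body stages between the two taps
    """
    # Elapsed time between the two latch events, as measured by the
    # reference chain.
    elapsed = (n2 - n1) * t_tb
    return elapsed / fb_stages_between

# Hypothetical example: taps 100 floating body stages apart; the edge
# had reached tied body stage 30 at the first latch and stage 150 at
# the second, with a 10 ps reference stage delay.
d = floating_body_delay(30, 150, 10e-12, 100)  # -> 1.2e-11 s (12 ps)
```

Because both counts come from the same reference chain, fixed offsets (such as the delay of the clock control buffers) cancel in the subtraction.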
Alternatively, or in combination, the method may also comprise providing one or more preconditioning pulses to the floating body chain and to the tied body chain before providing the pulse edge, so as to provide an indication of the floating body propagation delay in the presence of hysteretic preconditioning.

To the accomplishment of the foregoing and related ends, the following description and annexed drawings set forth in detail certain illustrative aspects and implementations of the invention. These are indicative of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a partial side elevation view in section illustrating an exemplary floating body NMOS transistor fabricated in an SOI wafer;
FIG. 1B is a schematic diagram illustrating a circuit representation of various floating body electrical characteristics of the NMOS transistor of FIG. 1A;
FIG. 1C is a schematic diagram illustrating an inverter device formed using the floating body NMOS device of FIGS. 1A and 1B and a floating body PMOS transistor;
FIG. 1D is a schematic diagram illustrating an exemplary chain of floating body inverter devices;
FIG. 2 is a graph illustrating exemplary floating body propagation delay vs. time in SOI floating body transistor devices for pulse edges measured from rising edge to rising edge and from falling edge to falling edge;
FIG. 3 is a schematic diagram illustrating a conventional test system for measuring SOI floating body delay effects using an external high speed pulse generator and oscilloscope;
FIG.
4 is a schematic diagram illustrating an exemplary test system and test apparatus thereof for characterizing floating body delay effects in an SOI wafer in accordance with one or more aspects of the present invention;
FIG. 5 is a partial top plan view illustrating a portion of a wafer having scribe line regions between adjacent die areas, in which the test circuitry and apparatus of the present invention may be formed;
FIG. 6 is a schematic diagram illustrating another exemplary implementation of a test apparatus in accordance with the invention;
FIG. 7 is a schematic diagram illustrating yet another implementation of a test apparatus in accordance with the invention;
FIG. 8A is a schematic diagram illustrating still another exemplary implementation of a test apparatus in accordance with the invention;
FIG. 8B is a timing diagram illustrating one example of the operation of the test apparatus of FIG. 8A;
FIG. 9 is a schematic diagram illustrating another exemplary implementation of a test apparatus in accordance with the invention;
FIGS. 10A-10D are graphs illustrating waveforms for hi-lo and lo-hi pulse edges measured from rising edge to rising edge, and from falling edge to falling edge; and
FIG. 11 is a flow diagram illustrating an exemplary method of determining a floating body delay value in accordance with one or more aspects of the invention.

DETAILED DESCRIPTION OF THE INVENTION

One or more implementations of the present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout, and wherein the various structures are not necessarily drawn to scale. The present invention relates to methods and apparatus for characterizing floating body effects in SOI wafers, which may be implemented in PC-based testers and other types of automated test systems, and which may be employed to provide propagation delay information in a timely fashion.
The various aspects of the invention are hereinafter illustrated and described in the context of partially depleted SOI devices and processes. However, it will be appreciated that the invention is not limited to use in association with such devices, and that alternative implementations are possible within the scope of the invention, wherein floating body effects are determined in non-partially depleted SOI devices.

Referring initially to FIGS. 1A-1D, an exemplary floating body NMOS transistor 2 is illustrated as part of a floating body inverter device 4 fabricated in an SOI wafer 6, wherein it is noted that the structures illustrated herein are not necessarily drawn to scale. A cross section of the transistor 2 is illustrated in FIG. 1A. The SOI wafer 6 in this example includes a p doped silicon substrate 8, over which is formed an insulating layer of SiO2 10. Silicon overlies the oxide layer 10, in which isolation structures 12 are formed. A polysilicon gate G is formed over a thin gate dielectric 14, having spacers 16 formed along the sidewalls thereof. A source region S and a drain region D are implanted and diffused with n+ dopants in one or more implantation steps, and an upper portion of p doped silicon 18 overlies a p+ doped body region 20 of the silicon under the gate G. Optional halo implant regions 22 are provided with p+ type dopants to reduce hot-carrier effects in the transistor 2.

As discussed above, the floating body operation of the transistor 2 causes switching history dependent charging of the body region 20, and thus, hysteretic threshold voltage changes, depending upon the current and past states of the transistor 2. This is because the body region 20 is electrically isolated from the substrate 8 by the intervening oxide layer 10. FIG. 1B illustrates a schematic representation of the transistor 2, wherein the body region 20 is diode coupled to the source S and the drain D through diodes 30 and 32, respectively.
In addition, the body region 20 is capacitively coupled to the gate G, the source S, and the drain D via capacitances 34, 36, and 38, respectively. A current source I in FIG. 1B represents generated holes injected into the body 20 from the drain D. Without a body contact (e.g., to tie the body 20 to the substrate 8 in FIG. 1A), the floating body 20 is allowed to electrically attain a voltage potential through various charging/discharging mechanisms, such as impact ionization current, drain-induced barrier lowering (DIBL), junction active and/or leakage currents, etc., as well as through capacitive coupling from the gate G, the source S, and the drain D during switching.

An inverter circuit 4 in FIG. 1C includes the NMOS transistor 2, as well as a floating body PMOS transistor 40, to invert the signal level at an inverter input 42 to provide an output 44. In various cases of previous switching state history, the capacitive coupling via the capacitors 34, 36, and 38, the diode coupling via the diodes 30 and 32, and the current source I have different impacts on the floating body transistor 2 in the inverter device 4. In the case of a steady state DC condition, the input 42 is held constant for a relatively long period of time, wherein the capacitive coupling of the capacitors 34, 36, and 38 has essentially no effect. Assuming the generated current I is negligible, the body potential of the region 20 is determined by the DC solution of the body-to-drain and body-to-source diodes 32 and 30, respectively. For instance, it is assumed that a high logic level (HI) is 1.5V and a low (LO) level is 0V. If the input 42 is HI, the gate G of the NMOS transistor 2 is HI and the drain D and the source S are LO. In this situation, the body 20 is LO, wherein 0V is the DC solution of the two back-to-back connected diodes 30 and 32.

However, where the input 42 is LO, the drain D is HI and the gate G and the source S are LO.
Ignoring the generated current I, the body 20 is charged by the reverse-biased body-to-drain junction leakage through the diode 32 and discharged by the weakly forward-biased body-to-source junction current through the diode 30. The body potential at 20 thus stabilizes at a value between LO and HI, depending on device properties, supply voltage and temperature, for example, at about 0.25V. Due to the dependence of the threshold voltage of the transistor 2 on the bias at the body 20, the threshold voltage is lower for the same device with the gate G at LO (e.g., wherein higher body bias causes lower threshold voltage) than with the gate G at HI (e.g., lower body bias, higher threshold voltage). The converse is true for the PMOS transistor 40.

If a number of the devices 4 of FIG. 1C are cascaded to form an open chain 50 of floating body inverter devices, wherein the NMOS transistors N thereof are similar to the transistor 2 illustrated in FIGS. 1A-1C, and if the input 42 stays LO for a relatively long period of time, the transistors N1, P2, N3, etc. in the chain 50 have lower threshold voltages due to their bias condition, as compared to the devices P1, N2, P3, etc. For example, the NMOS transistor N1 has the drain at HI and the gate and source at LO, wherein the body potential (e.g., body 20 of FIGS. 1A and 1B) is about 0.25V. Therefore, the threshold voltages of N1, N3 . . . are lower than those of N2, N4 . . . Likewise, the threshold voltages of P2, P4 . . . are lower than those of P1, P3 . . . .

Consider next the case where a LO to HI or a HI to LO switch occurs at the input 42, starting from the DC state; for instance, the input 42 stays LO for a relatively long period of time and then goes through a LO/HI transition. During dynamic switching, capacitive coupling (e.g., via the capacitors 34, 36, and 38 of FIG. 1B) also has an effect. During the LO/HI transition, the body 20 is capacitively coupled up by the gate G through the capacitor 34 (FIG.
1B) and the forward-biased body-to-source junction diode 30 discharges the body 20. As the gate voltage ramps up beyond the threshold voltage, the body 20 is not coupled up any further because of an inversion layer formed under the gate G. At the same time, the drain voltage drops, thereby coupling down the body potential at 20. If no further switching takes place, the body 20 will eventually return to the DC equilibrium, although this may take as long as several milliseconds because generation through the two reverse-biased diodes 30 and 32 is slow. Similarly, during the HI/LO transition, the gate voltage G initially has no effect on the body potential 20 due to the inversion layer shielding the body 20. The drain voltage at D increases as the gate voltage G decreases, thereby coupling up the body potential at 20. As the gate voltage G ramps down below the threshold voltage, the body 20 is coupled down, which takes a certain amount of time if no further pulses are provided to switch the input.

For a LO/HI/LO or HI/LO/HI pulse starting from an initial DC state, for instance, if the input 42 stays LO for a relatively long period of time and then rises from the LO state, the rising edge arrives at the nth stage of the chain 50 with a delay equal to tpd1+tpu2+tpd3+ . . . +tpun = n*(tpd1+tpu2)/2, where tpd1, the pull-down delay of the first stage, is directly related to the body potential of the first stage NMOS transistor N1 prior to the first switch, and the pull-up delay tpu2 is related to the body voltage of the second stage PMOS transistor P2 prior to the first switch. The body potentials of the transistors N1 and P2 will eventually return to their DC equilibrium values. However, if before full recovery, the input falls to the LO state, the falling-edge delay through the first n stages is equal to tpu1+tpd2+tpu3+ . . .
+tpdn = n*(tpu1+tpd2)/2, where tpu1, the pull-up delay of the first stage, is directly related to the body potential of P1 prior to the second switch, and the second stage delay tpd2 is related to the body voltage of the NMOS transistor N2 prior to the second switch.

The body potential of the transistor N2 is coupled up from 0V to about 0.4V after the first switch. If the input pulse width (tf2-tr2) is much less than the time required for the body 20 to return to the DC high value, then, prior to the second switch, the body voltage of N2 is between about 0.4V and about 0.25V. Thus, tpd1>tpd2 and, for the same reason, tpu1<tpu2. In addition, tpd1+tpu2>tpu1+tpd2 (e.g., the falling-edge propagation is faster than the rising-edge propagation), and as a result, the input pulse width becomes compressed. Conversely, the input pulse width can stretch rather than compress if tpd1+tpu2<tpu1+tpd2.

Thus, the delay variation is determined by the relative magnitudes of the four fundamental delays tpd1, tpu2, tpu1 and tpd2, which are related to the transient body voltage. Stretching is possible by decreasing the coupling-down between the body 20 and the drain D (FIGS. 1A and 1B) and increasing the coupling-up between the body 20 and the gate G. In this regard, some relevant adjustment parameters include the ratio of the body-to-gate capacitance 34 to the body-to-source capacitance 36, the design threshold voltages, the properties of the body-junction diodes 30 and 32, the generation and/or recombination rate of the body 20, the Wp/Wn ratio between the PMOS and NMOS transistors 40 and 2, the pulse input switching frequency, duty cycle, slew rate (e.g., rise/fall time), the power supply voltage, the temperature, etc. Similar relationships are found for a HI/LO/HI input pulse case.

In the case of periodic pulses, after a first switch of a LO/HI/LO pulse, the body potential of the transistor N1 in the chain 50 (FIG.
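The pulse compression described above can be made concrete with the chain-delay sums themselves. In this sketch the four fundamental delays are illustrative values chosen only to satisfy tpd1+tpu2 > tpu1+tpd2; they are not measurements from this disclosure:

```python
# Illustrative values (seconds) for the four fundamental delays.
tpd1, tpu2 = 12e-12, 13e-12   # rising-edge pair: tpd1 + tpu2 = 25 ps
tpu1, tpd2 = 11e-12, 12e-12   # falling-edge pair: tpu1 + tpd2 = 23 ps
n = 100                       # stages traversed (n even)

rising_delay = n * (tpd1 + tpu2) / 2    # delay of the leading (rising) edge
falling_delay = n * (tpu1 + tpd2) / 2   # delay of the trailing (falling) edge

# The trailing edge propagates faster and catches up, so the pulse
# narrows by the difference in chain delays.
compression = rising_delay - falling_delay  # 100 ps over 100 stages
width_out = 2e-9 - compression              # a 2 ns input pulse shrinks
```

Swapping the two pairs of values (so that tpd1+tpu2 < tpu1+tpd2) makes `compression` negative, which is the stretching case described in the text.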
1D) is coupled down from the DC-HI level of 0.25V to an AC dynamic level of about -1.5V. The second input switch couples up the body 20 to a level that is slightly lower than 0.4V, e.g., the AC dynamic level coupled up from the DC-LO level of 0V, but still slightly higher than the DC-HI level of about 0.25V. If another pulse follows, the first switch of the second pulse couples down the body 20 to a level that is slightly higher than -1.5V, e.g., the AC dynamic level coupled down from the DC-HI level of 0.25V, but still lower than the DC-LO level of 0V. The second switch of the second pulse couples up the body 20 to a level that is slightly lower than the AC dynamic coupled-up level at the second switch of the first pulse, but still higher than the DC-HI level of 0.25V. If the circuit switches constantly, the body potential at 20 will reach a steady state value, whereat tpd1=tpd2 and tpu1=tpu2, regardless of the initial DC-HI or DC-LO state. Thus, at AC steady state, there is substantially no delay variation.

Referring also to FIG. 2, these floating body effects are illustrated in a graph 60 having curves 62 and 64 of propagation delay vs. time for pulse edges measured from rising edge to rising edge and from falling edge to falling edge, respectively, starting from the DC-HI state. In the above example, the propagation delay times are dependent upon the floating body potential, wherein tpd1>tpd2 and tpu1<tpu2 in the floating body chain 50. Thus, where an individual inverter device in the chain 50 is initially pulsed after being at a DC steady state for a relatively long period of time (e.g., t is small in the graph 60), the falling edge to falling edge curve 62 is higher than the rising edge to rising edge curve 64. As pulses continue to be applied to the chain, the floating body potential moves less and less (e.g., t increasing in the graph 60), with the curves 62 and 64 coming closer together.
Continuing on, an AC steady state condition is reached, wherein the curves 62 and 64 join at an AC steady state delay value 66, whereat tpd1=tpd2 and tpu1=tpu2, and accordingly, the propagation delay is essentially constant.

Thus, the delay caused by the body potential variation depends on the switching history. However, most circuits do not switch constantly, nor do they typically sit idle for long periods of time. Thus, manufacturing process control of this history effect is important to the design and manufacturing of floating body SOI devices. Absent some measure of control over these variations, the design margin required to protect against this uncertainty may erode some or all of the benefits provided by the SOI technology under nominal operation. Accordingly, it is desired to measure the floating body propagation delay of a particular wafer, in order to provide assurance of design veracity, as well as of manufacturing process stability.

Referring to FIG. 3, these effects have previously been measured using a chain 70 of floating body inverter devices I1, I2, I3, . . . , I2048 formed in an SOI wafer 72, with probe pads 74, 76, and 78 for connecting a pulse generator 80 and an oscilloscope 82 to the wafer 72. Additional VDD and GND pads are provided for connection of an external power source 84 to power the inverter devices and the buffers, wherein the pulse generator 80, the oscilloscope 82, and the power source 84 constitute a test system 78. The test system 78 is operated manually on a test bench (not shown), to monitor floating body transistor switching delays under various conditions. The pulse generator 80 is connected to the input pad 78 at the input of the first inverter device I1 and to the oscilloscope 82, and one or more inverter device outputs are monitored, such as the outputs of inverters I14 and I2014 in the illustrated example, by connection of high frequency oscilloscope probes to the pads 74 and 76, respectively.
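For reference, the quantity extracted manually from such a bench setup is simply the average per-stage delay between the two monitored taps; a hypothetical calculation (the arrival times below are invented for illustration, not measured values):

```python
# Hypothetical oscilloscope readings: pulse edge arrival times at the
# outputs of inverters I14 and I2014 (seconds).
t_at_I14 = 0.45e-9
t_at_I2014 = 24.45e-9

stages_between = 2014 - 14
avg_stage_delay = (t_at_I2014 - t_at_I14) / stages_between  # 12 ps/stage
```

Repeating this measurement at different supply voltages and with varying amounts of preconditioning traces out delay-vs.-time curves such as those of FIG. 2, which is what makes the manual procedure so time consuming.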
However, this form of testing is time consuming, requiring an operator to manually locate trace edges of interest using the oscilloscope, and is not suited to automation. Moreover, the system 78 and the prior floating body effect measurement techniques require the use of high-resolution equipment and high speed probe cards, wherein the test structure requires a relatively long chain and the testing throughput is relatively low. In addition, accurate edge placement control in a standard wafer tester is extremely difficult for the setup of FIG. 3.

Referring now to FIGS. 4-9, the present invention provides test systems and apparatus for measuring hysteretic propagation delay and for otherwise characterizing floating body effects in SOI wafers, which apparatus may be automated using commercially available test equipment. In this regard, the present invention may be implemented to facilitate testing of a short delay chain or other test circuitry, and may be carried out using commercially available test equipment and probe cards therefor. Moreover, the systems and apparatus of FIGS. 4-9, and other apparatus, may be employed in practicing one or more methods of the present invention, as described further below with respect to FIG. 11.

FIG. 4 illustrates one exemplary implementation of a test apparatus 100 formed in an SOI wafer 102, and a test system 104 for characterizing floating body effects in the wafer 102 in accordance with the invention. The apparatus 100 comprises a floating body chain 110 comprising an integer number n of floating body inverter devices FBI1-FBIn, which are connected in series with one another in the wafer 102, wherein the floating body inverter devices FBI individually comprise floating body MOS transistors fabricated in the wafer (e.g., FIGS. 1A-1D above).
The test apparatus 100 further comprises a reference delay chain, such as a tied body chain 112 comprising an integer number m of tied body inverter devices TBI1-TBIm fabricated in series with one another in the wafer 102, wherein the tied body inverter devices individually comprise tied body MOS transistors fabricated in the wafer 102.

A plurality of storage elements such as flip-flop type, single input, edge-triggered registers 116 (e.g., flip-flops FF1-FFm in this implementation) are formed in the wafer 102 and individually coupled with the tied body inverters TBI1-TBIm, wherein the inverter outputs are coupled to the flip-flop inputs. The flip-flops 116 operate to store the inverter output data from the tied body devices TBI1-TBIm according to one or more signals from the floating body chain 110 via a clock control 114, wherein the clock control 114 may comprise circuit elements (not shown), or may merely comprise electrical connections from one or more of the floating body inverter outputs to the clock inputs of the flip-flops 116. Thus, the flip-flops 116 are coupled with at least one of the floating body devices FBI1-FBIn, so as to store tied body chain data from the tied body devices TBI1-TBIm according to the floating body devices FBI1-FBIn.

First inverter devices TBI1 and FBI1 in the tied body chain 112 and the floating body chain 110, respectively, are coupleable with a tester 120 to receive a pulse edge or pulse train input from a pulse generator 122 in the tester 120. Stored tied body data may be obtained from the flip-flops 116 via a data bus 118 and a data interface 119 by a processor 124 in the tester 120, for use in characterizing floating body effects in the wafer 102 in accordance with the invention. The data transfer may be performed in any appropriate manner, either parallel or serial, by which the tied body data is made available to the processor 124 for computation of one or more floating body delay values.
The tester 120 also provides electrical power to the devices in the wafer 102 using a power source 126 coupled with VDD and GND pads on the wafer 102.

Referring also to FIG. 5, the exemplary test apparatus 100 is implemented as a scribe line monitor (SLM) formed in scribe line regions 140 of the wafer 102 between adjacent die areas 144 thereof, although the test apparatus 100 and other apparatus in accordance with the present invention may alternatively be fabricated anywhere on the wafer 102. The die areas 144 are generally rectangular regions within the die boundaries 148, wherein individual electrical components and circuits (not shown) are formed in fabricating integrated circuit devices. The scribe line regions 140 are defined between adjacent die areas 144, through which channels are subsequently saw-cut to separate the individual dies 144 from the wafer 102. The scribe line regions 140 commonly have a width 146 sufficient to accommodate the width of saw blades or other separation tools (not shown) and to provide appropriate tool alignment tolerance during subsequent die separation operations. The test apparatus 100 of the present invention may alternatively be formed in the die areas 144. However, it is noted that fabricating the apparatus 100 in the scribe line regions 140 facilitates improved device density and space utilization in the die areas 144, wherein the test apparatus 100 may be employed to characterize the SOI process during manufacturing, prior to die separation.

Another exemplary implementation of a test apparatus 100' is illustrated in FIG. 6, wherein the output of an ith inverter stage FBIi in the floating body chain 110 is buffered using a clock control buffer 114', the buffered signal then being provided to clock the flip-flops 116 of the tied body chain 112. This latches or stores the tied body device data states from the inverters TBI in the tied body chain 112 according to the output state of the floating body inverter FBIi.
In operation, a pulse edge is applied to an input pad 150, which then propagates through the tied body chain 112 and the floating body chain 110. As the pulse edge reaches the floating body inverter FBIi, the output states of the tied body devices TBI are stored by the flip-flops 116. This stored tied body data is then available to a test system processor or other device through the data bus 118 and the data interface 119. The propagation delay variance in the floating body devices FBI will not be seen in the tied body devices TBI, wherein the delay chain 110 with the floating body devices FBI is used as a vehicle to characterize hysteretic delays in the wafer 102, while the delay chain 112 with the tied body devices TBI is used as a reference. The tied body data from the latches 116 will thus indicate the number of tied body devices TBI to which the pulse edge has propagated, and from this information, the processor may determine a floating body delay value.

Another possible implementation 100'' is illustrated in FIG. 7, wherein the outputs of two floating body inverter devices FBI14 and FBI114 are provided through buffers 114'' to first and second sets 116odd and 116even of flip-flops FF. The first set 116odd includes the flip-flops FF1, FF3, . . . , FFm-1 (e.g., FFodd), which are individually coupled with odd numbered tied body inverter devices TBI1, TBI3, . . . , TBIm-1 (e.g., TBIodd, wherein m is an even integer), where the odd flip-flops FFodd are adapted to store tied body chain data from the odd numbered tied body inverters TBIodd according to a first control or clock signal 151 indicating the pulse edge has propagated to the floating body inverter FBI14. The second set 116even includes the even numbered flip-flops FF2, . . . , FFm (e.g., FFeven), which are individually coupled with even numbered tied body inverter devices TBI2, . . .
TBIm (e.g., TBIeven), where the even numbered flip-flops FFeven are adapted to store tied body chain data from the even numbered tied body inverters TBIeven according to a second control or clock signal 152 indicating the pulse edge has propagated to the floating body inverter FBI114.

The stored tied body data from the flip-flops 116 may be segmented into first and second tied body data corresponding to the data latched according to the first and second clock signals. Thus, the first tied body data is indicative of the number of tied body devices TBI to which the pulse edge has propagated at the time when the pulse edge propagates through the floating body chain 110 to the 14th inverter FBI14. Similarly, the second tied body data is indicative of the number of tied body devices TBI to which the pulse edge has propagated at the time when the pulse edge propagates through the floating body chain 110 to the 114th inverter FBI114. In operation, a single pulse edge may thus be applied to the pad 150, which then propagates through the tied body and floating body chains 112 and 110, respectively, with the (first) data from the odd numbered inverters being stored according to the control signal 151 when the pulse edge reaches the floating body inverter FBI14, and the (second) data from the even numbered inverters being stored according to the second control signal 152 when the pulse edge reaches the floating body inverter FBI114.

Thereafter, the first and second data is read out from the flip-flops 116odd and 116even, respectively, through the interface 119. These data values can be used to determine first and second delay values, spaced by the number of floating body chain devices between the inverters FBI14 and FBI114 (e.g., 100 in this example), by which a determination may be made (e.g., in a test system processor or by other appropriate means) of a floating body propagation value in accordance with the present invention.
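By way of illustration, the comparison of the two latched patterns can be sketched as follows. The helper names, the list representation of the latch data, and the 100 stage span between the two floating body taps are illustrative assumptions for the sketch, not features of the apparatus itself.

```python
def propagation_count(latched_bits):
    """Length of the leading run in a latched tied body pattern such as
    1111100...0, i.e. the number of tied body devices the pulse edge had
    passed when the latch clock fired (hypothetical helper)."""
    count = 0
    for bit in latched_bits:
        if bit != latched_bits[0]:
            break
        count += 1
    return count

def floating_body_delay_per_stage(first_data, second_data, tt_ps, stage_span=100):
    """Approximate floating body delay per stage from the first and second
    latched tied body data: delay ~ (k - j) * tt / stage_span, where j and
    k are the propagation counts at the two floating body taps."""
    j = propagation_count(first_data)
    k = propagation_count(second_data)
    return (k - j) * tt_ps / stage_span
```

With the numeric values assumed in the worked example below (tt = 40 ps, j = 8, k = 45), this yields 14.8 ps per floating body stage.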
It is noted in this regard, that the data is obtained without the need for expensive oscilloscopes, as was the case with prior test systems (e.g., FIG. 3 above), and that the provision of the pulse edge and the reading of the latch data may be done using low speed, inexpensive testers and probe cards therefor. Thus, the present invention provides significant advantages in testing large numbers of SOI wafers in a short period of time, whereby the invention finds particular utility in production testing applications.

Another example of the invention is illustrated as a test apparatus 200 of FIG. 8A. In this example, an edge select pad 202 and exclusive OR (XOR) gates 204 and 206 provide control over whether the first and second control or clock signals 151 and 152 are indicative of rising-to-rising edge delay or falling-to-falling edge delay. In addition, a ring oscillator select pad 210 and a NAND gate 212 provide the capability of selectively connecting the first and last inverters TBI1 and TBIm together, by which the tied body chain 112 can be turned into a ring oscillator by application of a ring oscillator control signal (RO) to the pad 210. Also illustrated in the apparatus 200 are a VDD power pad 220 and a GND pad 222 for application of power to the apparatus 200 in the wafer using an external test system power source. In other respects, the apparatus 200 operates in similar fashion to the test apparatus 100'' of FIG. 7.

In operation, the beginning and end of the reference tied body chain 112 are connected according to the control signal RO at the pad 210, by which the delay per tied body stage may be measured.
In this regard, it is noted that an odd number of inversions is provided in the ring (e.g., m is an even integer, such that an even number m of inverters are in the chain 112 and the NAND gate 212 adds a further inversion), whereby connection of the output of TBIm to the input of TBI1 through the inverting NAND gate 212 creates an oscillator in the chain 112. In this example, the ring will oscillate at a period of about m(tpd+tpu)/2, wherein the pull-down delay tpd is approximately equal to the pull-up delay tpu. The period of the oscillation in the chain 112 may be measured by a test system, for example, at the pulse input pad 150, and provided to a frequency divider to ascertain the period, and hence to determine the delay time of the devices TBI absent floating body effects. This information may then be correlated with the delay values obtained above (e.g., correlated with the stored tied body data from the flip-flops 116), to determine a floating body delay value associated with the devices FBI in the floating body chain 110. In alternative implementations (e.g., FIG. 9 below), a ring oscillator circuit may be included in the test apparatus on the wafer, wherein the ring oscillator circuit comprises a frequency divider receiving an output of one of the tied body devices in the tied body chain, as well as a buffer receiving a divided count of transitions on the output of the tied body device from the frequency divider.

In one example of the operation of the apparatus 200, a single LO-HI switch (a 0 to 1 pulse edge) is applied to the input pad 150 with the edge select signal at the pad 202 LO. Initially, the outputs of the odd numbered tied body inverters TBIodd are 1, and the even numbered inverters TBIeven are at 0. The rising edge (0 to 1) starts propagating through the floating body and tied body chains 110 and 112 at about the same time, with an offset due to different propagation paths.
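The delay per tied body stage implied by the stated oscillation period can be sketched numerically. The function name and its divider parameter are illustrative assumptions; the divide-by-N divider corresponds to the alternative implementations mentioned above, and with tpd ≈ tpu = tt the stated relation reduces to period ≈ m·tt.

```python
def tied_body_delay_per_stage(measured_period_ps, m_stages, divider=1):
    """Delay per tied body stage from the ring oscillator period.
    With tpd ~ tpu = tt, the relation period ~ m*(tpd + tpu)/2 reduces
    to period ~ m*tt. If the period is read behind a divide-by-N
    frequency divider, the ring period is the measured period over N."""
    ring_period_ps = measured_period_ps / divider
    return ring_period_ps / m_stages
```

For instance, a 100 stage ring oscillating with a 4000 ps period implies a 40 ps tied body stage delay, whether read directly or behind a divide-by-1024 divider.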
When the rising edge reaches the floating body device FBI14, a positive edge travels through the associated buffer 114'' and serves as a clock signal 151 to load the data of the TBIodd devices into the flip-flops 116odd. Because of possible setup and hold time violations, the pattern of the odd numbered flip-flops 116odd may be 11 . . . 1111x00 . . . 000, where the indeterminate bit x is at the jth stage of the tied body chain 112. The total delay is in the range of (j-1)tt to (j+1)tt, where tt is the delay per stage of the tied body chain 112, assuming the rising and falling delays are about the same by choosing a proper Wp/Wn ratio. In this case, the measurement accuracy is between 1.0 and 2.0 times tt. This delay time can be described by equation (1) below:

j·tt ≈ tf14 + tbuf1 + δ, (1)

where tf14 is the time for the rising edge to travel from the input to the 14th stage of the floating body chain 110, tbuf1 is the time from the 14th floating body gate FBI14 to the latches 116, and δ is the skew between the rising edges at the floating body and tied body chain inputs at FBI1 and TBI1, respectively.

Likewise, when the rising edge reaches the 114th FB stage at inverter FBI114, the pattern of the even numbered latches 116even would be 00 . . . 00x1111 . . . 11, where x is at the kth stage of the tied body chain 112. In this case:

k·tt ≈ tf114 + tbuf2 + δ. (2)

Assuming tbuf1 = tbuf2, the delay per stage of the floating body chain 110 may be approximated by the following equation (3):

delay ≈ (tf114 - tf14)/(114 - 14) ≈ ((k - j)·tt + (tbuf1 - tbuf2))/100 ≈ (k - j)·tt/100, (3)

wherein the accuracy is about 2tt to 4tt.
The above also holds true for the case of a single HI-LO pulse edge input switch at pad 150 where the edge select signal at pad 202 is 1.

To illustrate the accuracy potential of the apparatus 200, it is assumed in the above example that the input pad 150 is initially LO, and that a LO-HI transition pulse edge is provided at the pulse input pad 150, wherein the delay per stage is 15 ps for the floating body inverters FBI and 40 ps for the tied body inverters TBI. For δ = 0 and tbuf1 = tbuf2 = 120 ps, it will take 15 ps * 14 stages = 210 ps for the rising edge to reach the 14th stage at FBI14, plus 120 ps (e.g., tbuf1) to propagate to the load or clock pins of the odd numbered latches 116odd. Thus, after 210 ps + 120 ps = 330 ps, the rising edge arrives between the 8th and 9th stages of the TB chain 112 (330 ps/40 ps = 8.25 stages).

The data latched to the odd numbered flip-flops 116odd will accordingly comprise a pattern shown in a graph 250 of FIG. 8B. Also shown is the pattern of the data latched to the even numbered flip-flops 116even after 15 ps * 114 + 120 ps = 1830 ps, for a case in which the pulse edge has propagated to the 45th tied body device as it also propagates to the 114th floating body device FBI114. In this example, the 0 to 1 pattern changes at the 45th stage (1830 ps/40 ps = 45.75 stages). By comparing the two patterns (e.g., the first and second latched tied body data), it is seen in this example that it takes about (45 - 8) * 40 ps = 1480 ps for the rising edge to travel between the 14th and 114th floating body stages.
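The arithmetic of this worked example can be reproduced directly. The constants mirror the assumed 15 ps and 40 ps stage delays and 120 ps buffer delay, and the truncation to a whole stage reflects the fact that the latches capture only whole-stage patterns; the function names are illustrative.

```python
FB_PS = 15.0     # assumed floating body inverter delay per stage
TT_PS = 40.0     # assumed tied body inverter delay per stage
BUF_PS = 120.0   # assumed buffer delay from the FB tap to the latch clocks

def latched_stage(fb_tap):
    """Tied body stage at which the latched pattern change appears when
    the latch clock fires from the given floating body tap."""
    return int((FB_PS * fb_tap + BUF_PS) / TT_PS)

j = latched_stage(14)                        # 330 ps / 40 ps -> stage 8
k = latched_stage(114)                       # 1830 ps / 40 ps -> stage 45
measured_ps = (k - j) * TT_PS / (114 - 14)   # (45 - 8) * 40 / 100 = 14.8 ps
```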
Thus, the calculated floating body propagation delay value in terms of delay per stage is about 1480 ps/100 = 14.8 ps for the floating body inverters, where the error is about 1% to 2% for the measured 14.8 ps as compared to the assumed value of 15 ps.

Preconditioning pulses may also be applied before the measurement as described above. Thus, where a finite number of preconditioning pulses is applied to precondition the body potential, the delay tt of the tied body stages is essentially the same owing to the lack of floating body effects. However, tf14 and tf114 will be different because of the hysteretic floating body effects. Therefore, the numbers of stages j and k may be different than in the above example, depending on the number of the preconditioning pulses and the initial DC HI or LO state if the number of pulses is small. For example, suppose a number of preconditioning pulses have been applied, such that the delay per stage reduces to 12 ps for the floating body devices FBI but remains essentially the same for the tied body devices TBI. It is assumed for the sake of illustration that the 1 to 0 pattern change in the odd numbered latches 116odd takes place at the 6th stage of the tied body chain 112 when the odd data is stored, although the actual transition occurs at the 7th stage ((12 ps * 14 + 120 ps)/40 ps = 7.2 stages). In addition, the 0 to 1 pattern changes at the 37th stage of the chain 112 ((12 ps * 114 + 120 ps)/40 ps = 37.2 stages). For this example, then, the calculated delay per stage is (37 - 6) * 40 ps/100 = 12.4 ps. The error is about 3% to 4% for the measured 12.4 ps as compared to the actual 12 ps.

Decreasing the delay per stage of the tied body inverter devices TBI may be done to increase the resolution of these measurements. In one example, this can be accomplished by applying a higher supply voltage to the tied body chain 112.
Alternatively or in combination, the number of the floating body and tied body stages can be increased to improve the floating body delay value measurement for a given resolution. For instance, if the number of stages increases from 100 to 200, the error can be reduced by 50%. While this approach increases the area utilization of the SLM test apparatus, the employment of scribe line region space for the test apparatus (e.g., FIG. 5) may allow such increases in the number of stages without adversely impacting the die regions, where improved measurement accuracy is desired.

Referring now to FIG. 9, still another possible implementation of a test apparatus 300 in accordance with the present invention is illustrated. The test apparatus 300, like the apparatus 100, 100', 100'', and 200 above, may be fabricated in the scribe line region 140 of the wafer 102 (e.g., FIG. 5), wherein appropriate connection pads are provided for accessing the pulse input, control signal, and data output connections by a test system, as described below. In this apparatus 300, a ring oscillator circuit is also provided, comprising a 2-by-1 mux 212 to selectively provide a pulse edge or edges from a pulse input pad 150 through a 13 stage wave squaring circuit 302 to the tied body inverter chain 112, or to alternatively couple the last tied body inverter output TBIm to the first tied body inverter input TBI1 via tri-state inverter gates 304 and 306, respectively, according to a ring oscillator select signal at the pad 210.
Because input signals driving a large pad capacitance tend to have slower rising and falling times (slew rates), the wave squaring circuit 302 is employed in this example to restore the square waveform, and may comprise Schmitt triggers or cascaded inverters, although any appropriate circuitry may be employed. One or more signal restoring gates 308 are optionally provided upstream of the tied body inverter chain 112, and one of the signal restoring inverter outputs is selectively provided to a frequency divider 310 (e.g., divide by 1024 in this example) via a NAND gate 312 according to the signal at the ring oscillator select pad 210. For example, where separate power supplies are used during testing to power the floating body chains 110 and the tied body chain 112, a pulse input signal may be weak if the power supply voltage of the floating body chain is lower than that of the tied body chain. In this regard, the signal restoring gates 308 operate to restore the signal strength. A divided signal is then provided from the frequency divider 310 to a buffer 314, wherein the buffered frequency data is made available to a test system via a ring oscillator output interface 316.

The apparatus 300 also comprises two floating body chains 110' and 110'', comprising floating body inverter devices FBI1-FBI115 and floating body NAND gate devices FBN1-FBN48, respectively. A chain select signal is provided at a chain select pad 320 by an external tester (not shown) to a 1-by-2 demux 322 comprising 2-input NAND gates 322' and 322'' to selectively provide the pulse input edge or edges to one of the floating body chains 110' or 110'' via gates 322' and 322'', respectively. The chain select signal from pad 320 also provides a select signal to a 2-by-1 mux 326 comprising tri-state inverters 326' and 326'' to selectively provide the pulse output edge or edges from one of the floating body chains 110' and 110'' via gates 326' and 326'', respectively.
The selected pulse output edge or edges from the mux 326 are then applied to one input of a 2-input exclusive OR (XOR) gate 324 comprising tri-state inverters 324a, 324b, and 324c. An edge select signal from an edge select pad 202 is applied to the other input of the XOR gate 324 to select whether the clock signal from the floating body chain indicates rising-to-rising edge delay or falling-to-falling edge delay measurements. The XORed signal is further applied to a buffer 328 for providing a control or clock signal or signals to store the tied body data into differential logic, level sensitive latches 116.

As with the above test apparatus, the floating body inverter chain 110' is tapped at the 14th and 114th stages via a 2-by-1 mux 330 comprising tri-state inverters 330a and 330b, respectively, whereas the floating body NAND gate chain 110'' is tapped at the 8th and 48th stages via a 2-by-1 mux 332 comprising inverters 332a and 332b, respectively, in this exemplary implementation. A test signal may be applied to a stage select pad 334 to select which stage tap of the selected floating body chain will provide the control signal to the buffer 328, wherein the flip-flops 116 all store tied body chain data according to a single event from the buffer 328. For example, the stage select signal selects which stage is to be measured, wherein a "1" selects the 14th stage of the inverter chain 110' and the 8th stage of the NAND chain 110'', and a "0" selects the 114th stage of the inverter chain 110' and the 48th stage of the NAND chain 110''.
The stored tied body data is then provided to the data interface 119 via a data buffer 340.

The apparatus 300 may be operated to determine floating body effects in one or both of the chains 110' or 110'', wherein the period of the oscillation in the tied body chain 112 may be measured, either before or after obtaining data from the latches 116, using the ring oscillator circuit, whereby the period of tied body device oscillation is obtained by the tester via the frequency divider 310, the buffer 314, and the interface 316, with an appropriate select signal applied at the ring oscillator select pad 210. The ring oscillator is then deactivated using the pad 210, and a pulse edge may then be applied to the pad 150 and data stored in the latches 116.

Unlike the stored tied body data of the apparatus 200, the data in FIG. 9 comprises both odd and even data, wherein a pattern of alternating "1" and "0" values will be interrupted at the point to which the applied pulse edge has propagated when the latch control signal occurs. For example, the latched tied body data will comprise alternating "1" and "0" values, with one occurrence of either two consecutive "1" values or two consecutive "0" values, at the point to which the pulse edge has propagated. As in FIG. 8A above, this information may then be correlated with the tied body delay obtained from the ring oscillator measurement, to determine a floating body delay value associated with the floating body devices in one or both of the floating body chains 110' and/or 110''.

For the sake of discussion, it is assumed that the floating body inverter chain 110' is selected via the chain select pad 320. Also, a signal is provided to the edge select pad 202 to select whether the latch control signal from the floating body chain 110' indicates rising-to-rising edge delay or falling-to-falling edge delay.
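The interrupted alternating pattern described above can be searched for its single pair of equal neighbours, which marks the propagation point. This small helper is an illustrative sketch of the test system's data evaluation, not circuitry in the apparatus.

```python
def edge_stage(latched_bits):
    """Return the 1-based tied body stage at which an alternating 1/0
    pattern is interrupted by two consecutive equal values (the point
    to which the pulse edge had propagated), or None if the pattern
    alternates throughout."""
    for i in range(len(latched_bits) - 1):
        if latched_bits[i] == latched_bits[i + 1]:
            return i + 1
    return None
```

For example, `edge_stage([0, 1, 0, 0, 1, 0, 1])` reports the consecutive "0" pair at the third stage.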
In addition, one of the stage taps (14th or 114th) is selected using the stage select pad 334, wherein the external tester applies appropriate control signals to the various selection pads discussed herein. For example, the 14th stage of the floating body inverter chain 110' is initially selected, and a pulse edge is applied to the pulse input pad 150 (e.g., by a pulse generator in the tester, not shown).

As the pulse edge propagates through the tied body chain 112 and the selected floating body inverter chain 110', the mux 330, the mux 326, the XOR circuit 324, and the buffer 328 provide a latch signal to the latches 116 when the pulse edge reaches the output of the 14th floating body inverter FBI14, causing the tied body chain data to be stored. This data may then be clocked out of the latches 116 and into the buffer 340 for later or contemporaneous retrieval by the tester.

The stage select signal at the pad 334 is then switched, to now employ the tap at the 114th stage of the floating body inverter chain 110' as the control signal. Another pulse edge is then provided to the input pad 150, and the process is repeated. However, in this case, the control signal is provided through the tri-state inverter 330b to the mux 326, the XOR circuit 324, and the buffer 328, whereupon the tied body data is stored when the pulse edge reaches the output of the 114th floating body inverter FBI114. This stored data is then provided to the buffer 340.
These data sets may be evaluated and compared to determine a number of tied body inverter delays corresponding to 100 floating body inverter delays, which is then correlated to the tied body inverter delay obtained from the ring oscillator operation, in order to ascertain an estimate of the floating body delay value of interest.

It will be appreciated that any number of such single-pass or dual-pass tests may be performed with varying numbers or types of preconditioning pulses, so as to generate a curve, such as those illustrated above in FIG. 2. Referring to FIGS. 10A-10D, in any of the above tests, the edge select pad 202 and the exclusive OR circuit 324 allow data to be obtained for both rising-to-rising edge delays and falling-to-falling edge delays. For example, where the storage devices 116 are negative level sensitive latches, the tied body data prior to the falling edge of the last clock pulse are latched into the latches 116. FIG. 10A illustrates the waveforms 350 for a rising edge to rising edge measurement, where the floating body chain is initially held at a LO DC steady state (e.g., 0 V) prior to application of a rising pulse edge with or without preconditioning pulses. FIG. 10B illustrates a diagram 352 showing the waveforms for a falling edge to falling edge measurement, where the floating body chain is initially held at a LO DC steady state prior to application of a falling pulse edge with or without preconditioning pulses. FIG. 10C illustrates the waveform in a diagram 354 for a falling edge to falling edge measurement, where the floating body chain is initially held at a HI DC steady state (e.g., 1.5 V) prior to application of a falling pulse edge with or without preconditioning pulses, and FIG. 10D provides a diagram 356 illustrating the waveforms for a rising edge to rising edge measurement, where the floating body chain is initially held at a HI DC steady state prior to application of a rising pulse edge with or without preconditioning pulses.
The plot 60 of FIG. 2 illustrates a bench measurement result for a long delay chain with a fan-out of 1 and the initial state at HI, using a 250 MHz frequency and 50% duty cycle, i.e., a 40 ns period and 20 ns pulse width. The upper curve 62 in this example shows a first switch delay (falling edge to falling edge delay) and the lower curve 64 shows a second switch delay (rising edge to rising edge delay). To get a data point for the falling edge to falling edge delay at time = 0, the waveform of FIG. 10C (40 ns period and 20 ns pulse width) can be applied without preconditioning pulses and by selecting the inverter floating body chain and the 14th stage and setting EDGE = 0. The process repeats for the 114th stage, and a propagation delay can be obtained by comparing the two data patterns. Similarly, to get a data point for the rising edge to rising edge delay at time = 1 us, the waveform of FIG. 10D can be applied with 25 preconditioning pulses (1 us/40 ns = 25). From these measurements, therefore, the propriety and stability of an SOI design and/or an SOI manufacturing process may be measured in accordance with the invention. Furthermore, it is noted that the test system required to operate the apparatus 300 and the other implementations of the invention may be easily automated, whereby expeditious testing of a large number of wafers may be performed in a relatively short period of time. Thus, the invention is particularly advantageous in testing production wafers in a manufacturing facility prior to die separation.

Alternatively or in combination, further tests may be performed in the apparatus 300, wherein the NAND floating body device chain 110'' is selected via the chain select pad 320. For example, the first stage tap at the 8th NAND device FBN8 may initially be selected via a signal applied to the stage select pad 334, and a pulse edge is applied to the pulse input pad 150 (e.g., with or without preconditioning pulses).
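The mapping from a point on the delay-vs-time curve to a preconditioning pulse count follows directly from the waveform period, as in the 1 us/40 ns = 25 example above. A minimal sketch, assuming the 40 ns period of this example; the function name is illustrative.

```python
def preconditioning_pulse_count(elapsed_ns, period_ns=40.0):
    """Number of whole preconditioning pulses corresponding to a given
    elapsed time on the delay-vs-time curve (e.g., 1 us / 40 ns = 25)."""
    return int(elapsed_ns / period_ns)
```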
As the pulse edge propagates through the tied body chain 112 and the floating body NAND gate chain 110'', the gate 332a, the mux 326, the XOR circuit 324, and the buffer 328 provide a control signal to the latches 116 when the pulse edge reaches the output of the 8th floating body NAND gate FBN8, causing the tied body chain data to be stored. This data is then provided from the latches 116 to the buffer 340.

Thereafter, the stage select signal at pad 334 is again switched, to select the tap at the 48th stage of the floating body NAND gate chain 110'' for latch signal generation. Another pulse edge is then provided to the input pad 150, and the process is repeated, wherein a latch signal is provided through the tri-state inverter 332b to the mux 326, the XOR circuit 324, and the buffer 328, whereupon the tied body data is latched when the pulse edge reaches the output of the 48th NAND gate FBN48. This latched data is then provided to the buffer 340. These data sets may be evaluated and compared to determine a number of tied body inverter delays corresponding to 40 floating body NAND gate delays, which is then correlated to the tied body inverter delay obtained from the ring oscillator operation, in order to ascertain an estimate of the floating body delay value of interest. Further passes may be performed using one or more preconditioning pulses, whereby a curve of delay vs. time may be plotted for the floating body NAND gate delays (e.g., FIG. 2).

It will be appreciated that many forms of test apparatus fall within the scope of the present invention and the appended claims, including and in addition to those specifically illustrated and described herein. For example, the exemplary test apparatus 300 of FIG. 9 may be modified within the scope of the invention to provide two sets of storage devices 116 (e.g., such as in FIGS. 7 and 8A), wherein one set operates to store odd numbered tied body device data, and a second set stores even numbered tied body device data.
Moreover, any appropriate type of storage device 116 may be employed, including but not limited to edge-sensitive latch circuits, differential latch devices, registers, flip-flops, and others, within the scope of the present invention.

Referring now to FIG. 11, another aspect of the invention provides methods for characterizing floating body delay effects in an SOI wafer, wherein one exemplary method 400 is illustrated and described below. Although the method 400 is illustrated and described hereinafter as a series of acts or events, it will be appreciated that the present invention is not limited by the illustrated ordering of such acts or events. For example, some acts may occur in different orders and/or concurrently with other acts or events apart from those illustrated and/or described herein, in accordance with the invention. In addition, not all illustrated steps may be required to implement a methodology in accordance with the present invention. Furthermore, the methods according to the present invention may be implemented in association with the apparatus and systems illustrated and described herein, as well as in association with other apparatus or systems not illustrated.

Beginning at 402, the method 400 comprises providing a floating body inverter or NAND gate chain in an SOI wafer at 404, and optionally providing one or more preconditioning pulses to the floating and tied body chains at 406. At 408, a pulse edge is provided to the floating body chain and to the tied body chain, and first tied body chain data is latched at 410 according to a first device in the floating body chain.
A second set of tied body data is then optionally latched at 412, according to a second floating body device, and a floating body delay value is determined at 414 to characterize floating body delay effects in the SOI wafer according to the latched tied body data, before the method 400 ends at 416.

In one example, storing the first tied body chain data at 410 comprises storing data outputs from the tied body device chain when the pulse edge propagates through the floating body chain to the first floating body device. Storing the second tied body chain data at 412 may comprise storing data states from the tied body devices when the pulse edge propagates through the floating body chain to the second floating body device. As in the above discussion of FIGS. 7-9, for example, the first and second tied body chain data may be latched during propagation of a single pulse edge, or a second pulse edge may be applied for obtaining the second tied body data. In this regard, the methods of the invention comprise single pass and multi-pass testing. Furthermore, storing the first tied body chain data may comprise storing data states from odd numbered tied body devices in the tied body chain, and storing the second tied body chain data may comprise storing data states from even numbered tied body devices in the tied body chain, for example, as illustrated and described above with respect to FIGS. 7 and 8A. Moreover, the first data may alternatively be latched from even numbered tied body devices, with the second data being latched from odd numbered tied body devices. In another alternative, all tied body device data (e.g., odd and even) may be latched at two different times (e.g., according to latch signals from first and second floating body chain devices) in either a single pass test or in a dual pass test.
In addition, it is noted that the testing may be carried out with the pulse edge being applied following stabilization of the floating body chain and the tied body chain at a DC state, or alternatively, after provision of one or more preconditioning pulses to the floating body chain and to the tied body chain.

The determination of the floating body delay value at 414 may comprise determining a first value representing a number of tied body devices in the tied body chain to which the pulse edge has propagated in the first tied body chain data, determining a second value representing a number of tied body devices in the tied body chain to which the pulse edge has propagated in the second tied body chain data, and determining the floating body delay value according to the first and second values. In one implementation, moreover, a tied body device propagation delay value may be obtained for use in characterizing the floating body effects at 414. Thus, for instance, the method 400 may comprise coupling first and last tied body devices in the tied body chain to form a tied body chain ring oscillator, measuring a tied body device propagation delay value using the tied body chain ring oscillator, and decoupling the first and last tied body devices from one another in the tied body chain. In this example, the determination of the floating body delay value at 414 comprises determining the floating body delay value according to the first and second values and according to the tied body device propagation delay value.

Although the invention has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings.
In particular regard to the various functions performed by the above described components (assemblies, devices, circuits, etc.), the terms (including a reference to a "means") used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the invention. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "includes", "having", "has", "with", or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
A folded bit line DRAM device is provided. The folded bit line DRAM device includes an array of memory cells. Each memory cell in the array of memory cells includes a pillar extending outwardly from a semiconductor substrate. Each pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer. A single crystalline vertical transistor is formed along alternating sides of the pillar within a row of pillars. The single crystalline vertical transistor includes an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer, an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer, and an ultra thin single crystalline vertical body region which opposes the oxide layer and couples the first and the second source/drain regions. A plurality of buried bit lines are formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells. Further, a plurality of word lines are included. Each word line is disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing alternating body regions of the single crystalline vertical transistors that are adjacent to the trench.
WHAT IS CLAIMED IS: 1. A folded bit line DRAM device, comprising: an array of memory cells formed in rows and columns, wherein each memory cell in the array of memory cells includes: a pillar extending outwardly from a semiconductor substrate, wherein the pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; a single crystalline vertical transistor formed along alternating sides of the pillar within a row of pillars, wherein the single crystalline vertical transistor includes: an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; and an ultra thin single crystalline vertical body region which opposes the oxide layer and couples the first and the second source/drain regions; a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells; and a plurality of word lines, each word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing alternating body regions of the single crystalline vertical transistors that are adjacent to the trench. 2. The folded bit line DRAM device of claim 1, wherein the ultra thin single crystalline vertical body region includes a channel having a vertical length of less than 100 nanometers. 3. The folded bit line DRAM device of claim 1, wherein the ultra thin single crystalline vertical body region has a horizontal width of less than 10 nanometers. 4. The folded bit line DRAM device of claim 1, wherein the ultra thin single crystalline vertical body region is formed from solid phase epitaxial growth. 5. 
A folded bit line DRAM device, comprising: an array of memory cells, wherein each memory cell in the array of memory cells includes: a pillar extending outwardly from a semiconductor substrate, wherein the pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; a single crystalline vertical transistor formed along side of the pillar, wherein the single crystalline vertical transistor includes: an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; an ultra thin single crystalline vertical body region formed along alternating sides of the pillar within a row of pillars and coupling the first and the second source/drain regions; and a gate opposing the vertical body region and separated therefrom by a gate oxide; a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells; and a plurality of word lines, each word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing gates of the single crystalline vertical transistors that are adjacent to the trench in alternating pillars along a row of pillars. 6. The folded bit line DRAM device of claim 5, wherein the plurality of buried bit lines are more heavily doped than the first contact layer and are formed integrally with the first contact layer. 7. The folded bit line DRAM device of claim 5, wherein the ultra thin single crystalline vertical body region includes a p-type channel having a vertical length of less than 100 nanometers. 8. 
The folded bit line DRAM device of claim 7, wherein the ultra thin single crystalline vertical body region has a horizontal width of less than 10 nanometers. 9. The folded bit line DRAM device of claim 5, wherein the pillar extends outwardly from an insulating portion of the semiconductor substrate. 10. The folded bit line DRAM device of claim 5, wherein the semiconductor substrate includes a silicon on insulator substrate. 11. The folded bit line DRAM device of claim 5, wherein the gate includes a horizontally oriented gate, wherein a vertical side of the horizontally oriented gate has a length of less than 100 nanometers. 12. The folded bit line DRAM device of claim 5, wherein the gate includes a vertically oriented gate having a vertical length of less than 100 nanometers. 13. A folded bit line DRAM device, comprising: an array of memory cells formed in rows and columns, wherein each memory cell in the array of memory cells includes: a pillar extending outwardly from a semiconductor substrate, wherein the pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; a single crystalline vertical transistor formed along alternating sides of the pillar within a row of pillars, wherein the single crystalline vertical transistor includes: an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; an ultra thin single crystalline vertical body region which opposes the oxide layer and couples the first and the second source/drain regions; and wherein a surface space charge region for the single crystalline vertical transistor scales down as other dimensions of the transistor scale down; a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the 
pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells; and a plurality of word lines, each word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing alternating body regions of the single crystalline vertical transistors that are adjacent to the trench. 14. A folded bit line DRAM device, comprising: an array of memory cells formed in rows and columns, wherein each memory cell in the array of memory cells includes: a pillar extending outwardly from a semiconductor substrate, wherein the pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; a single crystalline vertical transistor formed along alternating sides of the pillar within a row of pillars, wherein the single crystalline vertical transistor includes: an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; and an ultra thin single crystalline vertical body region which opposes the oxide layer and couples the first and the second source/drain regions; and wherein a horizontal junction depth for the first and the second ultra thin single crystalline vertical source/drain regions is much less than a vertical length of the ultra thin single crystalline vertical body region; a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells; and a plurality of word lines, each word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing body regions of the single crystalline 
vertical transistors that are adjacent to the trench. 15. The folded bit line DRAM device of claim 14, wherein the ultra thin single crystalline vertical body region includes a p-type channel having a vertical length of less than 100 nanometers. 16. A semiconductor device, comprising: an array of pillars formed in rows and columns extending outwardly from a semiconductor substrate, wherein each pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; a pair of single crystalline vertical transistors formed along opposing sides of each pillar, wherein each single crystalline vertical transistor includes: an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; an ultra thin single crystalline vertical body region formed along side of the oxide in each pillar and which couples the first and the second source/drain regions formed along side of the pillar; a number of buried bit lines formed of single crystalline semiconductor material and disposed below the single crystalline vertical body regions, wherein the number of buried bit lines couple to the first contact layer along columns of pillars; a number of wordlines, wherein each wordline is disposed in a trench formed between rows of pillars and below a top surface of the pillars, and wherein each wordline independently addresses body regions for the pair of single crystalline vertical transistors in alternating pillars along a row of pillars; and a number of capacitors which independently couple to the second contact layer in each pillar. 17. 
The semiconductor device of claim 16, wherein each wordline integrally forms a gate for addressing the body region in a pillar on a first side of the trench and is isolated from the body region in a column adjacent pillar on a second side of the trench. 18. The semiconductor device of claim 16, wherein each wordline integrally forms a gate for addressing the body region in a pillar on the first side of the trench and is isolated from the body region in a row adjacent pillar on the first side of the trench. 19. The semiconductor device of claim 16, wherein each ultra thin single crystalline vertical body region includes a p-type channel having a vertical length of less than 100 nanometers. 20. The semiconductor device of claim 16, wherein the number of buried bit lines are formed integrally with the first contact layer and are separated from the semiconductor substrate by an oxide layer. 21. The semiconductor device of claim 16, wherein each wordline includes a horizontally oriented wordline having a vertical side length of less than 100 nanometers. 22. The semiconductor device of claim 16, wherein each wordline includes a vertically oriented wordline having a vertical length of less than 100 nanometers. 23. 
A semiconductor device, comprising: a folded bit line array of memory cells, wherein each memory cell in the array of memory cells includes: a pillar extending outwardly from a semiconductor substrate, wherein the pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; a single crystalline vertical transistor formed along alternating sides of the pillar within a row of pillars, wherein the single crystalline vertical transistor includes: an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; and an ultra thin single crystalline vertical body region formed along side of the oxide layer and which couples the first and the second source/drain regions; and a gate opposing the vertical body region and separated therefrom by a gate oxide; a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells; and a plurality of word lines, each word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing gates of the single crystalline vertical transistors that are adjacent to the trench in alternating pillars along a row of pillars. 24. The semiconductor device of claim 23, wherein each single crystalline vertical body region includes a p-type channel having a vertical length of less than 100 nanometers. 25. The semiconductor device of claim 23, wherein each of the plurality of buried bit lines is separated by an oxide layer from the semiconductor substrate. 26. 
The semiconductor device of claim 23, wherein each gate in a trench along a row of pillars is integrally formed with one of the plurality of word lines in the adjacent trench, and wherein each of the plurality of word lines includes a horizontally oriented word line having a vertical side of less than 100 nanometers opposing the single crystalline vertical body regions. 27. The semiconductor device of claim 23, wherein each gate in a trench along a row of pillars is integrally formed with one of the plurality of word lines in the adjacent trench, and wherein each of the plurality of word lines includes a vertically oriented word line having a vertical length of less than 100 nanometers. 28. A memory device, comprising: an array of memory cells, wherein each memory cell in the array of memory cells includes: a pillar extending outwardly from a semiconductor substrate, wherein the pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; a single crystalline vertical transistor, wherein the single crystalline vertical transistor includes: an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; an ultra thin single crystalline vertical body region formed along alternating sides of the pillar within a row of pillars and coupling the first and the second source/drain regions; and a gate opposing the vertical body region and separated therefrom by a gate oxide; a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells; a plurality of first word lines, each first word line disposed orthogonally to the plurality of buried bit lines in a 
trench between rows of the pillars for addressing gates of the single crystalline vertical transistors that are adjacent to a first side of the trench in alternating pillars along the first side of the trench; and a plurality of second word lines, each second word line disposed orthogonally to the bit lines in the trench between rows of the pillars and separated from each first word line by an insulator such that the second wordline is adjacent a second side of the trench and addresses gates of the single crystalline vertical transistors that are adjacent to a second side of the trench in alternating pillars along a second side of the trench. 29. The memory device of claim 28, wherein each gate adjacent to a first side of the trench along a row of pillars is integrally formed with one of the plurality of first word lines in the adjacent trench, and wherein each of the plurality of first word lines includes a vertically oriented word line having a vertical length of less than 100 nanometers. 30. The memory device of claim 28, wherein each pillar includes a capacitor coupled to the second contact layer. 31. The memory device of claim 28, wherein each single crystalline vertical body region has a vertical length of less than 100 nanometers. 32. The memory device of claim 28, wherein each single crystalline vertical transistor has a vertical length of less than 100 nanometers and a horizontal width of less than 10 nanometers. 33. 
A memory device, comprising: a folded bit line array of memory cells, wherein each memory cell in the array of memory cells includes: a pillar extending outwardly from a semiconductor substrate, wherein the pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; a pair of single crystalline vertical transistors formed along opposing sides of each pillar, wherein each single crystalline vertical transistor includes: an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; and an ultra thin single crystalline vertical body region which opposes the oxide layer and couples the first and the second source/drain regions; a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells; a plurality of first word lines, each first word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing body regions of the single crystalline vertical transistors in alternating row adjacent pillars that are adjacent to a first side of the trench; and a plurality of second word lines, each second word line disposed orthogonally to the bit lines in the trench between rows of the pillars and separated from each first word line by an insulator such that the second wordline is adjacent a second side of the trench and addresses body regions of the single crystalline vertical transistors in alternating row adjacent pillars that are adjacent to a second side of the trench. 34. 
The memory device of claim 33, wherein each of the plurality of first wordlines integrally forms a gate for addressing the body region in a pillar on a first side of the trench and is isolated by an insulator layer from the body region in a row adjacent pillar on the first side of the trench. 35. The memory device of claim 33, wherein each of the plurality of second wordlines integrally forms a gate for addressing the body region in a pillar on a second side of the trench and is isolated by an insulator layer from the body region in a row adjacent pillar on the second side of the trench. 36. The memory device of claim 33, wherein each of the plurality of first and second word lines includes a vertically oriented word line having a vertical length of less than 100 nanometers. 37. The memory device of claim 33, wherein each single crystalline vertical transistor has a vertical length of less than 100 nanometers and a horizontal width of less than 10 nanometers. 38. 
An electronic system, comprising: a processor; and a folded bit line DRAM device coupled to the processor, wherein the folded bit line DRAM device includes: an array of memory cells formed in rows and columns, wherein each memory cell in the array of memory cells includes: a pillar extending outwardly from a semiconductor substrate, wherein the pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; a single crystalline vertical transistor formed along alternating sides of the pillar within a row of pillars, wherein the single crystalline vertical transistor includes: an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; an ultra thin single crystalline vertical body region which opposes the oxide layer and couples the first and the second source/drain regions; and wherein a surface space charge region for the single crystalline vertical transistor scales down as other dimensions of the transistor scale down; a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells; and a plurality of word lines, each word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing alternating body regions of the single crystalline vertical transistors that are adjacent to the trench. 39. 
A method for forming a folded bit line DRAM device, comprising: forming an array of memory cells formed in rows and columns, wherein forming each memory cell includes: forming a pillar extending outwardly from a semiconductor substrate, wherein forming the pillar includes forming a single crystalline first contact layer of a first conductivity type and forming a single crystalline second contact layer of the first conductivity type vertically separated by an oxide layer; forming a single crystalline vertical transistor along alternating sides of the pillar within a row of pillars, and wherein forming the single crystalline vertical transistor includes: depositing a lightly doped polysilicon layer of a second conductivity type over the pillar and directionally etching the polysilicon layer of the second conductivity type to leave it only on sidewalls of the pillars; annealing the pillar such that the lightly doped polysilicon layer of the second conductivity type recrystallizes and lateral epitaxial solid phase regrowth occurs vertically to form a single crystalline vertically oriented material of the second conductivity type; and wherein the annealing causes the single crystalline first and second contact layers of a first conductivity type to seed a growth of single crystalline material of the first conductivity type into the lightly doped polysilicon layer of the second type to form vertically oriented first and second source/drain regions of the first conductivity type separated by the now single crystalline vertically oriented material of the second conductivity type; forming a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells, wherein forming the plurality of buried bit lines includes coupling the first contact layer of column adjacent pillars in the array of memory cells; and forming a plurality of word lines, wherein forming the plurality of word 
lines includes forming each word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing alternating body regions of the single crystalline vertical transistors that are adjacent to the trench. 40. The method of claim 39, wherein forming a single crystalline vertical transistor along alternating sides of the pillar within a row of pillars includes forming the transistor such that the transistor has an ultra thin single crystalline vertical body region having a horizontal width of less than 10 nanometers. 41. The method of claim 39, wherein forming a single crystalline vertical transistor along alternating sides of the pillar within a row of pillars includes forming the transistor such that the transistor has a vertical channel length of less than 100 nanometers and has first and second source/drain regions wherein the first and the second source/drain regions have a horizontal width of less than 10 nanometers. 42. 
A method for forming a folded bit line DRAM device, comprising: forming an array of memory cells formed in rows and columns, wherein forming each memory cell in the array of memory cells includes: forming a pillar extending outwardly from a semiconductor substrate, wherein the pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer; forming a single crystalline vertical transistor along alternating sides of the pillar within a row of pillars, wherein forming the single crystalline vertical transistor includes: forming an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer; forming an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer; forming an ultra thin single crystalline vertical body region which opposes the oxide layer and couples the first and the second source/drain regions; and wherein forming the single crystalline vertical transistor includes forming the transistor such that a surface space charge region for the single crystalline vertical transistor scales down as other dimensions of the transistor scale down; forming a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells, wherein forming the plurality of buried bit lines includes coupling the first contact layer of column adjacent pillars in the array of memory cells; and forming a plurality of word lines, wherein forming the plurality of word lines includes forming each word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing alternating body regions of the single crystalline vertical transistors that are adjacent to the trench. 43. 
The method of claim 42, wherein forming the plurality of buried bit lines includes forming a plurality of buried bit lines which are more heavily doped than the first contact layer and are formed integrally with the first contact layer. 44. The method of claim 42, wherein forming a single crystalline vertical transistor along alternating sides of the pillar within a row of pillars includes forming the transistor such that the transistor has the ultra thin single crystalline vertical body region with a p-type channel having a vertical length of less than 100 nanometers. 45. The method of claim 44, wherein forming the transistor such that the transistor has the ultra thin single crystalline vertical body region includes forming the ultra thin single crystalline vertical body region to have a horizontal width of less than 10 nanometers. 46. The method of claim 42, wherein forming a plurality of buried bit lines of single crystalline semiconductor material below the pillar includes forming a plurality of buried bit lines which are separated from the semiconductor substrate by an insulator layer. 47. The method of claim 42, wherein forming the plurality of wordlines includes integrally forming a horizontally oriented gate for addressing alternating body regions of the single crystalline vertical transistors that are adjacent to the trench, wherein a vertical side of the horizontally oriented gate has a length of less than 100 nanometers. 48. The method of claim 42, wherein forming the plurality of wordlines includes integrally forming a vertically oriented gate for addressing alternating body regions of the single crystalline vertical transistors in a row of pillars that are adjacent to the trench, wherein the integrally formed vertically oriented gate has a vertical length of less than 100 nanometers. 49. 
A method for forming a memory array, comprising: forming a folded bit line array of memory cells, wherein forming each memory cell in the array of memory cells includes: forming a pillar extending outwardly from a semiconductor substrate, wherein forming the pillar includes forming a single crystalline first contact layer of a first conductivity type and forming a single crystalline second contact layer of the first conductivity type vertically separated by an oxide layer; forming a single crystalline vertical transistor along alternating sides of the pillar within a row of pillars, wherein forming the single crystalline vertical transistor includes: depositing a lightly doped polysilicon layer of a second conductivity type over the pillar and directionally etching the polysilicon layer of the second conductivity type to leave it only on sidewalls of the pillars; annealing the pillar such that the lightly doped polysilicon layer of the second conductivity type recrystallizes and lateral epitaxial solid phase regrowth occurs vertically to form a single crystalline vertically oriented material of the second conductivity type; and wherein the annealing causes the single crystalline first and second contact layers of a first conductivity type to seed a growth of single crystalline material of the first conductivity type into the lightly doped polysilicon layer of the second type to form vertically oriented first and second source/drain regions of the first conductivity type separated by the now single crystalline vertically oriented material of the second conductivity type; and forming a gate opposing the single crystalline vertically oriented material of the second conductivity type and separated therefrom by a gate oxide; forming a plurality of buried bit lines of single crystalline semiconductor material and disposed below the pillars in the array of memory cells such that each one of the plurality of buried bit lines couples the first contact 
layer of column adjacent pillars in the array of memory cells; and forming a plurality of word lines disposed orthogonally to the plurality of buried bit lines, wherein forming the plurality of word lines includes forming each one of the plurality of wordlines in a trench between rows of the pillars for addressing gates of the single crystalline vertical transistors that are adjacent to the trench. 50. The method of claim 49, wherein forming each single crystalline vertical transistor includes forming an ultra thin body region with a p-type channel having a vertical length of less than 100 nanometers and a horizontal width of less than 10 nanometers. 51. The method of claim 49, wherein forming the plurality of buried bit lines includes forming the plurality of buried bit lines separated by an oxide layer from the semiconductor substrate. 52. The method of claim 49, wherein forming the plurality of wordlines includes integrally forming each gate, present in the adjacent trench along alternating pillars along a row of pillars, with one of the plurality of word lines, and wherein forming each of the plurality of word lines includes forming a horizontally oriented word line having a vertical side of less than 100 nanometers opposing the single crystalline vertical transistor. 53. The method of claim 49, wherein forming the plurality of wordlines includes integrally forming each gate, present in the adjacent trench along alternating pillars along a row of pillars, with one of the plurality of word lines, and wherein forming each of the plurality of word lines includes forming a vertically oriented word line having a vertical length of less than 100 nanometers. 54.
A method of forming a memory device, comprising: forming an array of memory cells, wherein forming each memory cell in the array of memory cells includes: forming a pillar extending outwardly from a semiconductor substrate, wherein forming the pillar includes forming a single crystalline first contact layer of a first conductivity type and forming a single crystalline second contact layer of the first conductivity type vertically separated by an oxide layer; forming a pair of single crystalline vertical transistors along opposing sides of the pillar, wherein forming each one of the pair of single crystalline vertical transistors includes: depositing a lightly doped polysilicon layer of a second conductivity type over the pillar and directionally etching the polysilicon layer of the second conductivity type to leave only on opposing sidewalls of the pillars; annealing the pillar such that the lightly doped polysilicon layer of the second conductivity type recrystallizes and lateral epitaxial solid phase regrowth occurs vertically to form a single crystalline vertically oriented material of the second conductivity type; and wherein the annealing causes the single crystalline first and second contact layers of the first conductivity type to seed a growth of single crystalline material of the first conductivity type into the lightly doped polysilicon layer of the second type to form vertically oriented first and second source/drain regions of the first conductivity type separated by the now single crystalline vertically oriented material of the second conductivity type; and forming a pair of gates, each gate opposing the single crystalline vertically oriented material of the second conductivity type and separated therefrom by a gate oxide; forming a plurality of buried bit lines of single crystalline semiconductor material and disposed below the pillars in the array of memory cells such that each one of the plurality of buried bit lines couples the
first contact layer of column adjacent pillars in the array of memory cells; and forming a plurality of first word lines disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing gates of the single crystalline vertical transistors that are adjacent to a first side of the trench in alternating pillars along the first side of the trench; and forming a plurality of second word lines disposed orthogonally to the bit lines in the trench between rows of the pillars and separated from each first word line by an insulator such that the second wordline is adjacent a second side of the trench and addresses gates of the single crystalline vertical transistors that are adjacent to a second side of the trench in alternating pillars along the second side of the trench. 55. The method of claim 54, wherein forming the plurality of first and second wordlines includes integrally forming each gate in alternating pillars along a row of pillars adjacent the first and the second side of the trench respectively with one of the plurality of the first and the second word lines, and isolating the plurality of first and second wordlines adjacent the first side and the second side of the trench respectively from the gates in row adjacent pillars. 56. The method of claim 55, wherein forming each of the plurality of first and second word lines includes forming vertically oriented word lines having a vertical length of less than 100 nanometers. 57. The method of claim 54, wherein forming each single crystalline vertical transistor includes forming the single crystalline vertical transistor to have a vertical length of less than 100 nanometers and a horizontal width of less than 10 nanometers. 58.
The method of claim 54, wherein forming each single crystalline vertical transistor includes forming the single crystalline vertical transistor such that a horizontal junction depth for the first and the second source/drain regions of the first conductivity type is much less than a vertical length of the single crystalline vertically oriented material of the second conductivity type. 59. The method of claim 54, wherein forming each single crystalline vertical transistor includes forming the single crystalline vertical transistor such that a surface space charge region for the single crystalline vertical transistor scales down as other dimensions of the transistor scale down.
FOLDED BIT LINE DRAM WITH VERTICAL ULTRA THIN BODY TRANSISTORS

Cross Reference To Related Applications

This application is related to the following co-pending, commonly assigned U.S. patent applications: "Open Bit Line DRAM with Ultra Thin Body Transistors," attorney docket no. 1303.005US1, serial number 09/780,125; "Flash Memory with Ultra Thin Vertical Body Transistors," attorney docket no. 1303.003US1, serial number 09/780,169; "Programmable Logic Arrays with Ultra Thin Body Transistors," attorney docket no. 1303.007US1, serial number 09/780,087; "Memory Address and Decode Circuits with Ultra Thin Body Transistors," attorney docket no. 1303.006US1, serial number 09/780,144; "Programmable Memory Address and Decode Circuits with Ultra Thin Body Transistors," attorney docket no. 1303.008US1, serial number 09/780,126; and "In Service Programmable Logic Arrays with Ultra Thin Body Transistors," attorney docket no. 1303.009US1, serial number 09/780,129, which are filed on even date herewith and the disclosure of each of which is herein incorporated by reference.

Field of the Invention

The present invention relates generally to integrated circuits, and in particular to folded bit line DRAM with ultra thin body transistors.

Background of the Invention

Semiconductor memories, such as dynamic random access memories (DRAMs), are widely used in computer systems for storing data. A DRAM memory cell typically includes an access field-effect transistor (FET) and a storage capacitor. The access FET allows the transfer of data charges to and from the storage capacitor during reading and writing operations. The data charges on the storage capacitor are periodically refreshed during a refresh operation. Memory density is typically limited by a minimum lithographic feature size (F) that is imposed by lithographic processes used during fabrication.
For example, the present generation of high density dynamic random access memories (DRAMs), which are capable of storing 256 Megabits of data, require an area of 8F2 per bit of data. There is a need in the art to provide even higher density memories in order to further increase data storage capacity and reduce manufacturing costs. Increasing the data storage capacity of semiconductor memories requires a reduction in the size of the access FET and storage capacitor of each memory cell. However, other factors, such as subthreshold leakage currents and alpha-particle induced soft errors, require that larger storage capacitors be used. Thus, there is a need in the art to increase memory density while allowing the use of storage capacitors that provide sufficient immunity to leakage currents and soft errors. There is also a need in the broader integrated circuit art for dense structures and fabrication techniques. As the density requirements become higher and higher in gigabit DRAMs and beyond, it becomes more and more crucial to minimize cell area. One possible DRAM architecture is the folded bit line structure. The continuous scaling, however, of MOSFET technology to the deep sub-micron region, where channel lengths are less than 0.1 micron, 100 nm, or 1000 A, causes significant problems in the conventional transistor structures. As shown in Figure 1, junction depths should be much less than the channel length; a channel length of 1000 A thus implies junction depths of a few hundred Angstroms. Such shallow junctions are difficult to form by conventional implantation and diffusion techniques. Extremely high levels of channel doping are required to suppress short-channel effects such as drain-induced barrier lowering, threshold voltage roll off, and sub-threshold conduction. Sub-threshold conduction is particularly problematic in DRAM technology as it reduces the charge storage retention time on the capacitor cells.
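The 8F2-per-bit density figure quoted above can be checked with simple arithmetic. The sketch below is illustrative only; the feature size F = 100 nm is an assumed example value, not one stated in this specification.

```python
# Illustrative check of the 8F^2-per-bit cell area figure.
# The feature size F below is an assumed example, not from the text.

def cell_area(feature_size_m, cells_per_bit_factor=8):
    """Area of one DRAM cell in m^2 for an 8F^2 layout."""
    return cells_per_bit_factor * feature_size_m ** 2

F = 100e-9                       # assume F = 100 nm (0.1 micron)
area_per_bit = cell_area(F)      # 8 * (1e-7)^2 = 8e-14 m^2
bits = 256 * 2**20               # 256 Megabits

array_area_mm2 = area_per_bit * bits * 1e6   # m^2 -> mm^2
print(f"area per bit: {area_per_bit:.1e} m^2")
print(f"256 Mb array area: {array_area_mm2:.2f} mm^2")
```

At this assumed feature size, the cell array alone occupies roughly 21 mm2, which makes concrete why reducing the per-bit area factor below 8F2 matters at gigabit densities.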
These extremely high doping levels result in increased leakage and reduced carrier mobility. Thus, making the channel shorter to improve performance is negated by lower carrier mobility. Therefore, there is a need in the art to provide improved memory densities while avoiding the deleterious short-channel effects such as drain-induced barrier lowering, threshold voltage roll off, sub-threshold conduction, increased leakage, and reduced carrier mobility. At the same time, charge storage retention time must be maintained.

Summary of the Invention

The above mentioned problems with semiconductor memories and other problems are addressed by the present invention and will be understood by reading and studying the following specification. Systems and methods are provided for transistors with ultra thin bodies, or transistors where the surface space charge region scales down as other transistor dimensions scale down. In one embodiment of the present invention, a folded bit line DRAM device is provided. The folded bit line DRAM device includes an array of memory cells. Each memory cell in the array of memory cells includes a pillar extending outwardly from a semiconductor substrate. Each pillar includes a single crystalline first contact layer and a single crystalline second contact layer separated by an oxide layer. A single crystalline vertical transistor is formed along alternating sides of the pillar within a row of pillars. The single crystalline vertical transistor includes an ultra thin single crystalline vertical first source/drain region coupled to the first contact layer, an ultra thin single crystalline vertical second source/drain region coupled to the second contact layer, and an ultra thin single crystalline vertical body region which opposes the oxide layer and couples the first and the second source/drain regions.
A plurality of buried bit lines are formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells for interconnecting with the first contact layer of column adjacent pillars in the array of memory cells. Further, a plurality of word lines are included. Each word line is disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing alternating body regions of the single crystalline vertical transistors that are adjacent to the trench. The invention also provides a method for forming a folded bit line DRAM device. The method includes forming an array of memory cells in rows and columns. Forming each memory cell includes forming a pillar extending outwardly from a semiconductor substrate. Forming each pillar includes forming a single crystalline first contact layer of a first conductivity type and forming a single crystalline second contact layer of the first conductivity type vertically separated by an oxide layer. Forming each memory cell further includes forming a single crystalline vertical transistor along alternating sides of the pillar within a row of pillars. According to the teachings of the present invention, forming each single crystalline vertical transistor includes depositing a lightly doped polysilicon layer of a second conductivity type over the pillar and directionally etching the polysilicon layer of the second conductivity type to leave only on sidewalls of the pillars. Forming each single crystalline vertical transistor includes annealing the pillar such that the lightly doped polysilicon layer of the second conductivity type recrystallizes and lateral epitaxial solid phase regrowth occurs vertically to form a single crystalline vertically oriented material of the second conductivity type.
Further, the annealing causes the single crystalline first and second contact layers of the first conductivity type to seed a growth of single crystalline material of the first conductivity type into the lightly doped polysilicon layer of the second type to form vertically oriented first and second source/drain regions of the first conductivity type separated by the now single crystalline vertically oriented material of the second conductivity type. Forming the folded bit line DRAM device further includes forming a plurality of buried bit lines formed of single crystalline semiconductor material and disposed below the pillars in the array of memory cells. Forming the plurality of buried bit lines includes coupling the first contact layer of column adjacent pillars in the array of memory cells. The method further includes forming a plurality of word lines. According to the teachings of the present invention, forming the plurality of word lines includes forming each word line disposed orthogonally to the plurality of buried bit lines in a trench between rows of the pillars for addressing alternating body regions of the single crystalline vertical transistors that are adjacent to the trench. These and other embodiments, aspects, advantages, and features of the present invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art by reference to the following description of the invention and referenced drawings or by practice of the invention. The aspects, advantages, and features of the invention are realized and attained by means of the instrumentalities, procedures, and combinations particularly pointed out in the appended claims.
Brief Description of the Drawings

Figure 1 is an illustration of a conventional MOSFET transistor illustrating the shortcomings of such conventional MOSFETs as continuous scaling occurs to the deep sub-micron region where channel lengths are less than 0.1 micron, 100 nm, or 1000 A.

Figure 2A is a diagram illustrating generally one embodiment of a folded bit line DRAM with vertical ultra thin body transistors according to the teachings of the present invention.

Figure 2B illustrates an embodiment of the present invention for a folded bit line architecture having a single wordline/gate per vertical ultra thin body transistor formed on opposing sides of pillars according to the teachings of the present invention.

Figure 3 is a diagram illustrating a vertical ultra thin body transistor formed along side of a pillar according to the teachings of the present invention.

Figure 4A is a perspective view illustrating generally one embodiment of a portion of a folded bit line memory according to the present invention.

Figure 4B is a top view of Figure 4A illustrating generally pillars including the ultra thin single crystalline vertical transistors.

Figure 4C is a perspective view illustrating another embodiment of a portion of a folded bit line memory array according to the present invention.

Figure 4D is a cross sectional view taken along cut-line 4D-4D of Figure 4C illustrating generally pillars including the ultra thin single crystalline vertical transistors according to the teachings of the present invention.

Figures 5A-5C illustrate an initial process sequence for forming pillars along side of which vertical ultra thin body transistors can later be formed as part of forming a folded bit line DRAM according to the teachings of the present invention.

Figures 6A-6C illustrate that the techniques described in connection with Figures 5A-5C can be implemented with a bulk CMOS technology or a silicon on insulator (SOI) technology.
Figures 7A-7D illustrate a process sequence, continuing from the pillar formation embodiments provided in Figures 5A-6C, to form vertical ultra thin body transistors along side of the pillars.

Figures 8A-8C illustrate a process sequence for forming a horizontal gate structure embodiment, referred to herein as horizontal replacement gates, in connection with the present invention.

Figures 9A-9D illustrate a process sequence for forming a vertical gate structure embodiment in connection with the present invention.

Description of the Preferred Embodiments

In the following detailed description of the invention, reference is made to the accompanying drawings which form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. The embodiments are intended to describe aspects of the invention in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and changes may be made without departing from the scope of the present invention. In the following description, the terms wafer and substrate are used interchangeably to refer generally to any structure on which integrated circuits are formed, and also to such structures during various stages of integrated circuit fabrication. Both terms include doped and undoped semiconductors, epitaxial layers of a semiconductor on a supporting semiconductor or insulating material, combinations of such layers, as well as other such structures that are known in the art. The following detailed description is not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims. Figure 2A is a diagram illustrating generally one embodiment of a folded bit line DRAM with vertical ultra thin body transistors according to the teachings of the present invention.
In general, Figure 2A shows an integrated circuit 200, such as a semiconductor memory device, incorporating an array of memory cells provided by the invention. As shown in Figure 2A, circuit 200 includes memory cell arrays 210, such as 210A and 210B. Each array 210 includes M rows and N columns of memory cells 212. In the embodiment of Figure 2A, each memory cell includes a transfer device, such as n-channel cell access field-effect transistor (FET) 230. More particularly, access FET 230 includes at least one, but may include two, gates for controlling conduction between the access FET's 230 first and second source/drain terminals. Access FET 230 is coupled at a second source/drain terminal to a storage node of a storage capacitor 232. The other terminal of storage capacitor 232 is coupled to a reference voltage such as a ground voltage VSS. Each of the M rows includes one of word lines WL0, WL1,..., WLm-1, WLm which serve as or are coupled to a first gate in alternating row adjacent access FETs 230. In the embodiment shown in Figure 2A, each of the M rows also includes one of word lines R0, R1,..., Rm-1, Rm coupled to a second gate in alternating row adjacent access FETs 230 in memory cells 212. As one of ordinary skill in the art will understand upon reading this disclosure, two wordlines per access FET 230 are not required to practice the invention, but rather represent one embodiment for the same. The invention may be practiced having a single wordline/gate per alternating row adjacent access FET 230, and the same is illustrated in Figure 2B. The invention is not so limited. The term wordline includes any interconnection line for controlling conduction between the first and second source/drain terminals of access FETs 230. According to the teachings of the present invention, and as explained in more detail below, access FETs 230 include vertical ultra thin body transistors 230. Each of the N columns includes one of bit lines BL0, BL1,...
BLn-1, BLn. Bit lines BL0-BLn are used to write to and read data from memory cells 212. Word lines WL0-WLm and R0-Rm are used to activate alternating row adjacent access FETs 230 to access a particular row of memory cells 212 that is to be written or read. As shown in Figures 2A and 2B, addressing circuitry is also included. For example, address buffer 214 controls column decoders 218, which also include sense amplifiers and input/output circuitry that is coupled to bit lines BL0-BLn. Address buffer 214 also controls row decoders 216. Row decoders 216 and column decoders 218 selectably access memory cells 212 in response to address signals that are provided on address lines 220 during read and write operations. The address signals are typically provided by an external controller such as a microprocessor or other memory controller. Each of memory cells 212 has a substantially identical structure, and accordingly, only one memory cell 212 structure is described herein. The same is described in more detail in connection with Figure 3. In one example mode of operation, circuit 200 receives an address of a particular memory cell 212 at address buffer 214. Address buffer 214 identifies one of the word lines WL0-WLm of the particular memory cell 212 to row decoder 216. Row decoder 216 selectively activates the particular word line WL0-WLm to activate access FETs 230 of each memory cell 212 that is connected to the selected word line WL0-WLm. Column decoder 218 selects the one of bit lines BL0-BLn of the particularly addressed memory cell 212. For a write operation, data received by input/output circuitry is coupled to the one of bit lines BL0-BLn and through the access FET 230 to charge or discharge the storage capacitor 232 of the selected memory cell 212 to represent binary data.
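The addressing and write sequence just described (row decoder activates one word line, the column decoder selects one bit line, and the bit line charges or discharges the selected cell's storage capacitor) can be sketched as a toy behavioral model. All names and the array size below are illustrative assumptions, not part of this specification, and the model abstracts away the sense amplifiers and refresh circuitry entirely.

```python
# Toy behavioral model of the DRAM access sequence: the row decoder
# drives one word line (turning on that row's access FETs) and the
# column decoder selects one bit line to transfer charge to or from
# the addressed cell's storage capacitor.
# All names and the array size are illustrative assumptions.

class ToyDram:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        # One stored bit per storage capacitor.
        self.capacitor = [[0] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # Word line `row` is activated; bit line `col` charges or
        # discharges the selected capacitor to represent binary data.
        self.capacitor[row][col] = bit

    def read(self, row, col):
        # Charge on the capacitor is coupled onto the bit line and
        # amplified.  (Real DRAM reads are destructive and require a
        # write-back/refresh; that is not modeled here.)
        return self.capacitor[row][col]

mem = ToyDram(rows=4, cols=4)
mem.write(2, 3, 1)
print(mem.read(2, 3))  # prints 1
print(mem.read(0, 0))  # prints 0
```

This is a logical abstraction only; it captures the row/column selection flow, not the analog charge-sharing behavior of the array.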
For a read operation, data stored in the selected memory cell 212, as represented by the charge on its storage capacitor 232, is coupled to the one of bit lines BL0-BLn, amplified, and a corresponding voltage level is provided to the input/output circuits. According to one aspect of the invention, each of the first and second gates of access FET 230 is capable of controlling the conduction between its first and second source/drain terminals, as described below. In this embodiment, parallel switching functionality can be effected between the first and second source/drain terminals of access FET 230 by independently operating the particular ones of word lines WL0-WLm and corresponding ones of word lines R0-Rm. For example, by independently activating word line WL0 and word line R0, both of which are coupled to the same row of memory cells 212, independently controlled inversion channels can be formed in each corresponding access FET 230 by respective first and second gates for allowing conduction between the first and second source/drain regions. According to another aspect of the invention, each of the first and second gates of access FET 230 is capable of controlling the conduction between its first and second source/drain terminals, but the first and second gates of particular access FETs 230 are synchronously activated, rather than independently operated. For example, by synchronously activating word line WL0 and word line R0, both of which are coupled to the same row of memory cells 212, synchronously activated inversion channels can be formed in each corresponding access FET 230 by respective first and second gates for allowing conduction between the first and second source/drain regions. In this embodiment, synchronous activation and deactivation of the first and second gates allows better control over the potential distributions in the access FET 230 when it is in a conductive state.
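The dual-gate operation described above, where each gate can independently form an inversion channel so that the two channels act in parallel, can be reduced to a simple truth-table sketch. This is a logical abstraction under assumed behavior (conduction whenever at least one gate is driven), not an electrical simulation, and the function name is illustrative.

```python
# Illustrative truth-table model of the dual-gate access FET described
# above.  Each gate can independently form an inversion channel, so the
# channels act in parallel: the FET conducts if either gate is active.
# This is a logical abstraction, not an electrical model.

def fet_conducts(wl_gate_active, r_gate_active):
    """Parallel inversion channels: conduct if either gate is driven."""
    return wl_gate_active or r_gate_active

# Independent operation: activating only the WL-side gate suffices.
print(fet_conducts(True, False))   # prints True

# Synchronous operation: both gates toggled together, as described
# for better control of the potential distribution in the body.
for state in (True, False):
    print(fet_conducts(state, state))  # follows the common gate state
```

The synchronous case matters electrically (fully depleted operation) rather than logically; the sketch only records which gate combinations yield conduction.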
Synchronous activation and deactivation can be used to obtain well-controlled fully depleted operating characteristics of access FET 230. In a further embodiment in which the first and second gates are either synchronously or independently activated, different activation voltages can be applied to the first and second gates of the access FET 230. For example, different voltages can be provided to synchronously activated word lines WL0 and R0, thereby providing different activation voltages to the first and second gates of the access FET 230 to obtain particular desired operating characteristics. Similarly, different deactivation voltages can be applied to the first and second gates of the access FET 230. For example, different deactivation voltages can be provided to synchronously deactivated word lines WL0 and R0 and corresponding first and second gates of access FETs 230, in order to obtain particular desired operating characteristics. Similarly, different activation and deactivation voltages can be applied to independently operated word lines such as WL0 and R0. Figure 3 is a diagram illustrating an access FET 300 formed according to the teachings of the present invention which makes up a portion of the memory cells 212 shown in Figures 2A and 2B. As shown in Figure 3, access FET 300 includes a vertical ultra thin body transistor, or otherwise stated, an ultra thin single crystalline vertical transistor. According to the teachings of the present invention, the structure of the access FET 300 includes a pillar 301 extending outwardly from a semiconductor substrate 302. The pillar includes a single crystalline first contact layer 304 and a single crystalline second contact layer 306 vertically separated by an oxide layer 308. An ultra thin single crystalline vertical transistor 310 is formed along side of the pillar 301.
The ultra thin single crystalline vertical transistor 310 includes an ultra thin single crystalline vertical body region 312 which separates an ultra thin single crystalline vertical first source/drain region 314 and an ultra thin single crystalline vertical second source/drain region 316. A gate 318, which may be integrally formed with a word line as described above and below, is formed opposing the ultra thin single crystalline vertical body region 312 and is separated therefrom by a thin gate oxide layer 320. According to embodiments of the present invention, the ultra thin single crystalline vertical transistor 310 includes a transistor having a vertical length of less than 100 nanometers and a horizontal width of less than 10 nanometers. Thus, in one embodiment, the ultra thin single crystalline vertical body region 312 includes a channel having a vertical length (L) of less than 100 nanometers. Also, the ultra thin single crystalline vertical body region 312 has a horizontal width (W) of less than 10 nanometers. And, the ultra thin single crystalline vertical first source/drain region 314 and the ultra thin single crystalline vertical second source/drain region 316 have a horizontal width of less than 10 nanometers. According to the teachings of the present invention, the ultra thin single crystalline vertical transistor 310 is formed from solid phase epitaxial growth. Figure 4A is a perspective view illustrating generally one embodiment of a portion of a folded bit line memory device or array 410 formed in rows and columns according to the present invention. Figure 4A illustrates portions of six memory cells 401-1, 401-2, 401-3, 401-4, 401-5, and 401-6 which include ultra thin single crystalline vertical transistors 430. According to the teachings of the present invention, these ultra thin single crystalline vertical transistors 430 are formed, as described in connection with Figure 3, along side of pillars extending outwardly from a semiconductor substrate 400.
These pillars are formed on conductive segments of bit lines 402 which represent particular ones of bit lines BL0-BLn aligned in the column direction. In the embodiment shown in Figure 4A, conductive segments of first word line 406 represent any one of word lines WL0-WLm, which provide integrally formed first gates for ultra thin single crystalline vertical transistors 430 for row adjacent pillars, on one side of a trench in which the particular first word line 406 is interposed. This is thus dependent on the desired circuit configuration as presented in connection with Figure 2B. Conductive segments of second word line 408 represent any one of word lines WL0-WLm, which provide integrally formed second gates for ultra thin single crystalline vertical transistors 430 for alternating, row adjacent pillars, in a neighboring trench in which the particular second word line 408 is interposed. As explained in connection with Figure 3, ultra thin single crystalline vertical transistors 430 are formed along side of pillars that extend outwardly from an underlying substrate 400. As described below, substrate 400 includes bulk semiconductor starting material, semiconductor-on-insulator (SOI) starting material, or SOI material that is formed from a bulk semiconductor starting material during processing. Figure 4A illustrates one example embodiment, using bulk silicon processing techniques. As shown in Figure 4A, the pillars include an n+ silicon layer formed on a bulk silicon substrate 400 to produce first contact layer 412 and integrally formed n++ conductively doped bit lines 402 defining a particular column of memory cells shown as BL0-BLn in Figures 2A and 2B. An oxide layer 414 is formed on n+ first contact layer 412. A further n+ silicon layer is formed on oxide layer 414 to produce second contact layer 416 in the pillars.
Storage capacitors 432 are formed on the second contact layers 416 using any suitable technique, as the same will be known and understood by one of ordinary skill in the art upon reading this disclosure. Word lines WL0-WLm are disposed (interdigitated) within the array 410. For example, first word line 406 is interposed in a trench 431 between pillars 401-1 and 401-3 and between pillars 401-2 and 401-4. Second word line 408 is interposed in a trench 432 between semiconductor pillars of memory cell pairs 401-3 and 401-5 and between pillars 401-4 and 401-6. In the embodiment shown in Figure 4A, the ultra thin single crystalline vertical transistors 430 are formed along side of the pillars adjacent to the trenches 431 and 432 in alternating, row adjacent pillars. Accordingly, the folded bit line device is provided with word lines 406 and 408 serving as or addressing gates for transistors 430 in alternating pillars along a row. As shown in Figure 4A, the ultra thin single crystalline vertical transistors 430 which are formed along side of the pillars are also in contact with bit lines 402 through the first contact layers 412. In this embodiment, bit lines 402 contact bulk semiconductor substrate 400. Isolation trenches 420, 431, and 432 provide isolation between ultra thin single crystalline vertical transistors 430 of adjacent memory cells 401-1, 401-2, 401-3, 401-4, 401-5, and 401-6. Columns of pillars along a bit line direction are separated by a trench 420 that is subsequently filled with a suitable insulating material such as silicon dioxide. For example, a trench 420 provides isolation between pillars 401-1 and 401-2 and between pillars 401-3 and 401-4. Rows of pillars including the ultra thin single crystalline vertical transistors 430 are alternatingly separated by trenches 431 and 432, each of which contains word lines WL0-WLm as described above.
Such word lines WL0-WLm are separated from substrate 400 by an underlying insulating layer, described below. Also, as shown in the embodiment of Figure 4A, word lines WL0-WLm are separated by a gate oxide from the ultra thin vertically oriented single crystalline body regions of ultra thin single crystalline vertical transistors 430 which are adjacent to the trenches 431 and 432 in alternating, row adjacent pillars. Trenches 431 and 432 extend substantially orthogonally to bit lines 402. In one embodiment, respective first and second word lines 406 and 408 are formed of a refractory metal, such as tungsten or titanium. In another embodiment, first and second word lines 406 and 408 can be formed of n+ doped polysilicon. Similarly, other suitable conductors could also be used for first and second word lines 406 and 408, respectively. One of ordinary skill in the art will further understand upon reading this disclosure that the conductivity types described herein can be reversed by altering doping types such that the present invention is equally applicable to include structures having ultra thin vertically oriented single crystalline p-channel type transistors 430. The invention is not so limited. Burying first and second word lines 406 and 408 below a top semiconductor surface of the vertical pillars provides additional space on the upper portion of memory cells 401-1, 401-2, 401-3, 401-4, 401-5, and 401-6 for formation of storage capacitors 432. Increasing the area available for forming storage capacitor 432 increases the obtainable capacitance value of storage capacitor 432. In one embodiment, storage capacitor 432 is a stacked capacitor that is formed using any of the many capacitor structures and process sequences known in the art. Other techniques could also be used for implementing storage capacitor 432. Contacts to the first and second word lines 406 and 408, respectively, can be made outside of the memory array 410.
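The passage above notes that enlarging the area available for the storage capacitor increases its obtainable capacitance. A minimal parallel-plate sketch of that relationship follows; the permittivity, dielectric thickness, and plate areas are assumed example values, not dimensions taken from this disclosure.

```python
# Illustrative parallel-plate estimate of how storage capacitance grows
# with the plate area freed up by burying the word lines. The dielectric
# material, thickness, and areas below are assumed example values.

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.9        # relative permittivity of SiO2 (assumed dielectric)
T_DIEL = 5e-9      # dielectric thickness in meters (assumed)

def plate_capacitance(area_m2):
    """C = eps0 * eps_r * A / d for a simple parallel-plate model."""
    return EPS0 * EPS_R * area_m2 / T_DIEL

# Doubling the available plate area doubles the obtainable capacitance.
c_small = plate_capacitance(0.1e-12)  # 0.1 square microns of plate area
c_large = plate_capacitance(0.2e-12)  # 0.2 square microns of plate area
assert abs(c_large / c_small - 2.0) < 1e-9
```

Real stacked-capacitor structures fold the plate into three dimensions, so the effective area (and hence capacitance) grows faster than the cell footprint alone would suggest; the linear area dependence is the point being illustrated.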
Figure 4B is a top view of Figure 4A illustrating generally pillars 401-1, 401-2, 401-3, 401-4, 401-5, and 401-6 including the ultra thin single crystalline vertical transistors 430. Figure 4B illustrates a subsequently formed insulator, such as oxide 424, formed in trenches 420 to provide isolation between the columns of pillars including the ultra thin single crystalline vertical transistors 430. In this embodiment, first word line 406 is in a trench 431 between column adjacent pillars having the ultra thin single crystalline vertical transistors 430, such as between pillars 401-1 and 401-3 which are coupled to the same bit line. As shown in Figure 4A, no ultra thin single crystalline vertical transistor 430 has been formed on the side of pillar 401-3 which adjoins trench 431. Thus, in Figure 4B, wordline 406 is only a passing wordline along a side of pillar 401-3 in trench 431. However, as shown in Figure 4A, an ultra thin single crystalline vertical transistor 430 has been formed on the side of pillar 401-1 which adjoins trench 431. Thus, as shown in Figure 4B, wordline 406 serves as a gate, separated by gate oxide 418, for the ultra thin single crystalline vertical transistor 430 along the side of pillar 401-1 which adjoins trench 431. Similarly, as shown in Figure 4A, no ultra thin single crystalline vertical transistor 430 has been formed on the side of pillar 401-2 which adjoins trench 431. Thus, in Figure 4B, wordline 406 is only a passing wordline along a side of pillar 401-2 in trench 431. However, as shown in Figure 4A, an ultra thin single crystalline vertical transistor 430 has been formed on the side of pillar 401-4 which adjoins trench 431. Thus, as shown in Figure 4B, wordline 406 serves as a gate, separated by gate oxide 418, for the ultra thin single crystalline vertical transistor 430 along the side of pillar 401-4 which adjoins trench 431.
Thus, in the folded bit line DRAM embodiment of Figure 4B, first word line 406 is shared between alternating, row adjacent pillars including the ultra thin single crystalline vertical transistors 430, which are coupled to different bit lines 402. First word line 406 is located in trench 431 that extends between the pillars 401-1 and 401-3. As shown in Figure 4B, first word line 406 is separated by a thin oxide 418 from the vertically oriented pillars 401-1, 401-2, 401-3, and 401-4 which are adjacent trench 431. Thus, thin oxide 418 serves as a thin gate oxide for those pillars which have the ultra thin single crystalline vertical transistors 430 on a side adjoining trench 431, e.g. pillars 401-1 and 401-4. Analogously, in the embodiment of Figure 4B, second word line 408 is in a trench 432 between column adjacent pillars having the ultra thin single crystalline vertical transistors 430, such as between pillars 401-3 and 401-5 which are coupled to the same bit line. As shown in Figure 4A, no ultra thin single crystalline vertical transistor 430 has been formed on the side of pillar 401-5 which adjoins trench 432. Thus, in Figure 4B, wordline 408 is only a passing wordline along a side of pillar 401-5 in trench 432. However, as shown in Figure 4A, an ultra thin single crystalline vertical transistor 430 has been formed on the side of pillar 401-3 which adjoins trench 432. Thus, as shown in Figure 4B, wordline 408 serves as a gate, separated by gate oxide 418, for the ultra thin single crystalline vertical transistor 430 along the side of pillar 401-3 which adjoins trench 432. Similarly, as shown in Figure 4A, no ultra thin single crystalline vertical transistor 430 has been formed on the side of pillar 401-6 which adjoins trench 432. Thus, in Figure 4B, wordline 408 is only a passing wordline along a side of pillar 401-6 in trench 432.
However, as shown in Figure 4A, an ultra thin single crystalline vertical transistor 430 has been formed on the side of pillar 401-4 which adjoins trench 432. Thus, as shown in Figure 4B, wordline 408 serves as a gate, separated by gate oxide 418, for the ultra thin single crystalline vertical transistor 430 along the side of pillar 401-4 which adjoins trench 432. Thus, in the folded bit line DRAM embodiment of Figure 4B, second word line 408 is shared between alternating, row adjacent pillars including the ultra thin single crystalline vertical transistors 430, which are coupled to different bit lines 402. Second word line 408 is located in trench 432 that extends between the pillars 401-3 and 401-5. As shown in Figure 4B, second word line 408 is separated by a thin oxide 418 from the vertically oriented pillars 401-3, 401-4, 401-5, and 401-6 which are adjacent trench 432. Thus, thin oxide 418 serves as a thin gate oxide for those pillars which have the ultra thin single crystalline vertical transistors 430 on a side adjoining trench 432, e.g. pillars 401-3 and 401-6. Figure 4C is a perspective view illustrating another embodiment of a portion of a folded bit line memory array 410 according to the present invention. Figure 4C illustrates portions of six memory cells 401-1, 401-2, 401-3, 401-4, 401-5, and 401-6 which include ultra thin single crystalline vertical transistors 430. According to the teachings of the present invention, these ultra thin single crystalline vertical transistors 430 are formed, as described in connection with Figure 3, alongside pillars extending outwardly from a semiconductor substrate 400. These pillars are formed on conductive segments of bit lines 402 which represent particular ones of bit lines BL0-BLn.
In the embodiment shown in Figure 4C, conductive segments of first word lines 406A and 406B represent any one of word lines WL0-WLm, which provide integrally formed first gates for ultra thin single crystalline vertical transistors 430 formed along alternating, row adjacent pillars on opposing sides of a trench in which the particular first word lines 406A and 406B are interposed. Conductive segments of second word lines 408A and 408B represent any one of word lines R0-Rm, which provide integrally formed second gates for ultra thin single crystalline vertical transistors 430 formed along alternating, row adjacent pillars on opposing sides of a trench in which the particular second word lines 408A and 408B are interposed. Thus, word lines WL0-WLm and R0-Rm are alternatingly disposed (interdigitated) within the array 410. As explained in connection with Figure 3, ultra thin single crystalline vertical transistors 430 are formed alongside pillars that extend outwardly from an underlying substrate 400. As described below, substrate 400 includes bulk semiconductor starting material, semiconductor-on-insulator (SOI) starting material, or SOI material that is formed from a bulk semiconductor starting material during processing. Figure 4C illustrates one example embodiment, using bulk silicon processing techniques. As shown in Figure 4C, the pillars include an n+ silicon layer formed on a bulk silicon substrate 400 to produce first contact layer 412 and integrally formed n++ conductively doped bit lines 402 defining a particular column of memory cells shown as BL0-BLn in Figures 2A and 2B. An oxide layer 414 is formed on n+ first contact layer 412. A further n+ silicon layer is formed on oxide layer 414 to produce second contact layer 416 in the pillars.
Storage capacitors 432 are formed on the second contact layers 416 using any suitable technique, as the same will be known and understood by one of ordinary skill in the art upon reading this disclosure. Word lines WL0-WLm and R0-Rm are alternatingly disposed (interdigitated) within the array 410. For example, first word lines 406A and 406B are interposed in a trench 431 between pillars 401-1 and 401-3 and between pillars 401-2 and 401-4, and are separated by an insulator material such as an oxide. Second word lines 408A and 408B are interposed in a trench 432 between semiconductor pillars of memory cell pairs 401-3 and 401-5 and between pillars 401-4 and 401-6. In the embodiment shown in Figure 4C, the ultra thin single crystalline vertical transistors 430 are formed alongside the pillars adjacent to the trenches 431 and 432 in alternating, row adjacent pillars. Accordingly, the folded bit line device is provided with word lines 406A, 406B and 408A, 408B serving as, or addressing, gates for transistors 430 in alternating pillars along a row. As shown in Figure 4C, the ultra thin single crystalline vertical transistors 430 which are formed alongside the pillars are also in contact with bit lines 402 through the first contact layers 412. In this embodiment, bit lines 402 contact bulk semiconductor substrate 400. Isolation trenches provide isolation between ultra thin single crystalline vertical transistors 430 of adjacent memory cells 401-1, 401-2, 401-3, 401-4, 401-5, and 401-6. Columns of pillars along a bit line direction are separated by a trench 420 that is subsequently filled with a suitable insulating material such as silicon dioxide. For example, a trench 420 provides isolation between pillars 401-1 and 401-2 and between pillars 401-3 and 401-4.
Rows of pillars including the ultra thin single crystalline vertical transistors 430 are alternatingly separated by trenches 431 and 432, each of which contains word lines WL0-WLm and R0-Rm as described above. Such word lines WL0-WLm and R0-Rm are separated from substrate 400 by an underlying insulating layer, described below, and separated from the ultra thin vertically oriented single crystalline body regions of ultra thin single crystalline vertical transistors 430 (as described in connection with Figure 3) by a gate oxide, also described below. Trenches 431 and 432 extend substantially orthogonally to bit lines 402. In one embodiment, first and second word lines 406A, 406B and 408A, 408B, respectively, are formed of a refractory metal, such as tungsten or titanium. In another embodiment, first and second word lines 406A, 406B and 408A, 408B, respectively, can be formed of n+ doped polysilicon. Similarly, other suitable conductors could also be used for first and second word lines 406A, 406B and 408A, 408B, respectively. One of ordinary skill in the art will further understand upon reading this disclosure that the conductivity types described herein can be reversed by altering doping types such that the present invention is equally applicable to include structures having ultra thin vertically oriented single crystalline p-channel type transistors 430. The invention is not so limited. Burying first and second word lines 406A, 406B and 408A, 408B, respectively, below a top semiconductor surface of the vertical pillars provides additional space on the upper portion of memory cells 401-1, 401-2, 401-3, 401-4, 401-5, and 401-6 for formation of storage capacitors 433. Increasing the area available for forming storage capacitor 433 increases the obtainable capacitance value of storage capacitor 433.
In one embodiment, storage capacitor 433 is a stacked capacitor that is formed using any of the many capacitor structures and process sequences known in the art. Other techniques could also be used for implementing storage capacitor 433. Contacts to the first and second word lines 406A, 406B and 408A, 408B, respectively, can be made outside of the memory array 410. Figure 4D is a cross sectional view taken along cut-line 4D-4D of Figure 4C illustrating generally pillars including the ultra thin single crystalline vertical transistors 430. As shown in Figure 4D, first word lines 406A and 406B are formed on opposing sides of trench 431 adjacent pillars including the ultra thin single crystalline vertical transistors 430, such as between pillars 401-2 and 401-4 which are coupled to the same bit line in a given column. In the embodiment of Figure 4C, the ultra thin single crystalline vertical transistors 430 have been formed as pairs on opposing sides of the pillars 401-1, 401-2, 401-3, 401-4, 401-5, and 401-6. Accordingly, in the folded bit line DRAM device of the present invention, wordline 406A is separated by a thick oxide 418A from the ultra thin single crystalline vertical transistor 430 formed alongside pillar 401-2 adjoining trench 431 such that wordline 406A only serves as a passing wordline for this ultra thin single crystalline vertical transistor 430. Conversely, wordline 406B is separated by a thin gate oxide 418B from the ultra thin single crystalline vertical transistor 430 formed alongside pillar 401-4 adjoining trench 431 such that wordline 406B serves as an integrally formed gate for this ultra thin single crystalline vertical transistor 430.
Similarly, wordline 408A is separated by a thick oxide 418A from the ultra thin single crystalline vertical transistor 430 formed alongside pillar 401-4 adjoining trench 432 such that wordline 408A only serves as a passing wordline for this ultra thin single crystalline vertical transistor 430. And, wordline 408B is separated by a thin gate oxide 418B from the ultra thin single crystalline vertical transistor 430 formed alongside pillar 401-6 adjoining trench 432 such that wordline 408B serves as an integrally formed gate for this ultra thin single crystalline vertical transistor 430. Figures 5A-5C illustrate an initial process sequence for forming pillars alongside of which vertical ultra thin body transistors can later be formed as part of forming a folded bit line DRAM according to the teachings of the present invention. The dimensions suggested are appropriate to a 0.1 µm cell dimension (CD) technology and may be scaled accordingly for other CD sizes. In the embodiment of Figure 5A, a p-type bulk silicon substrate 510 starting material is used. An n++ and n+ silicon composite first contact layer 512 is formed on substrate 510, such as by ion implantation, epitaxial growth, or a combination of such techniques, to form a single crystalline first contact layer 512. According to the teachings of the present invention, the more heavily conductively doped lower portion of the first contact layer 512 also functions as the bit line 502. The thickness of the n++ portion of first contact layer 512 is that of the desired bit line 502 thickness, which can be approximately 0.1 to 0.25 µm. The overall thickness of the first contact layer 512 can be approximately 0.2 to 0.5 µm. An oxide layer 514 of approximately 100 nanometers (nm), i.e. 0.1 µm, thickness or less is formed on the first contact layer 512. In one embodiment, the oxide layer 514 can be formed by thermal oxide growth techniques.
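The suggested dimensions above are tied to a 0.1 µm CD technology and, per the text, may be scaled accordingly for other CD sizes. A minimal sketch of that linear scaling follows; the dictionary key names and the 50 nm target CD are assumptions for illustration, while the (min, max) nanometer values come from the stated process dimensions.

```python
# Illustrative linear scaling of the suggested layer dimensions from the
# 0.1 um (100 nm) reference CD to another CD size. Key names and the
# 50 nm target CD are assumed for the example; the (min, max) values in
# nanometers are the suggested dimensions from the process description.

REFERENCE_CD_NM = 100  # 0.1 um cell dimension (CD) technology

reference_dims_nm = {
    "bit_line_502_thickness": (100, 250),       # n++ portion, 0.1-0.25 um
    "first_contact_512_thickness": (200, 500),  # overall, 0.2-0.5 um
    "oxide_514_thickness": (50, 100),           # 100 nm or less (lower bound assumed)
}

def scale_dims(dims_nm, target_cd_nm):
    """Scale each (min, max) dimension linearly with the CD."""
    factor = target_cd_nm / REFERENCE_CD_NM
    return {name: (lo * factor, hi * factor) for name, (lo, hi) in dims_nm.items()}

# Example: a hypothetical 50 nm CD process halves every suggested dimension.
scaled = scale_dims(reference_dims_nm, target_cd_nm=50)
assert scaled["bit_line_502_thickness"] == (50.0, 125.0)
```

Linear scaling is only the first-order rule the text implies; in practice some thicknesses (e.g. gate oxides) scale by other constraints.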
A second contact layer 516 of n+ silicon is formed on the oxide layer 514. The second contact layer 516 is formed to a thickness of 100 nm or less. Next, a thin silicon dioxide layer (SiO2) 518 of approximately 10 nm is deposited on the second contact layer 516. A thicker silicon nitride layer (Si3N4) 520 of approximately 20 to 50 nm in thickness is deposited on the thin silicon dioxide layer (SiO2) 518 to form pad layers, e.g. layers 518 and 520. These pad layers 518 and 520 can be deposited using any suitable technique, such as by chemical vapor deposition (CVD). A photoresist is applied and selectively exposed to provide a mask for the directional etching of trenches 525, such as by reactive ion etching (RIE). The directional etching results in a plurality of column bars 530 containing the stack of nitride layer 520, pad oxide layer 518, second contact layer 516, oxide layer 514, and first contact layer 512. Trenches 525 are etched to a depth that is sufficient to reach the surface 532 of substrate 510, thereby providing separation between conductively doped bit lines 502. The photoresist is removed. Bars 530 are now oriented in the direction of bit lines 502, e.g. the column direction. In one embodiment, bars 530 have a surface line width of approximately 0.1 micron or less. The width of each trench 525 can be approximately equal to the line width of bars 530. The structure is now as appears in Figure 5A. In Figure 5B, isolation material 533, such as SiO2, is deposited to fill the trenches 525. The working surface is then planarized, such as by chemical mechanical polishing/planarization (CMP). A second photoresist is applied and selectively exposed to provide a mask for the directional etching of trenches 535 orthogonal to the bit line 502 direction, e.g. the row direction. Trenches 535 can be formed using any suitable technique, such as by reactive ion etching (RIE).
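Since trenches 525 must reach the substrate surface 532, their depth equals the summed thickness of the column-bar stack. A short sketch of that thickness budget follows, carrying the stated (min, max) values; where the text only gives an upper bound ("100 nm or less"), the lower bound is an assumption.

```python
# Illustrative check: trenches 525 must reach the substrate surface 532,
# so their depth equals the summed thickness of the column-bar stack.
# Values in nanometers from the stated process dimensions; where the text
# gives only an upper bound, the lower bound is an assumed placeholder.

stack_nm = [
    (200, 500),  # first contact layer 512 (0.2-0.5 um overall)
    (50, 100),   # oxide layer 514 (100 nm or less; lower bound assumed)
    (50, 100),   # second contact layer 516 (100 nm or less; lower bound assumed)
    (10, 10),    # pad oxide 518 (approximately 10 nm)
    (20, 50),    # pad nitride 520 (20-50 nm)
]

trench_525_depth_min_nm = sum(lo for lo, hi in stack_nm)
trench_525_depth_max_nm = sum(hi for lo, hi in stack_nm)
assert trench_525_depth_min_nm == 330
assert trench_525_depth_max_nm == 760
```

So at the suggested dimensions the trench-525 etch is on the order of 0.3 to 0.8 µm deep, several times the 0.1 µm trench width, which is why a directional etch such as RIE is specified.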
Trenches 535 are etched through the exposed SiO2 and the exposed stack of nitride layer 520, pad oxide layer 518, second contact layer 516, oxide layer 514, and into the first contact layer 512, but only to a depth sufficient to leave the desired bit line 502 thickness, e.g. a remaining bit line thickness of typically less than 100 nm. The structure is now as appears in Figure 5B, having individually defined pillars 540-1, 540-2, 540-3, and 540-4. Figure 5C illustrates a cross sectional view of the structure shown in Figure 5B taken along cut-line 5C-5C. Figure 5C shows the continuous bit line 502 connecting adjacent pillars 540-1 and 540-2 in any given column. Trench 535 remains for the subsequent formation of wordlines, as described below, in between adjacent rows of the pillars, such as a row formed by pillars 540-1 and 540-4 and a row formed by pillars 540-2 and 540-3. Figures 6A-6C illustrate that the above techniques described in connection with Figures 5A-5C can be implemented on a bulk CMOS technology substrate or a silicon on insulator (SOI) technology substrate. Figure 6A represents the completed sequence of process steps shown in Figures 5A-5C, minus the pad layers, formed on a lightly doped p-type bulk silicon substrate 610. The structure shown in Figure 6A is similar to the cross sectional view in Figure 5C and shows a continuous bit line 602 with pillar stacks 640-1 and 640-2 formed thereon. The pillars 640-1 and 640-2 include an n+ first contact layer 612, an oxide layer 614 formed thereon, and a second n+ contact layer 616 formed on the oxide layer 614. Figure 6B represents the completed sequence of process steps shown in Figures 5A-5C, minus the pad layers, formed on a commercial SOI wafer, such as SIMOX. As shown in Figure 6B, a buried oxide layer 611 is present on the surface of the substrate 610.
The structure shown in Figure 6B is also similar to the cross sectional view in Figure 5C and shows a continuous bit line 602 with pillar stacks 640-1 and 640-2 formed thereon, only here the continuous bit line 602 is separated from the substrate 610 by the buried oxide layer 611. Again, the pillars 640-1 and 640-2 include an n+ first contact layer 612, an oxide layer 614 formed thereon, and a second n+ contact layer 616 formed on the oxide layer 614. Figure 6C represents the completed sequence of process steps shown in Figures 5A-5C, minus the pad layers, forming islands of silicon on an insulator, where the insulator 613 has been formed by oxide undercuts. Such a process includes the process described in more detail in U.S. Patent No. 5,691,230, by Leonard Forbes, entitled "Technique for Producing Small Islands of Silicon on Insulator," issued 11/25/1997, which is incorporated herein by reference. The structure shown in Figure 6C is also similar to the cross sectional view in Figure 5C and shows a continuous bit line 602 with pillar stacks 640-1 and 640-2 formed thereon, only here the continuous bit line 602 is separated from the substrate 610 by the insulator 613 which has been formed by oxide undercuts such as according to the process referenced above. Again, the pillars 640-1 and 640-2 include an n+ first contact layer 612, an oxide layer 614 formed thereon, and a second n+ contact layer 616 formed on the oxide layer 614. Thus, according to the teachings of the present invention, the sequence of process steps to form pillars, as shown in Figures 5A-5C, can include forming the same on at least three different types of substrates as shown in Figures 6A-6C.
Figures 7A-7C illustrate a process sequence continuing from the pillar formation embodiments provided in Figures 5A-5C, and any of the substrates shown in Figures 6A-6C, to form vertical ultra thin body transistors alongside the pillars, such as pillars 540-1 and 540-2 in Figure 5C. For purposes of illustration only, Figure 7A illustrates an embodiment of pillars 740-1 and 740-2 formed on a p-type substrate 710 and separated by a trench 730. Analogous to the description provided in connection with Figures 5A-5C, Figure 7A shows a first single crystalline n+ contact layer 712, a portion of which, in one embodiment, is integrally formed with an n++ bit line 702. An oxide layer region 714 is formed in pillars 740-1 and 740-2 on the first contact layer 712. A second n+ contact layer 716 is shown formed on the oxide layer region 714 in the pillars 740-1 and 740-2. And, pad layers of (SiO2) 718 and (Si3N4) 720, respectively, are shown formed on the second contact layer 716 in the pillars 740-1 and 740-2. In Figure 7B, a lightly doped p-type polysilicon layer 745 is deposited over the pillars 740-1 and 740-2 and directionally etched to leave the lightly doped p-type material 745 on the sidewalls 750 of the pillars 740-1 and 740-2. In one embodiment according to the teachings of the present invention, the lightly doped p-type polysilicon layer is directionally etched to leave the lightly doped p-type material 745 on the sidewalls 750 of the pillars 740-1 and 740-2 having a width (W), or horizontal thickness, of 10 nm or less. The structure is now as shown in Figure 7B. The next sequence of process steps is described in connection with Figure 7C. At this point another masking step, as the same has been described above, can be employed to isotropically etch the polysilicon 745 off of some of the sidewalls 750 and leave polysilicon 745 only on one sidewall of the pillars 740-1 and 740-2 if this is required by some particular configuration, e.g.
forming ultra thin body transistors only on one side of pillars 740-1 and 740-2. In Figure 7C, the embodiment for forming the ultra thin single crystalline vertical transistors, or ultra thin body transistors, only on one side of pillars 740-1 and 740-2 is shown. In Figure 7C, the wafer is heated at approximately 550 to 700 degrees Celsius. In this step, the polysilicon 745 will recrystallize and lateral epitaxial solid phase regrowth will occur vertically. As shown in Figure 7C, the single crystalline silicon at the bottom of the pillars 740-1 and 740-2 will seed this crystal growth and an ultrathin single crystalline film 746 will form which can be used as the channel of an ultra thin single crystalline vertical MOSFET transistor. In the embodiment of Figure 7C, where the film is left only on one side of the pillar, the crystallization will proceed vertically and into the n+ polysilicon second contact material/layer 716 on top of the pillars 740-1 and 740-2. If, however, both sides of the pillars 740-1 and 740-2 are covered, the crystallization will leave a grain boundary near the center on top of the pillars 740-1 and 740-2. This embodiment is shown in Figure 7D. As shown in Figures 7C and 7D, drain and source regions, 751 and 752 respectively, will be formed in the ultrathin single crystalline film 746 along the sidewalls 750 of the pillars 740-1 and 740-2 in the annealing process by an out diffusion of the n+ doping from the first and the second contact layers, 712 and 716. In the annealing process, these portions of the ultrathin single crystalline film 746, now with the n+ dopant, will similarly recrystallize into single crystalline structure as the lateral epitaxial solid phase regrowth occurs vertically. The drain and source regions, 751 and 752, will be separated by a vertical single crystalline body region 753 formed of the p-type material.
In one embodiment of the present invention, the vertical single crystalline body region will have a vertical length of less than 100 nm. The structure is now as shown in Figures 7C or 7D. As one of ordinary skill in the art will understand upon reading this disclosure, a conventional gate insulator can be grown or deposited on this ultrathin single crystalline film 746, and either horizontal or vertical gate structures can be formed in trenches 730. As one of ordinary skill in the art will understand upon reading this disclosure, drain and source regions, 751 and 752 respectively, have been formed in an ultrathin single crystalline film 746 to form a portion of the ultra thin single crystalline vertical transistors, or ultra thin body transistors, according to the teachings of the present invention. The ultrathin single crystalline film 746 now includes an ultra thin single crystalline vertical first source/drain region 751 coupled to the first contact layer 712 and an ultra thin single crystalline vertical second source/drain region 752 coupled to the second contact layer 716. An ultra thin p-type single crystalline vertical body region 753 remains alongside, or opposite, the oxide layer 714 and couples the first source/drain region 751 to the second source/drain region 752. In effect, the ultra thin p-type single crystalline vertical body region 753 separates the drain and source regions, 751 and 752 respectively, and can electrically couple the drain and source regions, 751 and 752, when a channel is formed therein by an applied potential. The drain and source regions, 751 and 752 respectively, and the ultra thin body region 753 are formed of single crystalline material by the lateral solid phase epitaxial regrowth which occurs in the annealing step.
The dimensions of the structure now include an ultra thin single crystalline body region 753 having a vertical length of less than 100 nm in which a channel having a vertical length of less than 100 nm can be formed. Also, the dimensions include drain and source regions, 751 and 752 respectively, having a junction depth defined by the horizontal thickness of the ultrathin single crystalline film 746, e.g. less than 10 nm. Thus, the invention has provided junction depths which are much less than the channel length of the device and which are scalable as design rules further shrink. Further, the invention has provided a structure for transistors with ultra thin bodies so that a surface space charge region in the body of the transistor scales down as other transistor dimensions scale down. In effect, the surface space charge region has been minimized by physically making the body region of the MOSFET ultra thin, e.g. 10 nm or less. One of ordinary skill in the art will further understand upon reading this disclosure that the conductivity types described herein can be reversed by altering doping types such that the present invention is equally applicable to include structures having ultra thin vertically oriented single crystalline p-channel type transistors. The invention is not so limited. From the process descriptions described above, the fabrication process can continue to form a number of different horizontal and vertical gate structure embodiments in the trenches 730, as described in connection with the Figures below. Figures 8A-8C illustrate a process sequence for forming a horizontal gate structure embodiment, referred to herein as horizontal replacement gates, in connection with the present invention. The dimensions suggested in the following process steps are appropriate to a 0.1 micrometer CD technology and may be scaled accordingly for other CD sizes. Figure 8A represents a structure similar to that shown in Figure 7C.
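The stated dimensions make the junction-depth claim easy to verify arithmetically: the junction depth is fixed by the film's horizontal thickness (10 nm or less) rather than a diffusion profile, so it is at most a tenth of the sub-100 nm channel length. A minimal sketch using the upper bounds from the text:

```python
# Illustrative comparison of the stated device dimensions. The junction
# depth equals the ultrathin film's horizontal thickness, not a diffusion
# depth, so it stays well below the channel length. Upper-bound values
# are taken directly from the description.

FILM_THICKNESS_NM = 10    # ultrathin single crystalline film 746 (<= 10 nm)
CHANNEL_LENGTH_NM = 100   # vertical body region 753 (< 100 nm)

junction_depth_nm = FILM_THICKNESS_NM  # set by film thickness
ratio = junction_depth_nm / CHANNEL_LENGTH_NM

# Even at the upper bounds, junction depth is at most a tenth of the
# channel length; thinner films only improve the ratio.
assert ratio <= 0.1
```

Because both quantities scale with the deposited film thickness rather than with lithography, this ratio is preserved as design rules shrink, which is the scalability point the passage makes.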
That is, Figure 8A shows an ultrathin single crystalline film 846 along the sidewalls 850 of pillars 840-1 and 840-2 in trenches 830. The ultrathin single crystalline film 846 at this point includes an ultra thin single crystalline vertical first source/drain region 851 coupled to a first contact layer 812 and an ultra thin single crystalline vertical second source/drain region 852 coupled to a second contact layer 816. An ultra thin p-type single crystalline vertical body region 853 is present alongside, or opposite, an oxide layer 814 and couples the first source/drain region 851 to the second source/drain region 852. According to the process embodiment shown in Figure 8A, an n+ doped oxide layer 821, or PSG layer as the same will be known and understood by one of ordinary skill in the art, is deposited over the pillars 840-1 and 840-2, such as by a CVD technique. This n+ doped oxide layer 821 is then planarized to remove it from the top surface of the pillars 840-1 and 840-2. An etch process is performed to leave about 50 nm at the bottom of trench 830. Next, an undoped polysilicon layer 822 or undoped oxide layer 822 is deposited over the pillars 840-1 and 840-2 and CMP planarized to again remove it from the top surface of the pillars 840-1 and 840-2. Then, the undoped polysilicon layer 822 is etched, such as by RIE, to leave a thickness of 100 nm or less in the trench 830 alongside, or opposite, oxide layer 814. Next, another n+ doped oxide layer 823, or PSG layer as the same will be known and understood by one of ordinary skill in the art, is deposited over the pillars 840-1 and 840-2, such as by a CVD process. The structure is now as appears in Figure 8A. Figure 8B illustrates the structure following the next sequence of fabrication steps. In Figure 8B, a heat treatment is applied to diffuse the n-type dopant out of the PSG layers, e.g.
821 and 823 respectively, into the vertical ultrathin single crystalline film 846 to additionally form the drain and source regions, 851 and 852 respectively. Next, as shown in Figure 8B, a selective etch is performed, as the same will be known and understood by one of ordinary skill in the art upon reading this disclosure, to remove the top PSG layer 823 and the undoped polysilicon layer 822, or oxide layer 822, in the trench 830. The structure is now as appears in Figure 8B. Next, in Figure 8C, a thin gate oxide 825 is grown, as the same will be known and understood by one of ordinary skill in the art, such as by thermal oxidation, for the ultra thin single crystalline vertical transistors, or ultra thin body transistors, on the surface of the ultra thin single crystalline vertical body region 853 for those transistors in alternating, row adjacent pillars which will be connected to trench wordlines for completing the folded bit line DRAM device. Next, a doped n+ type polysilicon layer 842 can be deposited to form a gate 842 for the ultra thin single crystalline vertical transistors, or ultra thin body transistors. The structure then undergoes a CMP process to remove the doped n+ type polysilicon layer 842 from the top surface of the pillars 840-1 and 840-2 and is RIE etched to form the desired thickness of the gate 842 for the ultra thin single crystalline vertical transistors, or ultra thin body transistors. In one embodiment, the doped n+ type polysilicon layer 842 is RIE etched to form an integrally formed, horizontally oriented word line/gate having a vertical side of less than 100 nanometers opposing the ultra thin single crystalline vertical body region 853. Next, an oxide layer 844 is deposited, such as by a CVD process, and planarized by a CMP process to fill trenches 830. An etch process is performed, according to the techniques described above, to strip the nitride layer 820 from the structure.
This can include a phosphoric etch process using phosphoric acid. The structure is now as shown in Figure 8C. As one of ordinary skill in the art will understand upon reading this disclosure, contacts can be formed to the second contact layer 816 on top of the pillars 840-1 and 840-2 to continue with capacitor formation and standard BEOL processes. Figures 9A-9C illustrate a process sequence for forming a vertical gate structure embodiment according to the teachings of the present invention. The dimensions suggested in the following process steps are appropriate to a 0.1 micrometer CD technology and may be scaled accordingly for other CD sizes. Figure 9A represents a structure similar to that shown in Figure 7C. That is, Figure 9A shows an ultrathin single crystalline film 946 along the sidewalls 950 of pillars 940-1 and 940-2 in trenches 930. The ultrathin single crystalline film 946 at this point includes an ultra thin single crystalline vertical first source/drain region 951 coupled to a first contact layer 912 and an ultra thin single crystalline vertical second source/drain region 952 coupled to a second contact layer 916. An ultra thin p-type single crystalline vertical body region 953 is present along side of, or opposite, an oxide layer 914 and couples the first source/drain region 951 to the second source/drain region 952. According to the process embodiment shown in Figure 9A, a conformal nitride layer of approximately 20 nm is deposited, such as by CVD, and directionally etched to leave it only on the sidewalls 950. An oxide layer is then grown, such as by thermal oxidation, to a thickness of approximately 50 nm in order to insulate the exposed bit line bars 902. The conformal nitride layer on the sidewalls 950 prevents oxidation along the ultrathin single crystalline film 946.
The nitride layer is then stripped, using conventional stripping processes as the same will be known and understood by one of ordinary skill in the art. The structure is now as appears in Figure 9A. As shown in Figure 9B, an intrinsic polysilicon layer 954 is deposited over the pillars 940-1 and 940-2 and in trenches 930 and then directionally etched to leave the intrinsic polysilicon layer 954 only on the vertical sidewalls of the pillars 940-1 and 940-2. A photoresist is applied and masked to expose pillar sides where device channels are to be formed, e.g. integrally formed wordline/gates on alternating, row adjacent pillars. In these locations, the intrinsic polysilicon layer 954 is selectively etched, as the same will be known and understood by one of ordinary skill in the art, to remove the exposed intrinsic polysilicon layer 954. Next, a thin gate oxide layer 956 is grown on the exposed sidewalls of the ultrathin single crystalline film 946 for the ultra thin single crystalline vertical transistors, or ultra thin body transistors. The structure is now as appears in Figure 9B. In Figure 9C, a wordline conductor of an n+ doped polysilicon material or suitable metal 960 is deposited, such as by CVD, to a thickness of approximately 50 nm or less. This wordline conductor 960 is then directionally etched to leave it only on the vertical sidewalls of the pillars, including on the thin gate oxide layers 956 of alternating, row adjacent pillars, in order to form separate vertical, integrally formed wordline/gates 960A and 960B. The structure is now as appears in Figure 9C. In Figure 9D, a brief oxide etch is performed to expose the top of the remaining intrinsic polysilicon layer 954. Then, a selective isotropic etch is performed, as the same will be known and understood by one of ordinary skill in the art, in order to remove all of the remaining intrinsic polysilicon layer 954.
An oxide layer 970 is deposited, such as by CVD, in order to fill the cavities left by removal of the intrinsic polysilicon layer and the spaces in the trenches 930 between the separate vertical wordlines 960A and 960B neighboring pillars 940-1 and 940-2. As mentioned above, the separate vertical wordlines will integrally form gates on alternating, row adjacent pillars. The oxide layer 970 is planarized by CMP to remove it from the top of the pillars 940-1 and 940-2, stopping on the nitride pad 920. Then the remaining pad material 918 and 920 is etched, such as by RIE, to remove it from the top of the pillars 940-1 and 940-2. Next, a CVD oxide 975 is deposited to cover the surface of the pillars 940-1 and 940-2. The structure is now as appears in Figure 9D. As one of ordinary skill in the art will understand upon reading this disclosure, the process can now proceed with storage capacitor formation and BEOL process steps. As one of ordinary skill in the art will understand upon reading this disclosure, the process steps described above produce integrally formed vertically oriented wordlines 960A and 960B which serve as integrally formed vertical gates along the sides of alternating, row adjacent pillars. This produces a folded bit line DRAM structure embodiment which is similar to the perspective view of Figure 4C and the cross sectional view taken along the direction of the bit lines in Figure 4D. CONCLUSION The above structures and fabrication methods have been described, by way of example and not by way of limitation, with respect to a folded bit line DRAM with ultra thin body transistors. Different types of gate structures are shown which can be utilized on three different types of substrates to form open bit line DRAM memory arrays. It has been shown that higher and higher density requirements in DRAMs result in smaller and smaller dimensions of the structures and transistors.
Conventional planar transistor structures are difficult to scale to the deep sub-micron dimensional regime. The present invention provides vertical access or transfer transistor devices which are fabricated in ultra-thin single crystalline silicon films grown along the sidewall of an oxide pillar. These transistors with ultra-thin body regions scale naturally to smaller and smaller dimensions while preserving the performance advantage of smaller devices. The advantages of smaller dimensions for higher density and higher performance are both achieved in folded bit line memory arrays.
Techniques for controlling input/output (I/O) power usage are disclosed. In an illustrative embodiment, a power policy engine of a computing device monitors power usage, I/O data transfer rate, and temperature, and determines when a change in I/O power settings should be made. Because the computing device consumes power to properly handle I/O data transfers, reducing the I/O data transfer rate reduces that power. The power policy engine may instruct a device driver, such as a driver of the I/O device, to change a data transfer rate of the I/O device, thereby reducing power spent by the computing device to handle the I/O.
1. A computing device for controlling power usage, the computing device comprising: a processor; a memory communicatively coupled to the processor; an I/O device; a data store including a device driver for the I/O device; and a power policy engine to: determine whether to change a power setting of the computing device; and instruct the device driver to change power consumption caused by operation of the I/O device, wherein the device driver is configured to change the operation of the I/O device in response to an instruction of the power policy engine so as to change the power consumption caused by the operation of the I/O device.
2. The computing device of claim 1, wherein determining whether to change a power setting of the computing device comprises determining whether a current temperature of the computing device exceeds a threshold.
3. The computing device of any of claims 1 or 2, wherein changing the operation of the I/O device to change the power consumption caused by the operation of the I/O device comprises changing a data transfer speed of the I/O device.
4. The computing device of claim 3, wherein the I/O device is a data storage device, and wherein changing a data transfer speed of the I/O device comprises changing a data storage rate of the data storage device.
5. The computing device of claim 3, wherein the I/O device is a communication circuit, and wherein changing a data transfer speed of the I/O device comprises changing a network data rate of the communication circuit.
6. The computing device of claim 1, wherein instructing the device driver to change power consumption caused by operation of the I/O device comprises instructing the device driver to place the I/O device into one of a plurality of predefined power states.
7. The computing device of claim 6, wherein the computing device is configured to enumerate a plurality of devices upon startup of the computing device to determine whether each device supports being placed into the plurality of power states.
8. The computing device of any of claims 6-7, wherein changing the operation of the I/O device to change the power consumption caused by the operation of the I/O device comprises changing a power delivery contract with the I/O device.
9. The computing device of any of claims 1, 2, 6, or 7, wherein the I/O device is a storage device, a communication circuit, or a graphics processor.
10. A method for controlling power usage, the method comprising: determining, by a power policy engine of a computing device, whether to change a power setting of the computing device; instructing, by the power policy engine, a device driver for an I/O device of the computing device to change power consumption caused by operation of the I/O device; and changing, by the device driver and in response to instructions from the power policy engine, the operation of the I/O device to change the power consumption caused by the operation of the I/O device.
11. The method of claim 10, wherein determining whether to change a power setting of the computing device comprises determining whether a current temperature of the computing device exceeds a threshold.
12. The method of any of claims 10-11, wherein changing the operation of the I/O device to change the power consumption caused by the operation of the I/O device comprises changing a data transfer speed of the I/O device.
13. The method of any of claims 10-11, wherein instructing the device driver to change power consumption caused by operation of the I/O device comprises instructing the device driver to place the I/O device into one of a plurality of predefined power states.
14. One or more computer-readable media including a plurality of instructions stored thereon that, when executed by a computing device, cause the computing device to: determine, by a power policy engine of the computing device, whether to change a power setting of the computing device; instruct, by the power policy engine, a device driver for an I/O device of the computing device to change power consumption caused by operation of the I/O device; and change, by the device driver and in response to instructions from the power policy engine, the operation of the I/O device to change the power consumption caused by the operation of the I/O device.
15. The one or more computer-readable media of claim 14, wherein determining whether to change a power setting of the computing device comprises determining whether a current temperature of the computing device exceeds a threshold.
16. The one or more computer-readable media of claim 14, wherein changing the operation of the I/O device to change the power consumption caused by the operation of the I/O device comprises changing a data transfer speed of the I/O device.
17. The one or more computer-readable media of any of claims 14-16, wherein changing the operation of the I/O device to change the power consumption caused by the operation of the I/O device comprises changing a data transfer speed of the I/O device.
18. The one or more computer-readable media of any of claims 14-16, wherein instructing the device driver to change power consumption caused by operation of the I/O device comprises instructing the device driver to place the I/O device into one of a plurality of predefined power states.
19. The one or more computer-readable media of claim 18, wherein the plurality of instructions further cause the computing device to: enumerate a plurality of devices upon startup of the computing device to determine, for each of the plurality of devices, whether the corresponding device supports being placed into the plurality of power states, wherein the plurality of devices includes the I/O device.
20. The one or more computer-readable media of any of claims 14-16, wherein the plurality of instructions further cause the computing device to: in response to determining that an I/O power setting of the computing device should be changed, instruct the device driver to change a power delivery contract with the I/O device.
21. A computing device for dynamic input/output (I/O) scaling, the computing device comprising: means for determining, by a power policy engine of the computing device, whether an I/O power setting of the computing device should be changed; means for instructing, by the power policy engine and in response to determining that the I/O power setting of the computing device should be changed, a device driver for an I/O device of the computing device to change a data transfer rate of the I/O device; and means for changing, by the device driver and in response to instructions from the power policy engine, the data transfer rate of the I/O device.
22. The computing device of claim 21, wherein the means for determining whether the I/O power setting of the computing device should be changed comprises means for determining whether a current temperature of the computing device exceeds a threshold.
23. The computing device of claim 22, wherein the I/O device is a data storage device, and wherein the means for changing the data transfer rate of the I/O device comprises means for changing a data storage rate of the data storage device.
24. The computing device of claim 22, wherein the I/O device is a communication circuit, and wherein the means for changing the data transfer rate of the I/O device comprises means for changing a network data rate of the communication circuit.
25. The computing device of any of claims 21-24, wherein the means for instructing the device driver to change the data transfer rate of the I/O device comprises means for instructing the device driver to place the I/O device into one of a plurality of predefined power states.
TECHNIQUES FOR DYNAMIC INPUT/OUTPUT SCALING
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of provisional application 63/065,858, filed August 14, 2020, entitled "Dynamic I/O Scaling," which is incorporated by reference in its entirety.
BACKGROUND
In computing devices such as systems-on-chip (SoCs), power density and thermal management can be challenges. Increasing the number of I/O ports and I/O bandwidth exacerbates the problem by increasing the SoC's power consumption when in use. On-chip computing power can be controlled through techniques such as dynamic voltage and frequency scaling (DVFS), but throttling I/O interconnects by introducing low-power link states or reducing data rates can lead to data loss, poor user experience, or equipment failure. In current platform architectures, device DVFS is generally responsible for thermal management of the device and not for throttling the interconnect.
BRIEF DESCRIPTION OF THE DRAWINGS
In the accompanying drawings, the concepts described herein are illustrated by way of example and not by way of limitation. For simplicity and clarity of illustration, elements illustrated in the figures have not necessarily been drawn to scale. Where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements.
FIG. 1 is a simplified block diagram of at least one embodiment of a computing device for controlling input/output power usage;
FIG. 2 is a simplified block diagram of at least one embodiment of an environment that may be established by the computing device of FIG. 1; and
FIGS. 3-4 are simplified flow diagrams of at least one embodiment of a method for controlling input/output power usage that may be performed by the computing device of FIG.
1.
DETAILED DESCRIPTION
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments of the present disclosure have been illustrated by way of example in the accompanying drawings and will herein be described in detail. It should be understood, however, that there is no intention to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc. indicate that the described embodiment may include a particular feature, structure, or characteristic, but each embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in conjunction with one embodiment, it is considered within the purview of those skilled in the art to implement such feature, structure, or characteristic in conjunction with other embodiments, whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, an item listed in the form "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors.
A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or another media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. It should be appreciated, however, that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments, and in some embodiments such feature may not be included or may be combined with other features.
Referring now to FIG. 1, an illustrative computing device 100 is configured to control I/O power usage by instructing an input/output (I/O) device driver to change the data transfer rate of an I/O device as necessary. For example, the power policy engine 202 (see FIG. 2) may instruct a device driver for a storage device to throttle I/O data transfers. The device driver may then delay I/O requests to the storage device, thereby reducing power consumption of the computing device 100 caused by I/O traffic from the storage device. It should be appreciated that the power policy engine 202 need not be aware of the specific protocol used by the storage device, and need not be aware of the specific action a device driver may take to reduce power consumption.
Instead, the details of how power consumption is changed can be implemented by the device driver without additional involvement of the power policy engine.
Additionally, in some embodiments, the power policy engine 202 may use dynamic voltage and frequency scaling to control, for example, the power usage of the processor 102 to stay within the power constraints of the computing device 100. It should be appreciated, however, that forcibly throttling the I/O interconnect by introducing lower power link states or reducing data rates may result in data loss, poor user experience, or equipment failure. As such, the power policy engine 202 may manage the power impact of I/O devices on the computing device 100 by instructing device drivers to change their data transfer rates.
As used herein, an I/O device is any device that provides input and/or output to the processor 102, the memory 104, or other components of the computing device 100. For example, an I/O device may refer to the storage device 108, the communication circuitry 110, the graphics processor 112, external or internal bus-connected devices (such as USB devices, PCIe-connected devices, Thunderbolt-connected devices), and the like.
Computing device 100 may be embodied as any type of computing device. For example, without limitation, computing device 100 may be embodied as or otherwise be included in a server computer, an embedded computing system, a system on a chip (SoC), a multiprocessor system, a processor-based system, consumer electronics, a smart phone, a cellular phone, a desktop computer, a tablet computer, a notebook computer, a laptop computer, a network device, a router, a switch, a networked computer, a wearable computer, a handheld device, a messaging device, a camera device, and/or any other computing device. The illustrative computing device 100 includes a processor 102, a memory 104, an input/output (I/O) subsystem 106, a data storage 108, communication circuitry 110, a graphics processor 112, a display 114, and one or more peripheral devices 116.
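The driver-mediated approach described earlier in this section, in which the policy engine only asks a driver to scale its transfer rate and each driver decides for itself how to comply, can be sketched as a protocol-agnostic interface. The class and method names below are illustrative assumptions, not part of the disclosure.

```python
from abc import ABC, abstractmethod

class ThrottleableDriver(ABC):
    """Hypothetical protocol-agnostic driver interface: the policy
    engine calls set_transfer_scale() without knowing the protocol."""

    @abstractmethod
    def set_transfer_scale(self, scale: float) -> None:
        """Scale the I/O data transfer rate to `scale` (0.0-1.0 of full rate)."""

class StorageDriver(ThrottleableDriver):
    def __init__(self):
        self.delay_ms = 0

    def set_transfer_scale(self, scale: float) -> None:
        # A storage driver might comply by delaying queued I/O requests.
        self.delay_ms = 0 if scale >= 1.0 else int((1.0 - scale) * 10)

class NetworkDriver(ThrottleableDriver):
    def __init__(self):
        self.link_rate_mbps = 1000

    def set_transfer_scale(self, scale: float) -> None:
        # A network driver might instead renegotiate a slower link speed.
        self.link_rate_mbps = int(1000 * scale)

# The policy engine can treat both drivers uniformly:
for drv in (StorageDriver(), NetworkDriver()):
    drv.set_transfer_scale(0.5)
```

The engine never touches `delay_ms` or `link_rate_mbps` directly; those details stay inside each driver, mirroring the separation of concerns described above.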
In some embodiments, one or more of the illustrative components of computing device 100 may be incorporated in, or otherwise form a portion of, another component. For example, in some embodiments, the memory 104 or portions thereof may be incorporated in the processor 102. In some embodiments, one or more of the illustrative components may be physically separate from another component. For example, in one embodiment, an SoC with the processor 102 and the memory 104 may be connected to a data storage 108 external to the SoC through a universal serial bus (USB) connector.
The processor 102 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 102 may be embodied as a single-core or multi-core processor(s), a single-socket or multi-socket processor, a digital signal processor, a graphics processor, a neural network compute engine, an image processor, a microcontroller, or other processor or processing/control circuit. Similarly, the memory 104 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 104 may store various data and software used during operation of the computing device 100, such as operating systems, applications, programs, libraries, and drivers. The memory 104 may be communicatively coupled to the processor 102 via the I/O subsystem 106, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 102, the memory 104, and other components of the computing device 100. For example, the I/O subsystem 106 may be embodied as, or otherwise include, a memory controller hub, an input/output control hub, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
The I/O subsystem 106 may connect the various internal and external components of the computing device 100 to each other using any suitable connectors, interconnects, buses, protocols, etc. (such as an SoC fabric, USB2, USB3, USB4, etc.). In some embodiments, the I/O subsystem 106 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 102, the memory 104, and other components of the computing device 100, on a single integrated circuit chip.
The data storage 108 may be embodied as any type of device or devices configured for short-term or long-term storage of data. For example, the data storage 108 may include any one or more memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
The communication circuitry 110 may be embodied as any type of interface capable of interfacing the computing device 100 with other computing devices, such as over one or more wired or wireless connections. In some embodiments, the communication circuitry 110 may be capable of interfacing with any appropriate cable type, such as an electrical or fiber optic cable. The communication circuitry 110 may be configured to use any one or more communication technologies and associated protocols (e.g., Ethernet, WiMAX, Near Field Communication (NFC), etc.). The communication circuitry 110 may be located on silicon separate from the processor 102, or the communication circuitry 110 may be included in a multi-chip package with the processor 102, or even on the same die as the processor 102. The communication circuitry 110 may be embodied as one or more add-in boards, daughter cards, network interface cards, controller chips, chipsets, application-specific components such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), or other devices that may be used by the computing device 100 to connect with another computing device.
In some embodiments, the communication circuitry 110 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or the communication circuitry 110 may be included in a multi-chip package that also includes one or more processors. In some embodiments, the communication circuitry 110 may include a local processor (not shown) and/or a local memory (not shown), both local to the communication circuitry 110. In such embodiments, the local processor of the communication circuitry 110 may be capable of performing one or more of the functions of the processor 102 described herein. Additionally or alternatively, in such embodiments, the local memory of the communication circuitry 110 may be integrated into one or more components of the computing device 100 at the board level, socket level, chip level, and/or other levels.
The graphics processor 112 is configured to perform graphics computations, such as rendering graphics to be displayed on the display 114. Additionally or alternatively, in some embodiments, the graphics processor 112 may perform general computing tasks and/or may perform offloaded tasks for which the graphics processor 112 is well suited, such as massively parallel operations. The graphics processor 112 may be embodied as any type of processor capable of performing the functions described herein. For example, the graphics processor 112 may be embodied as a single-core or multi-core processor(s), a single-socket or multi-socket processor, a digital signal processor, a microcontroller, or other processor or processing/control circuit.
The display 114 may be embodied as any type of display on which information may be displayed to a user of the computing device 100, such as a touch screen display, a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, a plasma display, an image projector (e.g., 2D or 3D), a laser projector, a heads-up display, and/or other display technology.
In some embodiments, computing device 100 may have more than one display 114 connected to the computing device. Computing device 100 may be capable of disconnecting some or all of the displays 114, such as to reduce power used by the displays 114. Similarly, in some embodiments, computing device 100 may be capable of changing various parameters of some or all of the displays 114 to reduce power usage, such as changing refresh rates, resolutions, and the like.
In some embodiments, computing device 100 may include other or additional components (such as those typically found in computing devices). For example, computing device 100 may also have peripheral devices 116, such as keyboards, mice, speakers, microphones, external storage devices, and the like. In some embodiments, computing device 100 can be connected to a docking device that can interface with various devices, including the peripheral devices 116.
In an illustrative embodiment, various components of computing device 100 may be capable of monitoring the current power usage and/or current temperature of the corresponding components. For example, the processor 102, the memory 104, etc., may have integrated circuits or components capable of determining the power usage and/or temperature of the processor 102, the memory 104, respectively. Additionally or alternatively, computing device 100 may have separate components that measure the power and/or temperature of the components shown in FIG. 1.
Referring now to FIG. 2, in an illustrative embodiment, computing device 100 establishes an environment 200 during operation. The illustrative environment 200 includes a power policy engine 202, a power management controller 204, a power delivery controller 206, and a device driver 208. The various modules of the environment 200 may be embodied in hardware, software, firmware, or a combination thereof.
For example, the modules, logic, and other components of the environment 200 may form a portion of, or otherwise be established by, the processor 102 or other hardware components of the computing device 100, such as the memory 104, the data storage 108, and the like. As such, in some embodiments, one or more of the modules of the environment 200 may be embodied as circuitry or collections of electrical devices (e.g., power policy engine circuitry 202, power management controller circuitry 204, power delivery controller circuitry 206, etc.). It should be appreciated that, in such embodiments, one or more of the circuits (e.g., the power policy engine circuitry 202, the power management controller circuitry 204, the power delivery controller circuitry 206, etc.) may form a portion of one or more of the processor 102, the memory 104, the I/O subsystem 106, the data storage 108, and/or other components of the computing device 100. For example, in some embodiments, some or all of these modules may be embodied as the processor 102 and a memory 104 and/or data storage 108 storing instructions to be executed by the processor 102. Additionally, in some embodiments, one or more of the illustrative modules may form a portion of another module, and/or one or more of the illustrative modules may be independent of one another. Furthermore, in some embodiments, one or more of the modules of the environment 200 may be embodied as virtualized hardware components or emulated architectures that may be established and maintained by the processor 102 or other components of the computing device 100.
It should be appreciated that some of the functions of one or more of the modules of environment 200 may require hardware implementation, in which case embodiments of the modules implementing such functions will be embodied, at least in part, in hardware.Power policy engine 202 , which may be embodied as hardware, firmware, software, virtualized hardware, emulation architecture, and/or combinations thereof, as discussed above, is configured to manage the overall power policy of computing device 100 . Power policy engine 202 receives power and operational information from various components of computing device 100, and instructs components of computing device 100 to change power consumption or change I/O data transfer rates as necessary. Power policy engine 202 may receive information from, for example, power management controller 204, power delivery controller 206, device drivers 208, various components of computing device 100, and the like.Power policy engine 202 may receive power information from any suitable components in any suitable manner. For example, the power policy engine 202 may receive current power usage, current temperature, and/or the current I/O data transfer rate. It should be appreciated that I/O data transfers to devices external to computing device 100 result in power usage of computing device 100 in order to properly handle the I/O data transfers. As a result, reducing the I/O data transfer rate reduces the power spent processing the I/O, thereby freeing up power for other components such as the processor 102 . In some embodiments, such as those in which computing device 100 is battery operated, power policy engine 202 may monitor the power provided by computing device 100 to a device, such as an external storage device connected to a Type-C USB port. 
In some embodiments, power policy engine 202 may monitor current power usage based on instructions previously sent to various components of computing device 100, and may not require receipt of any additional information to determine current power usage levels.

Similarly, power policy engine 202 may receive operational information from any suitable component in any suitable manner. For example, power policy engine 202 may receive information related to current or future workloads of processor 102, data storage 108, communication circuitry 110, graphics processor 112, and the like. This information may include workload volume, workload type, workload priority, workload dependencies on other components, and the like. Operational information for data storage 108 and/or communication circuitry 110 may include queue depths, bandwidth rates, and the like.

The power policy engine 202 may process the power information and operational information to determine whether to make changes to power settings. The power policy engine 202 may process the information in any suitable manner, such as by comparing power usage or temperature or I/O data transfer rates to thresholds. In some embodiments, the power policy engine 202 may monitor current and past power usage to determine whether to make changes to power settings. For example, power policy engine 202 may determine that computing device 100 has been in a higher power state for a predetermined amount of time, such as any time between 1 millisecond and 1,000 seconds. The power policy engine 202 may calculate whether power usage exceeds a threshold in any suitable manner, such as by comparing current power usage to a threshold, integrating past power usage over a particular time frame, calculating expected thermal effects, etc. The power policy engine 202 may also monitor operational information such as frequency, voltage, priority, and I/O bandwidth and throughput to determine whether to make changes to power settings.
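The threshold checks just described — comparing instantaneous power usage to a limit and integrating past power usage over a particular time frame — could be sketched as follows. This is a minimal illustration only; the class name, method names, and numeric limits are assumptions and not part of the disclosure.

```python
from collections import deque

class PowerUsageMonitor:
    """Illustrative sketch of the checks a power policy engine might apply:
    an instantaneous power limit, plus integration of recent power usage
    over a sliding time window to catch sustained draw."""

    def __init__(self, instant_limit_w, window_s, energy_limit_j):
        self.instant_limit_w = instant_limit_w
        self.window_s = window_s
        self.energy_limit_j = energy_limit_j
        self.samples = deque()  # (timestamp_s, power_w) pairs

    def record(self, timestamp_s, power_w):
        self.samples.append((timestamp_s, power_w))
        # Drop samples that have aged out of the integration window.
        while self.samples and timestamp_s - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def should_reduce_io(self):
        if not self.samples:
            return False
        # Instantaneous check against the power limit.
        if self.samples[-1][1] > self.instant_limit_w:
            return True
        # Integrate past power usage over the window (trapezoidal sum).
        energy_j = 0.0
        for (t0, p0), (t1, p1) in zip(self.samples, list(self.samples)[1:]):
            energy_j += (p0 + p1) / 2.0 * (t1 - t0)
        return energy_j > self.energy_limit_j
```

A spike above the instantaneous limit triggers a reduction immediately, while sustained usage below that limit can still trigger one once the integrated energy over the window exceeds its budget.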
For example, the power policy engine 202 may allow high power usage to handle high priority tasks, and then reduce power usage when the high power tasks are completed.

The power policy engine 202 is configured to instruct other components of the computing device 100 to change operations when the power policy engine 202 determines that a change in the I/O power setting is appropriate. In the illustrative embodiment, power policy engine 202 instructs device driver 208 to change the data transfer rate of the corresponding I/O device. For example, power policy engine 202 may instruct storage driver 210 to delay or otherwise throttle I/O operations on data storage 108. The power policy engine 202 may instruct the network driver 212 to delay outgoing packets, or may instruct the network driver 212 to slow down the connection. The power policy engine 202 may instruct the graphics driver 214 to render graphics at a lower frame rate, refresh the display 114 at a lower refresh rate, render graphics at a lower resolution, display graphics on the display 114 at a lower resolution, etc. In some embodiments, the power policy engine 202 may instruct the device driver 208 to reduce the data transfer rate (or increase the available data transfer limit) of the corresponding I/O device without providing specific instructions on how to reduce the data transfer rate, with the details left to the specific device driver 208. It should be appreciated that the power policy engine 202 does not require any knowledge of how data transfer rates may be reduced for a particular communication protocol, thereby allowing the power policy engine 202 to interface with any suitable device driver 208 for any suitable interface, protocol, connector, interconnect, etc.
In some embodiments, the power policy engine 202 may instruct the device driver 208 to reduce the data transfer rate without providing a specific amount by which the device driver 208 should reduce the data transfer.

In some embodiments, power policy engine 202 may send indications of device power states, such as power states D0, D1, D2, D3, etc., including possible power substates such as D0ix, D3 hot, D3 cold, etc., to some or all of device drivers 208. In some embodiments, some device drivers 208 and/or corresponding devices may support those power states, and some may not.

Additionally or alternatively, in some embodiments, power policy engine 202 may instruct other components (such as processor 102, memory 104, and/or graphics processor 112) to reduce power usage. For example, power policy engine 202 may instruct processor 102, memory 104, and/or graphics processor 112 to implement dynamic voltage and frequency scaling. In some embodiments, the power policy engine 202 may instruct the memory controller to reduce power usage, such as by delaying memory read or write requests, throttling memory bandwidth, and the like.

The power policy engine 202 may have one or more user-defined policies that control power and thermal management. These policies can define average power usage, maximum power usage, average I/O data transfer rate, maximum I/O data transfer rate, various thresholds for when to reduce or increase the availability of I/O bandwidth for various components, various actions to take when various thresholds are exceeded, etc. It should be appreciated that in addition to instructing components to reduce the I/O data transfer rate, the power policy engine 202 may instruct a component to increase the I/O data transfer rate or notify a component that an increased I/O data transfer rate is permitted. In some embodiments, the power policy may define under what circumstances different components should be prioritized.
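A user-defined policy of the kind described above could, for instance, be represented as a simple record of limits and component priorities. All field names and the prioritization rule below are hypothetical illustrations, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class PowerPolicy:
    """Hypothetical representation of a user-defined power policy:
    average/maximum power limits, average/maximum I/O data rate limits,
    a temperature threshold, and an ordered list of component priorities."""
    average_power_limit_w: float
    max_power_limit_w: float
    average_io_rate_limit_mbps: float
    max_io_rate_limit_mbps: float
    temperature_threshold_c: float
    # Higher-priority components keep their power longer when throttling.
    component_priority: list = field(
        default_factory=lambda: ["processor", "storage", "network", "graphics"])

    def components_to_throttle(self, current_power_w):
        """Return components to throttle, lowest priority first, when the
        maximum power limit is exceeded; otherwise return nothing."""
        if current_power_w <= self.max_power_limit_w:
            return []
        return list(reversed(self.component_priority))
```

The priority ordering captures the behavior described below: a busy, high-priority processor keeps its power at the expense of I/O components, while a lightly loaded processor is throttled first.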
For example, if the processor 102 has a high workload volume or high workload priority, additional power may be provided to the processor 102 at the expense of reduced I/O data transfers for I/O components. Alternatively, if the processor 102 does not have a high workload or the processor 102 has a low priority workload, less power may be provided to the processor 102 and more power may be used to service I/O data transfers for the I/O components.

The illustrative power management controller 204 (which may be embodied as hardware, firmware, software, virtualized hardware, an emulation architecture, and/or a combination thereof as discussed above) is configured to manage power associated with certain components of the computing device 100, such as processor 102, memory 104, graphics processor 112, etc. In the illustrative embodiment, power management controller 204 may be integrated on a SoC along with processor 102, memory 104, and graphics processor 112. The power management controller 204 may communicate with the power policy engine 202, which is not integrated into the SoC. In other embodiments, the power management controller 204 and the power policy engine 202 may be on the same chip or integrated into the same component. In some embodiments, power management controller 204 may be partially or fully integrated into power policy engine 202.

The power management controller 204 is configured to communicate with the power policy engine 202 to provide the power and operational information discussed above. For example, power management controller 204 may send current power usage, current data transfer usage, and/or current temperature from components such as processor 102, communication circuitry 110, graphics processor 112, power monitoring circuitry of computing device 100, temperature monitoring circuitry of computing device 100, etc., to the power policy engine 202.
Similarly, power management controller 204 may send operational information to power policy engine 202 regarding current or future workloads of processor 102, data storage 108, communication circuitry 110, graphics processor 112, and the like.

The power management controller 204 is also configured to receive and implement instructions from the power policy engine 202 to change the current power consumption. For example, power management controller 204 may receive an instruction to enter a certain power state, such as power state D0, D1, D2, D3, etc. Power management controller 204 may also be configured to receive and implement instructions for implementing dynamic voltage and frequency scaling associated with processor 102, memory 104, and/or graphics processor 112.

Power delivery controller 206 (which may be embodied as hardware, firmware, software, virtualized hardware, emulation architecture, and/or combinations thereof as discussed above) is configured to manage power delivery to devices powered by computing device 100, such as USB3 drives, USB4 drives, USB4 local area network (LAN) devices, etc. The power delivery controller 206 may send information to the power policy engine 202 from each or all of the connected devices indicating the current power delivery contract and/or the current power usage. Power delivery controller 206 may also receive instructions from power policy engine 202 to reduce power delivery. The power delivery controller 206 may then renegotiate the power delivery contract with the various connected devices.
In some embodiments, the power delivery controller 206 may cut off power delivery to some or all of the connected devices if a power delivery contract cannot be agreed within the constraints provided by the power policy engine 202.

Device drivers 208 (which may be embodied as hardware, firmware, software, virtualized hardware, emulation architectures, and/or combinations thereof as discussed above) are configured to manage the various devices of computing device 100 and are specifically configured to manage the I/O data transfer rates of the various devices of computing device 100. As discussed above with respect to the power policy engine 202, the device drivers 208 may send power and operational information to the power policy engine 202. The device drivers 208 may also receive instructions from the power policy engine 202 to change the I/O data transfer rate of the various devices managed by the device drivers 208. Device drivers 208 may include any suitable drivers, such as storage driver 210, network driver 212, and/or graphics driver 214. When the computing device 100 is powered on, a device driver 208 may indicate the power management capabilities of the device driver 208 to the operating system or other components of the computing device 100 during enumeration.

Storage driver 210 may provide power and operational information to power policy engine 202. Power and operational information may include I/O data transfer rates, device power ratios, bus link status, queue depth, and the like. The power policy engine 202 may instruct the storage driver 210 to change the I/O data transfer rate in any suitable manner, such as by reducing the storage data rate, reducing the device power ratio, changing the bus link state, etc. In some embodiments, storage driver 210 may determine how to manage an I/O data transfer rate reduction without any specific instruction from power policy engine 202 other than an instruction to reduce the data transfer rate.
The storage driver 210 may control, for example, a USB3 drive, an NVMe drive, a USB4 drive, and the like.

Network driver 212 may provide power and operational information to power policy engine 202. Power and operational information may include networking data rates, device power ratios, bus link status, queue depths, and the like. The power policy engine 202 may instruct the network driver 212 to reduce the data transfer rate in any suitable manner, such as by reducing the network data rate, reducing the device power ratio, changing the bus link state, etc. The network driver 212 may, for example, reduce the WiFi bandwidth rate, change the LAN bandwidth rate, change the 5G cellular bandwidth rate, and the like. In some embodiments, network driver 212 may determine how to manage a data transfer rate reduction without any specific instruction from power policy engine 202 other than an instruction to reduce the data transfer rate. The network driver 212 can control, for example, LAN, WiFi, 5G, USB4 LAN, and the like.

Graphics driver 214 may provide power and operational information to power policy engine 202. Power and operating information may include frame rate, refresh rate, resolution, number of connected displays 114, and the like. The power policy engine 202 may instruct the graphics driver 214 to reduce data transfers in any suitable manner (e.g., by reducing the refresh rate, reducing the frame rate, reducing the resolution, or reducing the number of displays 114 in use). Graphics driver 214 may disconnect a display 114 or may instruct the user to disconnect a display 114 or turn off a display 114. In some embodiments, graphics driver 214 may determine how to manage a data transfer rate reduction without any specific instruction from power policy engine 202 other than an instruction to reduce the data transfer rate.
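The driver model described above — in which the power policy engine issues a generic "reduce the data transfer rate" instruction and each driver decides the protocol-specific details — might be sketched as follows. The class and method names are assumptions for illustration, not an actual driver API.

```python
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    """Sketch of the abstraction: the power policy engine needs no
    knowledge of how a particular protocol throttles I/O."""

    @abstractmethod
    def reduce_data_transfer_rate(self) -> str:
        """Throttle I/O in a protocol-specific way; return a description."""

class StorageDriver(DeviceDriver):
    def reduce_data_transfer_rate(self) -> str:
        # e.g. delay queued commands or reduce the device power ratio
        return "storage: throttling queued I/O operations"

class NetworkDriver(DeviceDriver):
    def reduce_data_transfer_rate(self) -> str:
        # e.g. delay outgoing packets or slow down the connection
        return "network: delaying outgoing packets"

class GraphicsDriver(DeviceDriver):
    def reduce_data_transfer_rate(self) -> str:
        # e.g. lower the frame rate, refresh rate, or resolution
        return "graphics: lowering frame rate"

def throttle_all(drivers):
    # The policy engine issues one generic instruction to every driver.
    return [d.reduce_data_transfer_rate() for d in drivers]
```

Because the instruction is uniform, the engine can interface with any driver regardless of the underlying interface, protocol, connector, or interconnect, matching the behavior described above.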
The graphics driver 214 can control, for example, PCIe graphics processors, PCIe Intel graphics (iGfx) displays, USB4 displays, and the like.

Referring now to FIG. 3, in use, computing device 100 may perform a method 300 for dynamic I/O power scaling. Method 300 may be performed by any suitable component or combination of components of computing device 100, including hardware, software, firmware, and the like. For example, some or all of method 300 may be performed by processor 102, memory 104, power policy engine 202, or the like. The method 300 begins at block 302, in which the computing device 100 performs an enumeration of connected devices. Computing device 100 may load device drivers 208 for connected devices and determine the power management capabilities of the device drivers 208 and corresponding devices. For example, computing device 100 may determine whether a device supports various power state configurations.

In block 304, the computing device 100 determines the power state of the connected devices. Computing device 100 may determine the power usage of various devices or components, the power delivered by power delivery controller 206, the storage data rate of data storage 108, the network data rate of communication circuitry 110, and the like.

In block 306, computing device 100 configures power settings. Computing device 100 may determine a threshold temperature in block 308 and may determine a power policy in block 310. The power policy may be one or more user-defined policies that control power and thermal management, such as policies stored in data storage 108, policies received through communication circuitry 110, and/or policies received from a user of computing device 100. These policies may define average power usage, maximum power usage, average data transfer rate, maximum data transfer rate, various thresholds for when to reduce or increase power availability or data rate availability of various components, various actions to be taken when various thresholds are exceeded, etc. The method 300 proceeds to block 312 in FIG.
4. Referring now to FIG. 4, in block 312 the computing device 100 receives power information from various components of the computing device 100. Computing device 100 or components thereof, such as processor 102 or power policy engine 202, receive power and operational information from various components of computing device 100. Power information may be received from, for example, power management controller 204, power delivery controller 206, device drivers 208, various components of computing device 100, and the like. In block 314, the computing device 100 receives current temperature information indicative of one or more temperatures of one or more components of the computing device 100.

Computing device 100 may receive power information from any suitable component in any suitable manner. For example, computing device 100 may receive current power usage, current data transfer usage, and/or current temperature from components such as processor 102, communication circuitry 110, graphics processor 112, power monitoring circuitry of computing device 100, temperature monitoring circuitry of computing device 100, and the like. In some embodiments, computing device 100 may receive information indicative of power provided to a device connected to the computing device, such as an external storage device powered by a USB Type-C connection.

In block 316, the computing device 100 receives operational information related to the current operation of the computing device 100. For example, computing device 100 may receive information about current processor 102 usage in block 318. In block 320, computing device 100 may receive information related to current I/O data transfer rates for various devices. Computing device 100 may receive information regarding current graphics processor 112 usage and, in block 324, may receive information regarding current power being delivered to bus-powered devices.
Operational information may include workload volume, workload type, workload priority, workload dependency on other components, storage data rate, network data rate, bus link status, display frame rate, display refresh rate, display resolution, etc.

In block 326, the computing device 100 determines whether to change the I/O power setting based on the received power information and/or operational information. Computing device 100 may process this information to determine whether to change power settings in any suitable manner, such as based on a power policy. For example, in block 328, computing device 100 may compare the current temperature to a threshold, or in block 330, may compare current and/or past power usage to a threshold. Computing device 100 may calculate whether power usage exceeds a threshold in any suitable manner, such as by comparing current power usage to a threshold, integrating past power usage over a particular time frame, calculating expected thermal effects, etc. In some embodiments, computing device 100 may additionally or alternatively monitor operational information to determine whether to make changes to power settings.

In block 332, if computing device 100 will not change the I/O power setting, method 300 loops back to block 312 to receive additional power information. If the computing device 100 is to change the I/O power setting, the method 300 proceeds to block 334, in which the computing device 100 changes the I/O power setting. Computing device 100 may change the I/O power settings in any suitable manner. In the illustrative embodiment, in block 336, computing device 100 instructs a device driver 208 to change the data transfer rate. For example, computing device 100 may instruct storage driver 210 to delay, throttle, or otherwise slow down operations on data storage 108. Computing device 100 may instruct network driver 212 to delay outgoing packets, or may instruct network driver 212 to slow down the connection.
Computing device 100 may instruct graphics driver 214 to render graphics at a lower frame rate, refresh display 114 at a lower refresh rate, render graphics at a lower resolution, display graphics on display 114 at a lower resolution, etc. In some embodiments, computing device 100 may instruct device driver 208 to reduce the data transfer rate (or increase the available data transfer limit) of the corresponding I/O device without providing specific instructions on how to reduce the data transfer rate, with the details left to the specific device driver 208. It should be appreciated that the component instructing device driver 208 does not need any knowledge of how power usage may be reduced for a particular communication protocol. In some embodiments, the component may instruct the device driver 208 to reduce the data transfer rate without providing a specific amount by which the device driver 208 should reduce the data transfer.

It should be appreciated that I/O data transfers of devices sending data to and from components within computing device 100 result in power usage by computing device 100 in order to properly handle the I/O data transfers. As a result, reducing the I/O data transfer rate reduces the power spent processing the I/O, thereby freeing up power for other components such as the processor 102.

In some embodiments, computing device 100 may send indications of device power states, such as power states D0, D1, D2, D3, etc., including possible power substates such as D0ix, D3 hot, D3 cold, etc., to some or all of device drivers 208. In some embodiments, some device drivers 208 and/or corresponding devices may support those power states, and some may not.

Additionally or alternatively, in some embodiments, computing device 100 may instruct other components, such as processor 102, memory 104, and/or graphics processor 112, to reduce power usage.
For example, computing device 100 may instruct processor 102, memory 104, and/or graphics processor 112 to implement dynamic voltage and frequency scaling. The method 300 then loops back to block 312 to receive additional power information.

EXAMPLES

Illustrative examples of the techniques disclosed herein are provided below. Embodiments of these techniques may include any one or more of the examples described below, and any combination thereof.

Example 1 includes a computing device for dynamic input/output (I/O) scaling, the computing device comprising: a processor; a memory communicatively coupled to the processor; a data store including a device driver for an I/O device connected to the computing device; and a power policy engine to: determine whether an I/O power setting of the computing device should be changed; and instruct the device driver, in response to determining that the I/O power setting of the computing device should be changed, to change the data transfer rate of the I/O device, wherein the device driver is to change the data transfer rate of the I/O device in response to the instruction from the power policy engine.

Example 2 includes the subject matter of Example 1, and wherein determining whether an I/O power setting of the computing device should be changed includes determining whether a current temperature of the computing device exceeds a threshold.

Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the I/O device is a graphics processor, wherein changing the data transfer rate of the I/O device comprises changing the frame rate or resolution of the output of the graphics processor.

Example 4 includes the subject matter of any of Examples 1-3, and wherein the I/O device is a data storage device, wherein changing a data transfer rate of the I/O device includes changing a data storage rate of the data storage device.

Example 5 includes the subject matter of any of Examples 1-4, and wherein the I/O device is a communication circuit, wherein changing a data transfer rate of the I/O device includes changing a network data rate of the communication circuit.

Example 6 includes the subject matter of any of Examples 1-5, and wherein instructing the device driver to change the data transfer rate of the I/O device includes instructing the device driver to place the I/O device in one power state of a predefined plurality of power states.

Example 7 includes the subject matter of any of Examples 1-6, and wherein the computing device is configured to enumerate a plurality of devices upon startup of the computing device to determine, for each device of the plurality of devices, whether the corresponding device supports being placed into a plurality of power states, wherein the plurality of devices includes the I/O device.

Example 8 includes the subject matter of any of Examples 1-7, and wherein the power policy engine is further to change a power delivery contract with the I/O device in response to determining that an I/O power setting of the computing device should be changed.

Example 9 includes the subject matter of any of Examples 1-8, and wherein the I/O device is a storage device, a communication circuit, or a graphics processor.

Example 10 includes a method for dynamic input/output (I/O) scaling, the method comprising: determining, by a power policy engine of a computing device, whether an I/O power setting of the computing device should be changed; instructing, by the power policy engine and in response to determining that the I/O power setting of the computing device should be changed, a device driver for an I/O device of the computing device to change the data transfer rate of the I/O device; and changing, by the device driver and in response to the instruction from the power policy engine, the data transfer rate of the I/O device.

Example 11 includes the subject matter of Example 10, and wherein determining whether an I/O power setting of the computing device should be changed includes determining whether a
current temperature of the computing device exceeds a threshold.

Example 12 includes the subject matter of any of Examples 10 and 11, and wherein the I/O device is a graphics processor, wherein changing the data transfer rate of the I/O device includes changing the frame rate or resolution of the output of the graphics processor.

Example 13 includes the subject matter of any of Examples 10-12, and wherein the I/O device is a data storage device, wherein changing a data transfer rate of the I/O device includes changing a data storage rate of the data storage device.

Example 14 includes the subject matter of any of Examples 10-13, and wherein the I/O device is a communication circuit, wherein changing a data transfer rate of the I/O device includes changing a network data rate of the communication circuit.

Example 15 includes the subject matter of any of Examples 10-14, and wherein instructing the device driver to change the data transfer rate of the I/O device includes instructing the device driver to place the I/O device in one power state of a predefined plurality of power states.

Example 16 includes the subject matter of any of Examples 10-15, and further comprising: enumerating a plurality of devices at startup of the computing device to determine, for each device of the plurality of devices, whether the corresponding device supports being placed into a plurality of power states, wherein the plurality of devices includes the I/O device.

Example 17 includes the subject matter of any of Examples 10-16, and further comprising: changing, by the power policy engine, a power delivery contract with the I/O device in response to determining that an I/O power setting of the computing device should be changed.

Example 18 includes one or more computer-readable media comprising a plurality of instructions stored thereon that, when executed by a computing device, cause the computing device to: determine, by a power policy engine of the computing device, whether an I/O power setting of the computing device should be changed; instruct, by the power policy engine and in response to determining that the I/O power setting of the computing device should be changed, a device driver for an I/O device of the computing device to change the data transfer rate of the I/O device; and change, by the device driver and in response to the instruction from the power policy engine, the data transfer rate of the I/O device.

Example 19 includes the subject matter of Example 18, and wherein determining whether an I/O power setting of the computing device should be changed includes determining whether a current temperature of the computing device exceeds a threshold.

Example 20 includes the subject matter of any of Examples 18 and 19, and wherein the I/O device is a graphics processor, wherein changing the data transfer rate of the I/O device comprises changing the frame rate or resolution of the output of the graphics processor.

Example 21 includes the subject matter of any of Examples 18-20, and wherein the I/O device is a data storage device, wherein changing a data transfer rate of the I/O device includes changing a data storage rate of the data storage device.

Example 22 includes the subject matter of any of Examples 18-21, and wherein the I/O device is a communication circuit, wherein changing a data transfer rate of the I/O device includes changing a network data rate of the communication circuit.

Example 23 includes the subject matter of any of Examples 18-22, and wherein instructing the device driver to change the data transfer rate of the I/O device includes instructing the device driver to place the I/O device in one power state of a predefined plurality of power states.

Example 24 includes the subject matter of any of Examples 18-23, and wherein the plurality of instructions further cause the computing device to enumerate a plurality of devices when the computing device starts up to determine, for each device of the plurality of devices, whether the corresponding device supports being placed in a plurality of power states, wherein the plurality of devices includes the I/O device.

Example 25 includes the subject matter of any of Examples 18-24, and wherein the plurality of instructions further cause the computing device to instruct the device driver to change, in response to determining that an I/O power setting of the computing device should be changed, the power delivery contract with the I/O device.

Example 26 includes a computing device for dynamic input/output (I/O) scaling, the computing device comprising: means for determining, by a power policy engine of the computing device, whether an I/O power setting of the computing device should be changed; means for instructing, by the power policy engine and in response to determining that the I/O power setting of the computing device should be changed, a device driver for an I/O device of the computing device to change the data transfer rate of the I/O device; and means for changing, by the device driver and in response to the instruction from the power policy engine, the data transfer rate of the I/O device.

Example 27 includes the subject matter of Example 26, and wherein determining whether an I/O power setting of the computing device should be changed includes determining whether a current temperature of the computing device exceeds a threshold.

Example 28 includes the subject matter of any of Examples 26 and 27, and wherein the I/O device is a graphics processor, wherein the means for changing a data transfer rate of the I/O device comprises means for changing the frame rate or resolution of the output of the graphics processor.

Example 29 includes the subject matter of any of Examples 26-28, and wherein the I/O device is a data storage device, wherein the means for changing a data transfer rate of the I/O device comprises means for changing a data storage rate of the data storage device.

Example 30 includes the subject matter of any one of Examples 26-29,
and wherein the I/O device is a communication circuit, wherein the means for changing a data transfer rate of the I/O device includes means for changing a network data rate of the communication circuit.

Example 31 includes the subject matter of any of Examples 26-30, and wherein the means for instructing the device driver to change the data transfer rate of the I/O device includes means for instructing the device driver to place the I/O device in one power state of a predefined plurality of power states.

Example 32 includes the subject matter of any of Examples 26-31, and further includes means for enumerating a plurality of devices at startup of the computing device to determine, for each device of the plurality of devices, whether the corresponding device supports being placed into a plurality of power states, wherein the plurality of devices includes the I/O device.

Example 33 includes the subject matter of any of Examples 26-32, and further comprising: means for changing, by the power policy engine and in response to determining that an I/O power setting of the computing device should be changed, a power delivery contract with the I/O device.
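The overall flow of method 300 described above (blocks 312-336) can be summarized as a simple control loop: gather power and temperature information, decide against thresholds whether the I/O power setting should change, and if so instruct the device drivers to change their data transfer rates. The callables standing in for hardware sensors and driver instructions below are hypothetical, not an actual interface from the disclosure.

```python
def dynamic_io_power_scaling(read_power_info, read_temperature,
                             temperature_threshold_c, power_threshold_w,
                             reduce_rate_callbacks, iterations):
    """Minimal sketch of the method 300 loop. `read_power_info` and
    `read_temperature` stand in for receiving power information
    (blocks 312/314); `reduce_rate_callbacks` stand in for instructing
    device drivers to change their data transfer rate (block 336)."""
    log = []
    for _ in range(iterations):
        power_w = read_power_info()   # block 312: receive power information
        temp_c = read_temperature()   # block 314: current temperature
        # blocks 326-330: compare temperature and power usage to thresholds
        if temp_c > temperature_threshold_c or power_w > power_threshold_w:
            # blocks 334/336: change the I/O power setting via the drivers
            for reduce_rate in reduce_rate_callbacks:
                reduce_rate()
            log.append("changed")
        else:
            log.append("unchanged")   # block 332: loop back for more info
    return log
```

Each pass of the loop corresponds to one trip from block 312 through block 332 or 336 and back, matching the "loops back to block 312" behavior described above.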
Some features pertain to an integrated device that includes a die and a first redistribution portion coupled to the die. The first redistribution portion includes at least one dielectric layer and a capacitor. The capacitor includes a first plate, a second plate, and an insulation layer located between the first plate and the second plate. The first redistribution portion further includes several first pins coupled to the first plate of the capacitor. The first redistribution portion further includes several second pins coupled to the second plate of the capacitor. In some implementations, the capacitor includes the first pins and/or the second pins. In some implementations, at least one pin from the several first pins traverses through the second plate to couple to the first plate of the capacitor. In some implementations, the second plate comprises a fin design.
CLAIMS1. An integrated device comprising:a die;a first redistribution portion coupled to the die, wherein the first redistribution portion comprises:at least one dielectric layer;a capacitor comprising:a first plate;a second plate;an insulation layer located between the first plate and the second plate;a plurality of first pins coupled to the first plate of the capacitor; and a plurality of second pins coupled to the second plate of the capacitor.2. The integrated device of claim 1, wherein at least one pin from the plurality of first pins traverses through the second plate to couple to the first plate of the capacitor.3. The integrated device of claim 1, wherein at least one pin from the plurality of first pins comprises at least one interconnect.4. The integrated device of claim 3, wherein the at least one interconnect comprises a via, a trace, and/or a pad.5. The integrated device of claim 1, wherein the second plate comprises a fin design.6. The integrated device of claim 5, wherein the insulation layer substantially forms over a contour of the fin design of the second plate.7. The integrated device of claim 1, wherein the first redistribution portion further comprises at least one input / output (I/O) pin that traverses through the first plate and the second plate of the capacitor.8. The integrated device of claim 1, wherein the insulation layer comprises a k value of at least 7.9. The integrated device of claim 1, further comprising a second redistribution portion, wherein the first redistribution portion comprises a plurality of first interconnects having a first spacing, and wherein the second redistribution portion comprises a plurality of second interconnects having a second spacing that is different than the first spacing.10. 
The integrated device of claim 1, wherein the integrated device is incorporated into a device selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, and a device in an automotive vehicle.11. An apparatus comprising:a die;a first redistribution portion coupled to the die, wherein the first redistribution portion comprises:at least one dielectric layer;a means for capacitance located in the at least one dielectric layer;a plurality of first pins coupled to a first portion of the means for capacitance; anda plurality of second pins coupled to a second portion of the means for capacitance.12. The apparatus of claim 11, wherein at least one pin from the plurality of first pins at least partially traverses through the means for capacitance.13. The apparatus of claim 11, wherein at least one pin from the plurality of first pins comprises at least one interconnect.14. The apparatus of claim 13, wherein the at least one interconnect comprises a via, a trace, and/or a pad.15. The apparatus of claim 11, wherein the means for capacitance comprises a fin design.16. The apparatus of claim 15, wherein the means for capacitance comprises an insulation layer that substantially forms over a contour of the fin design.17. The apparatus of claim 11, wherein the first redistribution portion further comprises at least one input / output (I/O) pin that traverses through the means for capacitance.18. The apparatus of claim 11, wherein the means for capacitance includes an insulation layer comprising a k value of at least 7.19. 
The apparatus of claim 11, further comprising a second redistribution portion, wherein the first redistribution portion comprises a plurality of first interconnects having a first spacing, and wherein the second redistribution portion comprises a plurality of second interconnects having a second spacing that is different than the first spacing.20. The apparatus of claim 11, wherein the apparatus is incorporated into a device selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, and a device in an automotive vehicle.21. A method for fabricating an integrated device, comprising:providing a die;coupling a first redistribution portion to the die, wherein coupling the first redistribution portion comprises:forming at least one dielectric layer; forming a capacitor in the at least one dielectric layer, wherein forming the capacitor comprises:forming a first plate;forming a second plate;forming an insulation layer between the first plate and the second plate;forming a plurality of first pins over the first plate of the capacitor; and forming a plurality of second pins over the second plate of the capacitor.22. The method of claim 21, wherein at least one pin from the plurality of first pins traverses through the second plate to couple to the first plate of the capacitor.23. The method of claim 21, wherein at least one pin from the plurality of first pins comprises at least one interconnect.24. The method of claim 23, wherein the at least one interconnect comprises a via, a trace, and/or a pad.25. The method of claim 21, wherein forming the second plate comprises forming a second plate that includes a fin design.26. 
The method of claim 25, wherein forming the insulation layer comprises substantially forming the insulation layer over a contour of the fin design of the second plate.27. The method of claim 21, wherein forming the first redistribution portion further comprises forming at least one input / output (I/O) pin that traverses through the first plate and the second plate of the capacitor.28. The method of claim 21, wherein the insulation layer comprises a k value of at least 20.29. The method of claim 21, further comprising forming a second redistribution portion, wherein the first redistribution portion comprises a plurality of first interconnects having a first spacing, and wherein the second redistribution portion comprises a plurality of second interconnects having a second spacing that is different than the first spacing.30. The method of claim 21, wherein the integrated device is incorporated into a device selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, and a device in an automotive vehicle.
INTEGRATED DEVICE COMPRISING A CAPACITOR THAT INCLUDES MULTIPLE PINS AND AT LEAST ONE PIN THAT TRAVERSES A PLATE OF THE CAPACITOR CROSS-REFERENCE TO RELATED APPLICATIONS[0001] This application claims priority to and the benefit of Non-Provisional Application No. 15/041,853, filed with the U.S. Patent and Trademark Office on February 11, 2016.BACKGROUNDField[0002] Various features relate to an integrated device that includes a capacitor, and more specifically to a capacitor that includes multiple pins and at least one pin that traverses a plate of the capacitor.Background[0003] FIG. 1 illustrates a configuration of an integrated device that includes a die. Specifically, FIG. 1 illustrates an integrated device 100 that includes a first die 102 and a package substrate 106. The package substrate 106 includes a dielectric layer and a plurality of interconnects 110. The package substrate 106 is a laminated substrate. The plurality of interconnects 110 includes traces, pads and/or vias. The first die 102 is coupled to the package substrate 106 through a first plurality of solder balls 112. The package substrate 106 is coupled to a printed circuit board (PCB) 108 through a second plurality of solder balls 116. FIG. 1 illustrates that a capacitor 120 is mounted on the PCB 108. The capacitor 120 is located external to the integrated device 100, and takes up a lot of real estate on the PCB 108.[0004] A drawback of the capacitor 120 shown in FIG. 1 is that it creates a device with a form factor that may be too large for the needs of mobile computing devices and/or wearable computing devices. This may result in a device that is either too large and/or too thick. That is, the combination of the integrated device 100, the capacitor 120 and the PCB 108 shown in FIG. 
1 may be too thick and/or have a surface area that is too large to meet the needs and/or requirements of mobile computing devices and/or wearable computing devices.[0005] Therefore, there is a need for an integrated device that includes a compact form factor, while at the same time meeting the needs and/or requirements of mobile devices, Internet of Things (IoT) devices, computing devices and/or wearable computing devices.SUMMARY[0006] Various features relate to a capacitor, and more specifically to a capacitor that includes multiple pins and at least one pin that traverses a plate of the capacitor.[0007] An example provides an integrated device that includes a die and a first redistribution portion coupled to the die. The first redistribution portion includes at least one dielectric layer and a capacitor. The capacitor includes a first plate, a second plate, and an insulation layer located between the first plate and the second plate. The first redistribution portion further includes several first pins coupled to the first plate of the capacitor. The first redistribution portion further includes several second pins coupled to the second plate of the capacitor.[0008] Another example provides an apparatus that includes a die and a first redistribution portion coupled to the die. The first redistribution portion includes at least one dielectric layer, a means for capacitance located in the at least one dielectric layer, a plurality of first pins coupled to a first portion of the means for capacitance, and a plurality of second pins coupled to a second portion of the means for capacitance.[0009] Another example provides a method for fabricating an integrated device. The method provides a die. The method couples a first redistribution portion to the die. The coupling of the first redistribution portion includes forming at least one dielectric layer. The coupling of the first redistribution portion includes forming a capacitor in the at least one dielectric layer. 
The forming of the capacitor includes forming a first plate, forming a second plate, and forming an insulation layer between the first plate and the second plate. The coupling of the first redistribution portion includes forming a plurality of first pins over the first plate of the capacitor, and forming a plurality of second pins over the second plate of the capacitor. DRAWINGS[0010] Various features, nature and advantages may become apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout.[0011] FIG. 1 illustrates an integrated device and a capacitor coupled to a printed circuit board (PCB).[0012] FIG. 2 illustrates a profile view of an example of a capacitor that includes multiple pins and at least one pin that traverses a plate.[0013] FIG. 3 illustrates a plan view of an example of a capacitor that includes multiple pins and at least one pin that traverses a plate.[0014] FIG. 4 illustrates a profile view of an integrated device that includes a capacitor that includes multiple pins and at least one pin that traverses a plate.[0015] FIG. 5 illustrates a profile view of an integrated device that includes a capacitor that includes multiple pins and at least one pin that traverses a plate.[0016] FIG. 6 illustrates a profile view of an integrated device that includes a capacitor that includes multiple pins and at least one pin that traverses a plate.[0017] FIG. 7 illustrates a profile view of an integrated device that includes a capacitor that includes multiple pins and at least one pin that traverses a plate.[0018] FIGS. 8A-8D illustrate an example of a sequence for fabricating an integrated device that includes a capacitor that includes multiple pins and at least one pin that traverses a plate of the capacitor.[0019] FIG. 
9 illustrates an example of a flow diagram of a method for fabricating an integrated device that includes a capacitor that includes multiple pins and at least one pin that traverses a plate of the capacitor.[0020] FIG. 10 illustrates an example of an integrated device that includes a capacitor that includes multiple pins and at least one pin that traverses a plate.[0021] FIGS. 11A-11B illustrate an example of a sequence for fabricating a capacitor that includes multiple pins.[0022] FIG. 12 illustrates a profile view of an integrated device that includes a capacitor that includes multiple pins.[0023] FIGS. 13A-13D illustrate an example of a sequence for fabricating an integrated device that includes a capacitor that includes multiple pins.[0024] FIG. 14 illustrates various electronic devices that may integrate an integrated device, an integrated device package, a semiconductor device, a die, an integrated circuit, a substrate, an interposer, a package-on-package device, and/or PCB described herein.DETAILED DESCRIPTION[0025] In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, circuits may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well-known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure.Overview[0026] Some features pertain to an integrated device that includes a die and a first redistribution portion coupled to the die. The first redistribution portion includes at least one dielectric layer and a capacitor. The capacitor includes a first plate, a second plate, and an insulation layer located between the first plate and the second plate. 
The first redistribution portion further includes several first pins coupled to the first plate of the capacitor. The first redistribution portion further includes several second pins coupled to the second plate of the capacitor. In some implementations, the capacitor includes the first pins and/or the second pins. In some implementations, at least one pin from the several first pins traverses through the second plate to couple to the first plate of the capacitor. In some implementations, at least one pin from the several first pins comprises at least one interconnect. In some implementations, the second plate comprises a fin design. In some implementations, the insulation layer substantially forms over a contour of the fin design of the second plate. In some implementations, the insulator includes a k value of at least 20.Exemplary Capacitor Comprising Multiple Pins[0027] FIG. 2 illustrates an example of a capacitor 200 that includes multiple pins. As shown in FIG. 2, the capacitor 200 includes a first plate 260, a second plate 262, and an insulation layer 264. In some implementations, the capacitor 200 is a metal insulator metal (MIM) capacitor. The capacitor 200 may be a means for capacitance. The capacitor 200 may be implemented in different devices. As will be further described below, the capacitor 200 may be implemented in a redistribution portion (e.g., fan out (FO) portion) of an integrated device (e.g., integrated device package) and/or a package substrate. [0028] The insulation layer 264 is located between the first plate 260 (e.g., top plate) and the second plate 262 (e.g., bottom plate). The first plate 260 may include a first electrically conductive plate (e.g., first metal plate). The second plate 262 may include a second electrically conductive plate (e.g., second metal plate). FIG. 2 illustrates that the first plate 260 and the second plate 262 are substantially flat. 
However, in some implementations, the first plate 260 and/or the second plate 262 may include a U or V shape.[0029] In some implementations, the insulation layer 264 includes a material that has a k value of at least 7 (e.g., Silicon Nitride (SiN), which has k~7). However, different implementations may use a material with a different k value. For example, in some implementations, the insulation layer 264 includes a material that has a k value of at least 20. In some implementations, the insulation layer 264 has a thickness of about 50 nanometers (nm) or less.[0030] FIG. 2 illustrates that a pin 270 (e.g., first pin) and a pin 272 (e.g., first pin) are coupled to the first plate 260 (e.g., coupled to a first surface of the first plate 260). A pin 290 and a pin 292 are coupled to the first plate (e.g., coupled to a second surface of the first plate 260). The pin 290 and the pin 292 traverse through the second plate 262 to couple to the first plate 260.[0031] FIG. 2 also illustrates that a pin 280 (e.g., second pin) and a pin 282 are coupled to the second plate 262 (e.g., coupled to a first surface of the second plate 262). The pin 280 and the pin 282 traverse through the first plate 260 and the insulation layer 264, to couple to the second plate 262. A dielectric layer 266 is located between the pin 280 and the first plate 260 so that pin 280 is not directly touching the first plate 260. Similarly, a dielectric layer 266 is located between the pin 282 and the first plate 260 so that pin 282 is not directly touching the first plate 260.[0032] In addition, a dielectric layer 268 is located between the pin 290 and the second plate 262 so that the pin 290 is not directly touching the second plate 262. 
Similarly, a dielectric layer 268 is located between the pin 292 and the second plate 262 so that the pin 292 is not directly touching the second plate 262.[0033] In some implementations, the pin 270, the pin 272, the pin 290, and the pin 292 are configured to provide one or more electrical paths for a power signal (e.g., Vdd). In some implementations, the pins 270 and 272 are configured to provide a distributed power supply to a chip (e.g., die 404), and the pins 290 and 292 are configured to reduce an IR drop (voltage drop) from a power source to the capacitor. In some implementations, the pin 280 and the pin 282 are configured to provide one or more electrical paths for a ground reference signal (e.g., Vss). Different implementations may configure the pins to provide different electrical paths for different signals.[0034] FIG. 2 further illustrates a pin 294 that traverses through the first plate 260, the insulation layer 264 and the second plate 262. In some implementations, the pin 294 is an input / output (I/O) pin configured to provide an electrical path for an input / output signal. The pin 294 is not in direct contact with the first plate 260 and the second plate 262 of the capacitor 200. The dielectric layer 266 is located between the pin 294 and the first plate 260 so that the pin 294 is not directly touching the first plate 260. Similarly, the dielectric layer 268 is located between the pin 294 and the second plate 262 so that the pin 294 is not directly touching the second plate 262.[0035] One or more pins may include one or more interconnects. An interconnect may include a trace, a via, and/or a pad. In some implementations, an interconnect is an element or component of a device or package that allows or facilitates an electrical connection and/or electrical path between two points, elements and/or components. 
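The insulation-layer parameters given above (a k value of about 7, e.g., SiN, and a thickness of about 50 nm) determine the capacitance of such a MIM structure through the standard parallel-plate relation C = k·ε0·A/t. The sketch below only illustrates that relation; the 100 x 100 micron plate area used for the total is an assumed example value, not a requirement of the disclosure.

```python
# Illustrative parallel-plate estimate for a MIM capacitor such as the
# capacitor 200: C = k * eps0 * A / t. The k value (~7, e.g., SiN) and the
# ~50 nm insulation thickness come from the description above; the plate
# area is an assumed example value.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def mim_capacitance(k, thickness_m, area_m2):
    """Parallel-plate capacitance in farads."""
    return k * EPS0 * area_m2 / thickness_m

# Capacitance density for k = 7 and a 50 nm insulation layer
# (1 F/m^2 == 1000 fF/um^2):
density = mim_capacitance(7, 50e-9, 1.0)
print(f"{density * 1e3:.2f} fF/um^2")  # -> 1.24 fF/um^2

# Total capacitance for an assumed 100 x 100 um plate area:
total = mim_capacitance(7, 50e-9, (100e-6) ** 2)
print(f"{total * 1e12:.2f} pF")  # -> 12.40 pF
```

Raising k to 20, as mentioned for some implementations, scales the same estimate by 20/7, to roughly 3.5 fF/um^2.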
In some implementations, an interconnect may include a trace, a via, a pad, a pillar, a redistribution metal layer, and/or an under bump metallization (UBM) layer. In some implementations, an interconnect is an electrically conductive material that may be configured to provide an electrical path for a signal (e.g., data signal, ground signal, power signal). An interconnect may be part of a circuit. An interconnect may include more than one element or component.[0036] FIG. 3 illustrates a plan view (e.g., top view) of the capacitor 200. As shown in FIG. 3, several pins are coupled to the capacitor 200 or traverse through the capacitor 200. The pins are arranged in a row and/or column. FIG. 3 illustrates that several pins 270, several pins 272, several pins 280 and several pins 282 may be coupled to the capacitor 200. FIG. 3 also illustrates that several pins 294 may traverse through the capacitor 200.[0037] The above design of the capacitor 200 provides a capacitor that is compact and that minimizes blocking of interconnects by allowing interconnects for input / output (I/O) signals to traverse through the capacitor 200. In some implementations, one or more input / output (I/O) interconnects or I/O pins may traverse through one or both plates of the capacitor 200.[0038] In some implementations, the capacitor (e.g., capacitor 200, capacitor 400) may include a size of about 2000 x 2000 microns (μm) or less (e.g., about 100 x 100 microns (μm)). In some implementations, the pitch (e.g., center to center distance) of two neighboring pins (e.g., power pins, input/output (I/O) pins) may be about 300 microns (μm) or less (e.g., about 20-300 microns (μm)). The above dimensions may apply to other capacitors described in the present disclosure.[0039] In some implementations, the capacitor 200 may include a fin design. An example of a capacitor that includes a fin design is further illustrated and described below in at least FIG. 
12.Exemplary Integrated Device Comprising a Capacitor That Includes Multiple Pins[0040] FIG. 4 illustrates an integrated device 401 that includes a capacitor 400 that includes multiple pins. The integrated device 401 (e.g., integrated device package) includes a redistribution portion 402, a die 404, an encapsulation layer 410, an underfill 412, and a plurality of interconnects 430.[0041] The redistribution portion 402 includes at least one first dielectric layer 420, at least one second dielectric layer 422, and a solder resist layer 424. The capacitor 400 is located at least partially in the at least one first dielectric layer 420.[0042] The redistribution portion 402 includes a plurality of first interconnects 421 in the at least one first dielectric layer 420. The redistribution portion also includes a plurality of second interconnects 423 in the at least one second dielectric layer 422. In some implementations, the plurality of first interconnects 421 includes a finer pitch and a finer spacing than the plurality of second interconnects 423. That is, in some implementations, the plurality of first interconnects 421 includes a first pitch that is less than a second pitch of the plurality of second interconnects 423. In some implementations, the plurality of first interconnects 421 includes a first spacing that is less than a second spacing of the plurality of second interconnects 423.[0043] The die 404 is coupled to the redistribution portion 402 through the plurality of interconnects 430. The plurality of interconnects 430 may include bumps and/or solder interconnects (e.g., solder balls). The underfill 412 is located between the die 404 and the redistribution portion 402. The underfill 412 may at least partially surround the plurality of interconnects 430. The encapsulation layer 410 at least partially encapsulates the die 404. The encapsulation layer 410 may include a mold, an epoxy and/or a resin material.[0044] As shown in FIG. 
4, the capacitor 400 includes a first plate 460 (e.g., top plate), a second plate 462 (e.g., bottom plate), and an insulation layer 464. In some implementations, the capacitor 400 is a metal insulator metal (MIM) capacitor. The capacitor 400 may be a means for capacitance.[0045] FIG. 4 illustrates that a pin 470 (e.g., first pin) and a pin 472 (e.g., first pin) are coupled to the first plate 460 (e.g., coupled to a first surface of the first plate 460). A pin 490 and a pin 492 are coupled to the first plate (e.g., coupled to a second surface of the first plate 460). The pin 490 traverses through the second plate 462 to couple to the first plate 460.[0046] FIG. 4 also illustrates that a pin 480 (e.g., second pin) and a pin 482 are coupled to the second plate 462 (e.g., coupled to a first surface of the second plate 462). The pin 480 and the pin 482 traverse through the first plate 460 and the insulation layer 464, to couple to the second plate 462. A pin 484 and a pin 486 are coupled to the second plate (e.g., coupled to a second surface of the second plate 462). A dielectric layer 466 is located between the pin 480 and the first plate 460 so that pin 480 is not directly touching the first plate 460.[0047] Similarly, a dielectric layer 468 is located between the pin 490 and the second plate 462 so that the pin 490 is not directly touching the second plate 462. In some implementations, the dielectric layer 466 and the dielectric layer 468 are part of the dielectric layer 420. In some implementations, the dielectric layer 466, the dielectric layer 468, and the dielectric layer 420 are all the same dielectric layer.[0048] In some implementations, the pin 470, the pin 472, the pin 490, and the pin 492 are configured to provide one or more electrical paths for a power signal (e.g., Vdd). In some implementations, the pin 480, the pin 482, the pin 484, and the pin 486 are configured to provide one or more electrical paths for a ground reference signal (e.g., Vss). 
Different implementations may configure the pins to provide different electrical paths for different signals.[0049] In some implementations, at least one pin may traverse through one or both of the first plate 460 and the second plate 462. The pin may be an input / output (I/O) pin as described above for the pin 294 of FIG. 2.[0050] In some implementations, several pins (e.g., pins 470, 472, 480, 482, 484, 486, 490, 492) may be coupled to the capacitor 400 or traverse through the capacitor 400, as described in FIG. 3. For example, there may be several pins 470, 472, 480, 482, 484, 486, 490, and/or 492. The pins may be arranged in rows and/or columns as described in FIG. 3. Although not shown in FIG. 4, one or more pins may traverse both the first plate 460 and the second plate 462, in a manner similar to the pin 294 of FIG. 2. [0051] A pin coupled to a capacitor (e.g., coupled to a plate of a capacitor) is configured to provide an electrical path to and from a capacitor. There are several advantages to providing a pin that traverses through one or both plates (e.g., first plate 460, second plate 462). One, an electrical path through one or more plates is a more direct path to and from a die, which means a shorter path to and from the die. Two, an electrical path through one or more plates means that the electrical path does not need to be routed around the capacitor, saving space and real estate in the substrate, which can result in an overall smaller form factor for the integrated device. Three, a more direct path for the power signal and/or ground reference signal means that less material is used, thereby reducing the cost of fabricating the integrated device. Four, a direct and shorter electrical path means less voltage drop between the die and the capacitor, which means better signal performance. In some implementations, providing and coupling several pins to a capacitor provides lower inductance to and from the capacitor. 
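The fourth advantage above (a direct, shorter electrical path means less voltage drop) follows from the basic relations R = ρL/A and V = I·R. The sketch below is a minimal illustration of that arithmetic; the conductor cross-section, path lengths, and supply current are all assumed example values, not values from this disclosure.

```python
# Illustrative IR-drop comparison: a pin that traverses a capacitor plate
# gives a shorter conductor between the die and the capacitor than a path
# routed around the capacitor. With R = rho * L / A, the shorter path has
# less resistance, so the drop V = I * R at a given current is smaller.
RHO_CU = 1.68e-8                # resistivity of copper, ohm*m
WIDTH, THICKNESS = 20e-6, 5e-6  # assumed conductor cross-section, m

def ir_drop(length_m, current_a):
    """Voltage drop (V) along a copper conductor of the assumed cross-section."""
    resistance = RHO_CU * length_m / (WIDTH * THICKNESS)
    return current_a * resistance

direct = ir_drop(200e-6, 1.0)   # assumed ~200 um through-plate path
routed = ir_drop(2000e-6, 1.0)  # assumed ~2 mm path around the capacitor
print(f"direct: {direct * 1e3:.1f} mV, routed: {routed * 1e3:.1f} mV")
# -> direct: 33.6 mV, routed: 336.0 mV
```

Under these assumptions, the tenfold shorter path yields a tenfold smaller drop at the same supply current, consistent with the advantage described above.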
In some implementations, providing and coupling several pins is possible due to the fabrication processes described in the present disclosure.[0052] In some implementations, the capacitor 400 may include a fin design. An example of a capacitor that includes a fin design is further illustrated and described below in at least FIG. 12.[0053] FIG. 5 illustrates another example of an integrated device 501 that includes a capacitor 500. The capacitor 500 may be a means for capacitance. The integrated device 501 (e.g., integrated device package) is similar to the integrated device 401 of FIG. 4, except that the integrated device 501 includes a capacitor (e.g., capacitor 500) with a different design.[0054] The integrated device 501 includes a redistribution portion 502 coupled to the die 404. The redistribution portion 502 includes the capacitor 500, the at least one first dielectric layer 520 and the at least one second dielectric layer 522.[0055] As shown in FIG. 5, the capacitor 500 is at least partially located in the at least one first dielectric layer 520 and the at least one second dielectric layer 522. The capacitor 500 includes a first plate 560 (e.g., top plate), a second plate 562 (e.g., bottom plate), and an insulation layer 564. The insulation layer 564 is located between the first plate 560 and the second plate 562.[0056] A pin 570 is coupled to the first plate 560, and a pin 590 is coupled to the first plate 560. A pin 580 is coupled to the second plate 562, and a pin 588 is coupled to the second plate 562. A pin may include one or more interconnects (e.g., via, pad, trace). The pin 580 traverses through the first plate 560 to couple to the second plate 562. In FIG. 5, the pin 580 includes two interconnects. However, the pin 580 may include different numbers of interconnects. The pin 590 traverses through the second plate 562 to couple to the first plate 560. The dielectric layer 520 may separate the pin 580 and the first plate 560. 
Similarly, the dielectric layer 522 may separate the pin 590 and the second plate 562.[0057] In some implementations, the pin 570 and the pin 590 are configured to provide one or more electrical paths for a power signal (e.g., Vdd). In some implementations, the pin 580 and the pin 588 are configured to provide one or more electrical paths for a ground reference signal (e.g., Vss). Different implementations may configure the pins to provide different electrical paths for different signals.[0058] In some implementations, several pins (e.g., pins 570, 580, 588, 590) may be coupled to the capacitor 500 or traverse through the capacitor 500, as described in FIG. 3. Although not shown in FIG. 5, one or more pins may traverse both the first plate 560 and the second plate 562, in a manner similar to the pin 294 of FIG. 2.[0059] In some implementations, the capacitor 500 may include a fin design. An example of a capacitor that includes a fin design is further illustrated and described below in at least FIG. 12.[0060] FIG. 6 illustrates another example of an integrated device 601 that includes a capacitor 600. The capacitor 600 may be a means for capacitance. The integrated device 601 (e.g., integrated device package) is similar to the integrated device 401 of FIG. 4, except that the integrated device 601 includes a capacitor (e.g., capacitor 600) with a different design.[0061] As shown in FIG. 6, the capacitor 600 is located in the redistribution portion 402. In particular, the capacitor 600 is at least partially located in the at least one first dielectric layer 620 and the at least one second dielectric layer 622. The capacitor 600 includes a first plate 660 (e.g., top plate), a second plate 662 (e.g., bottom plate), and an insulation layer 664. The insulation layer 664 is located between the first plate 660 and the second plate 662.[0062] The pin 670, the pin 672 and the pin 682 are coupled to the first plate 660. The pin 680 is coupled to the second plate 662. 
As shown in FIG. 6, the pin 670, the pin 672, the pin 682, and the pin 680 may each include one or more interconnects (e.g., trace, pad, via). [0063] In some implementations, the pin 670, the pin 672, and the pin 682 are configured to provide one or more electrical paths for a power signal (e.g., Vdd). In some implementations, the pin 680 is configured to provide one or more electrical paths for a ground reference signal (e.g., Vss). Different implementations may configure the pins to provide different electrical paths for different signals.[0064] In some implementations, several pins (e.g., pins 670, 672, 680, 682) may be coupled to the capacitor 600 or traverse through the capacitor 600, as described in FIG. 3. Although not shown in FIG. 6, one or more pins may traverse both the first plate 660 and the second plate 662, in a manner similar to the pin 294 of FIG. 2.[0065] FIG. 7 illustrates another example of an integrated device 701 that includes a capacitor 700. The capacitor 700 may be a means for capacitance. The integrated device 701 (e.g., integrated device package) is similar to the integrated device 601 of FIG. 6, except that the integrated device 701 includes a capacitor (e.g., capacitor 700) with a different design.[0066] In some implementations, the capacitor 700 may include a fin design. An example of a fin design is further illustrated and described below in at least FIG. 12.[0067] As shown in FIG. 7, the capacitor 700 is located in the redistribution portion 402. In particular, the capacitor 700 is at least partially located in the at least one first dielectric layer 720 and the at least one second dielectric layer 722. The capacitor 700 includes a first plate 760 (e.g., top plate), a second plate 762 (e.g., bottom plate), and an insulation layer 764. The insulation layer 764 is located between the first plate 760 and the second plate 762.[0068] A pin may include one or more interconnects (e.g., trace, pad, via). 
The pin 770 and the pin 772 may each include several interconnects. The pin 770 and the pin 772 are coupled to the first plate 760. FIG. 7 also illustrates that an interconnect of the pin 772 traverses through the second plate 762. The pin 780 and the pin 782 are coupled to the second plate 762. The pin 780 and the pin 782 may each include several interconnects.[0069] In some implementations, the pin 770 and the pin 772 are configured to provide one or more electrical paths for a power signal (e.g., Vdd). In some implementations, the pin 780 and the pin 782 are configured to provide one or more electrical paths for a ground reference signal (e.g., Vss). Different implementations may configure the pins to provide different electrical paths for different signals. [0070] In some implementations, several pins (e.g., pins 770, 772, 780, 782) may be coupled to the capacitor 700 or traverse through the capacitor 700, as described in FIG. 3. Although not shown in FIG. 7, one or more pins may traverse both the first plate 760 and the second plate 762, in a manner similar to the pin 294 of FIG. 2.[0071] In some implementations, the capacitor 700 may include a fin design. An example of a capacitor that includes a fin design is further illustrated and described below in at least FIG. 12.[0072] In some implementations, one or more input/output (I/O) interconnects or I/O pins may traverse through one or both plates of the above capacitors (e.g., capacitors 400, 500, 600, 700), as described above in FIGS. 2 and 3 (see, e.g., pin 294). In such instances, an input/output (I/O) pin or an input/output (I/O) interconnect that traverses one or both plates of a capacitor may include one or more interconnects (e.g., trace, via, pad).Exemplary Sequence for Fabricating an Integrated Device Comprising a Capacitor[0073] In some implementations, providing / fabricating an integrated device that includes a capacitor with multiple pins includes several processes. FIG.
8 (which includes FIGS. 8A-8D) illustrates an exemplary sequence for providing / fabricating an integrated device (e.g., integrated device package) that includes a capacitor with multiple pins. In some implementations, the sequence of FIGS. 8A-8D may be used to fabricate the integrated device that includes a capacitor with multiple pins of FIGS. 4-7 and/or other integrated devices described in the present disclosure. FIGS. 8A-8D will be described in the context of providing / fabricating the integrated device of FIG. 4. In particular, FIGS. 8A-8D will be described in the context of fabricating the integrated device 401 of FIG. 4.[0074] It should be noted that the sequence of FIGS. 8A-8D may combine one or more stages in order to simplify and/or clarify the sequence for providing an integrated device. In some implementations, the order of the processes may be changed or modified.[0075] Stage 1 of FIG. 8A illustrates a carrier 800 being provided. The carrier 800 may be used as a temporary base to fabricate an integrated device. [0076] Stage 2 illustrates a die 404 coupled to the carrier 800 through a plurality of interconnects 430. The plurality of interconnects 430 may include pillars and solder interconnects (e.g., solder balls).[0077] Stage 3 illustrates an underfill 412 being provided between the die 404 and the carrier 800. The underfill 412 may at least partially surround the plurality of interconnects 430.[0078] Stage 4 illustrates an encapsulation layer 410 that is formed over the die 404 and the carrier 800. The encapsulation layer 410 may at least partially encapsulate the die 404 and the underfill 412.[0079] Stage 5 illustrates a state after the carrier 800 has been decoupled (e.g., removed, ground away) from the encapsulation layer 410, the underfill 412, and the plurality of interconnects 430.
In some implementations, decoupling the carrier 800 may also remove portions of the encapsulation layer 410, the underfill 412, and/or the plurality of interconnects 430. For example, solder interconnects and/or the pillars of the plurality of interconnects 430 may be removed during the decoupling of the carrier 800.[0080] Stage 6 illustrates a plurality of interconnects 811 formed on the plurality of interconnects 430. In some implementations, a plating process may be used to form one or more metal layers (e.g., seed layer, metal layer) on the plurality of interconnects 430.[0081] Stage 7 illustrates a dielectric layer 812 formed over the encapsulation layer 410 and the underfill 412. Different implementations may use different materials for the dielectric layer 812.[0082] Stage 8 of FIG. 8B illustrates a plurality of interconnects 813 formed on the plurality of interconnects 811 and the dielectric layer 812. In some implementations, a plating process may be used to form one or more metal layers (e.g., seed layer, metal layer) on the plurality of interconnects 811 and the dielectric layer 812.[0083] Stage 9 illustrates a dielectric layer 814 formed over the dielectric layer 812. Different implementations may use different materials for the dielectric layer 814.[0084] Stage 10 illustrates an insulation layer 464 formed over some of the plurality of interconnects 813. In some implementations, the insulation layer 464 includes a material that has a k value of at least 7. However, different implementations may use a material with a different k value. In some implementations, the insulation layer 464 has a thickness of about 50 nanometers (nm) or less.[0085] Stage 11 illustrates a dielectric layer 816 formed over the dielectric layer 814. Different implementations may use different materials for the dielectric layer 816.
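As an illustrative aside (not part of the original disclosure), the insulation-layer parameters noted at Stage 10, namely a k value of at least 7 and a thickness of about 50 nm or less, can be related to capacitance per unit area through the standard parallel-plate relation C/A = k * eps0 / d. A minimal sketch, assuming the nominal values k = 7 and d = 50 nm:

```python
# Illustrative estimate only: the nominal values (k = 7, d = 50 nm) are taken
# from the ranges stated in the text; actual device parameters may differ.
EPSILON_0 = 8.854e-12  # vacuum permittivity, in farads per meter


def capacitance_per_area(k: float, thickness_m: float) -> float:
    """Parallel-plate capacitance per unit area, C/A = k * eps0 / d, in F/m^2."""
    return k * EPSILON_0 / thickness_m


density = capacitance_per_area(k=7, thickness_m=50e-9)
print(f"{density:.3e} F/m^2")          # about 1.24e-03 F/m^2
print(f"{density * 1e3:.2f} fF/um^2")  # about 1.24 fF/um^2
```

A higher k value or a thinner insulation layer raises this density; the fin design of FIG. 12 instead raises the effective plate area for a given device footprint.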
[0086] Stage 12 illustrates a plurality of cavities 817 formed through the plurality of interconnects 813, the insulation layer 464, the dielectric layer 812, the dielectric layer 814 and/or the dielectric layer 816. Different implementations may use different processes for forming the plurality of cavities 817. In some implementations, a laser process may be used to form the plurality of cavities 817.[0087] Stage 13 of FIG. 8C illustrates a metal layer 819 formed in the plurality of cavities 817, over the insulation layer 464, and the dielectric layer 816. In some implementations, portions of the metal layer 819 may form the second plate 462 and pins (e.g., pin 482) of a capacitor. In some implementations, a capacitor (e.g., capacitor 400) may include portions of the plurality of interconnects 813, the insulation layer 464 and portions of the metal layer 819 (e.g., second plate 462).[0088] Stage 14 illustrates a dielectric layer 820 formed over the dielectric layer 816 and the metal layer 819. Different implementations may use different materials for the dielectric layer 820.[0089] Stage 15 illustrates a plurality of cavities 821 formed through the dielectric layer 420. In some implementations, the dielectric layer 420 includes the dielectric layer 812, the dielectric layer 814, the dielectric layer 816 and/or the dielectric layer 820. Different implementations may use different processes for forming the plurality of cavities 821. In some implementations, a laser process may be used to form the plurality of cavities 821.[0090] Stage 16 illustrates a plurality of interconnects 823 formed in the plurality of cavities 821 and over the dielectric layer 420.
In some implementations, a plating process may be used to form one or more metal layers (e.g., seed layer, metal layer) in the plurality of cavities 821 and over the dielectric layer 420.[0091] Stage 17 illustrates a dielectric layer 822 and a plurality of interconnects 825 being formed over the dielectric layer 420 and the plurality of interconnects 823.[0092] Stage 18 of FIG. 8D illustrates a dielectric layer 824 and a plurality of interconnects 827 being formed over the dielectric layer 822 and the plurality of interconnects 825.[0093] Stage 19 illustrates a solder resist layer 424 formed over the dielectric layer 422. The dielectric layer 422 may include the dielectric layer 822 and/or the dielectric layer 824.[0094] Stage 20 illustrates a plurality of solder interconnects 450 formed over the plurality of interconnects 827. In some implementations, stage 20 may illustrate an integrated device 401 that includes a capacitor 400 that includes multiple pins. The integrated device 401 includes a die 404, an encapsulation layer 410, an underfill 412, and a plurality of interconnects 430. The integrated device 401 (e.g., integrated device package) may also include at least one first dielectric layer 420, at least one second dielectric layer 422, and a solder resist layer 424. The capacitor 400 is located at least partially in the at least one first dielectric layer 420. The integrated device 401 may also include a plurality of first interconnects 421 in the at least one first dielectric layer 420, and a plurality of second interconnects 423 in the at least one second dielectric layer 422. The plurality of first interconnects 421 may include the plurality of interconnects 813 and/or 823.
The plurality of second interconnects 423 may include the plurality of interconnects 823, 825 and/or 827.Exemplary Flow Diagram of a Method for Fabricating an Integrated Device Comprising a Capacitor[0095] In some implementations, providing / fabricating an integrated device that includes a capacitor with multiple pins includes several processes. FIG. 9 illustrates an exemplary flow diagram of a method 900 for providing / fabricating an integrated device (e.g., integrated device package) that includes a capacitor with multiple pins. In some implementations, the method 900 may be used to fabricate the integrated device that includes a capacitor with multiple pins of FIGS. 4-7 and/or other integrated devices described in the present disclosure.[0096] It should be noted that the sequence of FIG. 9 may combine one or more processes in order to simplify and/or clarify the method for fabricating an integrated device. In some implementations, the order of the processes may be changed or modified.[0097] The method places (at 905) a die on a carrier. For example, the method may place the die 404 over a carrier 800. The die 404 may include a plurality of interconnects 430, and the method places the die 404 comprising the plurality of interconnects 430 over the carrier 800.[0098] The method forms (at 910) an encapsulation layer over the die and the carrier. For example, the method may form the encapsulation layer 410 over the die 404 and the carrier 800. In some implementations, the method may also form an underfill (e.g., underfill 412) between the die 404 and the carrier 800 prior to forming the encapsulation layer 410. [0099] The method decouples (at 915) the carrier from the die and the encapsulation layer. For example, the method may remove (e.g., detach, grind) the carrier 800 from the die 404 and the encapsulation layer 410.[00100] The method forms (at 920) a redistribution portion that includes a capacitor with multiple pins. 
In some implementations, the capacitor includes at least one pin that traverses through a plate of the capacitor. For example, the method may form a redistribution portion 402 that includes a dielectric layer 420 and the capacitor 400. In some implementations, a redistribution portion may include a capacitor with another design.[00101] The method couples (at 925) a solder ball to the redistribution portion. For example, the method may provide and form a plurality of solder interconnects 450 over interconnects from the redistribution portion 402.Exemplary Integrated Device Comprising a Capacitor That Includes Multiple Pins[00102] FIG. 10 illustrates an integrated device 1001 that includes a capacitor 1000 that includes multiple pins. The integrated device 1001 (e.g., integrated device package) includes a package substrate 1002, a die 1004, an encapsulation layer 1010, a plurality of interconnects 1030, and the capacitor 1000.[00103] The package substrate 1002 is a laminate substrate that includes several dielectric layers 1020. The dielectric layers 1020 may include a core layer and a prepreg layer. The capacitor 1000 is a capacitor that is formed separately from the package substrate 1002 and then placed in the package substrate 1002 during a fabrication of the package substrate 1002. For example, the capacitor 1000 may be placed in a cavity of the package substrate 1002, and configured to be coupled to interconnects from the plurality of interconnects 1023 of the package substrate 1002. The capacitor 1000 may be configured to be coupled to interconnects from the plurality of interconnects 1030.[00104] The capacitor 1000 includes a dielectric layer 1099, a first plate 1060, a second plate 1062, and an insulation layer 1064. A pin 1070 and a pin 1072 are coupled to the first plate 1060. A pin 1080 and a pin 1082 are coupled to the second plate 1062. A pin may include one or more interconnects (e.g., trace, via, pad).
The first plate 1060, the second plate 1062, the insulation layer 1064, the pin 1070, the pin 1072, the pin 1080 and the pin 1082 are located within the dielectric layer 1099. [00105] FIG. 10 illustrates one design of the capacitor 1000 that includes the first plate 1060, the second plate 1062 and the insulation layer 1064. However, the capacitor 1000 may include any of the capacitor designs described in the present disclosure.Exemplary Sequence for Fabricating a Capacitor Comprising Multiple Pins[00106] In some implementations, providing / fabricating a capacitor with multiple pins includes several processes. FIG. 11 (which includes FIGS. 11A-11B) illustrates an exemplary sequence for providing / fabricating a capacitor with multiple pins. In some implementations, the sequence of FIGS. 11A-11B may be used to fabricate the capacitor with multiple pins of FIG. 10 and/or other capacitors described in the present disclosure. FIGS. 11A-11B will be described in the context of providing / fabricating the capacitor of FIG. 10. In particular, FIGS. 11A-11B will be described in the context of fabricating the capacitor 1000 of FIG. 10.[00107] It should be noted that the sequence of FIGS. 11A-11B may combine one or more stages in order to simplify and/or clarify the sequence for providing a capacitor with multiple pins. In some implementations, the order of the processes may be changed or modified.[00108] Stage 1 of FIG. 11A illustrates a carrier 1100 being provided. The carrier 1100 may be used as a temporary base to fabricate a capacitor.[00109] Stage 2 illustrates a dielectric layer 1102 formed over the carrier 1100.[00110] Stage 3 illustrates a plurality of cavities 1103 formed in the dielectric layer 1102. In some implementations, a lithography process may be used to form (e.g., etch) the plurality of cavities 1103 in the dielectric layer 1102.[00111] Stage 4 illustrates at least one metal layer (e.g., seed layer, metal layer) formed in the plurality of cavities 1103.
The metal layer may form a second plate 1062 of a capacitor. In some implementations, a plating process may be used to form the at least one metal layer.[00112] Stage 5 illustrates an insulation layer 1064 formed over the metal layer (e.g., second plate 1062). In some implementations, the insulation layer 1064 includes a material that has a k value of at least 7. However, different implementations may use a material with a different k value. In some implementations, the insulation layer 1064 has a thickness of about 50 nanometers (nm) or less.[00113] Stage 6 of FIG. 11B illustrates at least one metal layer (e.g., seed layer, metal layer) formed over the insulation layer 1064. The metal layer may form a first plate 1060 of a capacitor. In some implementations, a plating process may be used to form the at least one metal layer. A capacitor 1000 may be defined by the first plate 1060, the second plate 1062 and the insulation layer 1064. In some implementations, the MIM dielectrics and top metal layer (e.g., top plate) can be patterned in one step or stage.[00114] Stage 7 illustrates a dielectric layer 1112 and at least one metal layer 1119 formed over the capacitor 1000. One or more metal layers 1119 may form a pin to and/or from the capacitor 1000. In some implementations, a plating process may be used to form the at least one metal layer.[00115] Stage 8 illustrates at least one metal layer 1121 formed over the one or more metal layers 1119 and the dielectric layer 1112. One or more metal layers 1121 may form a pin to and/or from the capacitor 1000. In some implementations, a plating process may be used to form the at least one metal layer.[00116] Stage 9 illustrates a dielectric layer 1122 formed over the one or more metal layers 1121 and the dielectric layer 1112.[00117] Stage 10 illustrates the carrier 1100 decoupled (e.g., removed, ground away) from the dielectric layer 1102.
In some implementations, stage 10 illustrates the capacitor 1000 that includes a dielectric layer 1099, a first plate 1060, a second plate 1062, and an insulation layer 1064. A pin 1070 and a pin 1072 are coupled to the first plate 1060. A pin 1080 and a pin 1082 are coupled to the second plate 1062. A pin may include one or more interconnects (e.g., trace, via, pad). The first plate 1060, the second plate 1062, the insulation layer 1064, the pin 1070, the pin 1072, the pin 1080 and the pin 1082 are located within the dielectric layer 1099. The dielectric layer 1099 may include the dielectric layer 1102, the dielectric layer 1112, and/or the dielectric layer 1122.Exemplary Integrated Device Comprising a Capacitor That Includes Multiple Pins[00118] FIG. 12 illustrates an integrated device 1201 that includes a capacitor 1200 that includes multiple pins. The capacitor 1200 includes a fin design (e.g., capacitor fin or trench design). The trench (fin) increases the area of the capacitor 1200, which increases the capacitance or capacitance capability of the capacitor 1200. The capacitor 1200 may be a means for capacitance. The integrated device 1201 (e.g., integrated device package) includes a redistribution portion 1202, a die 1204, an encapsulation layer 1210, and a plurality of interconnects 1230. The plurality of interconnects 1230 may include pillars and/or solder interconnects (e.g., solder balls).[00119] The redistribution portion 1202 includes at least one dielectric layer (e.g., dielectric layer 1220, dielectric layer 1222, dielectric layer 1224, dielectric layer 1226) and the capacitor 1200. The capacitor 1200 is located at least partially in the at least one dielectric layer (e.g., dielectric layer 1220). A plurality of solder balls 1290 are coupled to the redistribution portion 1202.[00120] The capacitor 1200 includes a first plate 1260, a second plate 1262 and an insulation layer 1264. The capacitor 1200 includes a fin design.
The second plate 1262 includes a fin design or fin shape, such that the surface of the second plate 1262 traverses vertically and horizontally, in a repeated pattern, in the redistribution portion. The insulation layer 1264 substantially forms over a contour of the fin design of the second plate 1262. That is, the insulation layer 1264 is formed such that the insulation layer 1264 substantially follows the contour of the surface of the fin design of the second plate 1262.[00121] In some implementations, the fin design provides more surface area to form a capacitor, which allows for a capacitor with a higher capacitance, while minimizing the amount of space and/or real estate that the capacitor 1200 takes up in the integrated device 1201.[00122] FIG. 12 illustrates that a pin 1270 (e.g., first pin) is coupled to the first plate 1260 (e.g., coupled to a first surface of the first plate 1260). A pin 1280 is coupled to the second plate 1262. In some implementations, the pin 1270 is configured to provide one or more electrical paths for a power signal (e.g., Vdd). In some implementations, the pin 1280 is configured to provide one or more electrical paths for a ground reference signal (e.g., Vss). Different implementations may configure the pins to provide different electrical paths for different signals. It is noted that the capacitor fin design of FIG. 12 may be implemented in any of the capacitors described in the present disclosure. In some implementations, a through trench capacitor connection may be made to provide the chips and/or the die with multiple parallel access paths to the bottom plate electrode, and/or to provide the capacitor top plate with access to a power source within an integrated device and/or an integrated device package.Exemplary Sequence for Fabricating an Integrated Device Comprising a Capacitor [00123] In some implementations, providing / fabricating an integrated device that includes a capacitor with multiple pins includes several processes. FIG. 13 (which includes FIGS.
13A-13D) illustrates an exemplary sequence for providing / fabricating an integrated device (e.g., integrated device package) that includes a capacitor with multiple pins. In some implementations, the sequence of FIGS. 13A-13D may be used to fabricate the integrated device that includes a capacitor with multiple pins of FIG. 12 and/or other integrated devices described in the present disclosure. FIGS. 13A-13D will be described in the context of providing / fabricating the integrated device of FIG. 12. In particular, FIGS. 13A-13D will be described in the context of fabricating the integrated device 1201 of FIG. 12.[00124] It should be noted that the sequence of FIGS. 13A-13D may combine one or more stages in order to simplify and/or clarify the sequence for providing an integrated device. In some implementations, the order of the processes may be changed or modified.[00125] Stage 1 of FIG. 13A illustrates a carrier 1300, a dielectric layer 1220, and a metal layer 1304. In some implementations, the dielectric layer 1220 is formed over the carrier 1300 and the metal layer 1304 is formed in the dielectric layer 1220, in a manner similar to that described in FIG. 11A.[00126] Stage 2 illustrates a dielectric layer 1222 formed over the dielectric layer 1220 and portions of the metal layer 1304. Stage 2 also illustrates an insulation layer 1264 formed over the dielectric layer 1220, the dielectric layer 1222 and portions of the metal layer 1304. In some implementations, portions of the metal layer 1304 may define a plate (e.g., first plate) of a capacitor. In some implementations, the insulation layer 1264 includes a material that has a k value of at least 7. However, different implementations may use a material with a different k value. In some implementations, the insulation layer 1264 has a thickness of about 50 nanometers (nm) or less.[00127] Stage 3 illustrates a plurality of cavities 1305 formed through the insulation layer 1264 and the dielectric layer 1222.
In some implementations, a laser process and/or a lithography process may be used to form the plurality of cavities 1305.[00128] Stage 4 illustrates a metal layer formed over the insulation layer 1264. The metal layer may be a second plate 1262 of a capacitor. The metal layer may include one or more metal layers (e.g., seed layer, metal layer). A plating process may be used to form the metal layer that defines the second plate 1262. [00129] Stage 5 illustrates a dielectric layer 1224 formed over the second plate 1262 and the dielectric layer 1222.[00130] Stage 6 of FIG. 13B illustrates a plurality of cavities 1307 formed in the dielectric layer 1224. In some implementations, a laser process and/or a lithography process may be used to form the plurality of cavities 1307.[00131] Stage 7 illustrates a metal layer 1308 in the plurality of cavities 1307 and over the dielectric layer 1224. A plating process may be used to form the metal layer 1308. The metal layer 1308 may include one or more metal layers (e.g., seed layer, metal layer). The metal layer 1308 may form a plurality of interconnects (e.g., traces, pads, vias).[00132] Stage 8 illustrates a dielectric layer 1226 formed over the dielectric layer 1224 and the metal layer 1308. A plurality of cavities 1309 is formed in the dielectric layer 1226.[00133] Stage 9 illustrates the carrier 1300 decoupled (e.g., removed, ground away) from the dielectric layer 1220.[00134] Stage 10 of FIG. 13C illustrates a plurality of cavities 1311 formed in the dielectric layer 1220. In some implementations, a laser process and/or a lithography process may be used to form the plurality of cavities 1311.[00135] Stage 11 illustrates a metal layer in the plurality of cavities 1311 and over the dielectric layer 1220. The metal layer may form one or more pins 1270 coupled to a capacitor. A plating process may be used to form the metal layer that forms one or more pins 1270.
The metal layer may include one or more metal layers (e.g., seed layer, metal layer).[00136] Stage 12 illustrates a die 1204 coupled to the pins 1270 through a plurality of interconnects 1230. The plurality of interconnects 1230 may include pillars and/or solder interconnects (e.g., solder balls).[00137] Stage 13 of FIG. 13D illustrates an encapsulation layer 1210 formed over the die 1204 and the dielectric layer 1220. The encapsulation layer 1210 may at least partially encapsulate the die 1204 and the plurality of interconnects 1230.[00138] Stage 14 illustrates a plurality of solder balls 1290 coupled to the metal layer 1308. The metal layer 1308 may form a plurality of interconnects (e.g., traces, pads, vias). Exemplary Electronic Devices[00139] FIG. 14 illustrates various electronic devices that may be integrated with any of the aforementioned integrated devices, semiconductor devices, integrated circuits, dies, interposers, packages or package-on-package (PoP) devices. For example, a mobile phone device 1402, a laptop computer device 1404, a fixed location terminal device 1406, and a wearable device 1408 may include an integrated device 1400 as described herein. The integrated device 1400 may be, for example, any of the integrated circuits, dies, integrated devices, integrated device packages, integrated circuit devices, device packages, integrated circuit (IC) packages, or package-on-package devices described herein. The devices 1402, 1404, 1406, 1408 illustrated in FIG. 14 are merely exemplary.
Other electronic devices may also feature the integrated device 1400 including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices (e.g., watch, glasses), Internet of things (IoT) devices, servers, routers, electronic devices implemented in automotive vehicles (e.g., autonomous vehicles), or any other device that stores or retrieves data or computer instructions, or any combination thereof.[00140] One or more of the components, processes, features, and/or functions illustrated in FIGS. 2, 3, 4, 5, 6, 7, 8A-8D, 9, 10, 11A-11B, 12, 13A-13D and/or 14 may be rearranged and/or combined into a single component, process, feature or function or embodied in several components, processes, or functions. Additional elements, components, processes, and/or functions may also be added without departing from the disclosure. It should also be noted that FIGS. 2, 3, 4, 5, 6, 7, 8A-8D, 9, 10, 11A-11B, 12, 13A-13D and/or 14 and their corresponding description in the present disclosure are not limited to dies and/or ICs. In some implementations, FIGS. 2, 3, 4, 5, 6, 7, 8A-8D, 9, 10, 11A-11B, 12, 13A-13D and/or 14 and their corresponding description may be used to manufacture, create, provide, and/or produce integrated devices. In some implementations, a device may include a die, an integrated device, a die package, an integrated circuit (IC), a device package, an integrated circuit (IC) package, a wafer, a semiconductor device, a package on package (PoP) device, and/or an interposer.
[00141] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other.[00142] Also, it is noted that various disclosures contained herein may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed.[00143] The various features of the disclosure described herein can be implemented in different systems without departing from the disclosure. It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the disclosure. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses and many alternatives, modifications, and variations will be apparent to those skilled in the art.
Wireless communication devices and methods of forming and operating the same are provided. The present invention provides a wireless communication device including a substrate having a support surface, wireless communication circuitry upon the support surface of the substrate, at least one antenna electrically coupled with the wireless communication circuitry, a conductive layer configured to interact with the antenna, and an insulative layer intermediate the conductive layer and the antenna. A method of forming a wireless communication device includes providing a substrate having a support surface, forming an antenna upon the support surface, conductively coupling wireless communication circuitry with the antenna, forming an insulative layer over at least a portion of the antenna, and providing a conductive layer over at least a portion of the insulative layer.
1. A radio frequency identification device comprising:a substrate having a support surface;wireless communication circuitry upon the support surface of the substrate;at least one antenna electrically coupled with the wireless communication circuitry;a conductive layer configured to interact with the at least one antenna;an insulative layer intermediate the conductive layer and the at least one antenna; andwherein the at least one antenna and conductive layer include respective peripheral edges and the peripheral edges of the at least one antenna are provided within the confines of the peripheral edges of the conductive layer.2. The device according to claim 1 wherein the wireless communication circuitry is intermediate the at least one antenna and the conductive layer.3. The device according to claim 1 further comprising a housing configured to encapsulate the wireless communication circuitry.4. The device according to claim 3 wherein the housing includes the substrate.5. The device according to claim 1 wherein the insulative layer is over substantially the entire support surface and the conductive layer is over substantially the entire insulative layer.6. The device according to claim 1 further comprising a power source having plural terminals coupled with the wireless communication circuitry.7. The device according to claim 6 further comprising an electrical connection provided through the insulative layer and operable to conductively couple the conductive layer and one of the terminals of the power source.8. The device according to claim 1 wherein the insulative layer forms a first encapsulant layer operable to envelope the wireless communication circuitry, the at least one antenna and the support surface.9. The device according to claim 8 further comprising a second encapsulant layer over the conductive layer.10. The device according to claim 9 wherein the first and second encapsulant layers and the substrate form a substantially solid housing.11. 
The device according to claim 1 wherein the wireless communication circuitry comprises transponder circuitry configured to transmit an identification signal responsive to receiving a polling signal.12. The device according to claim 1 further comprising a processor operable to process signals received via the at least one antenna.13. The device according to claim 1 wherein the at least one antenna defines a plane, the conductive layer is substantially planar, and the conductive layer is substantially parallel to the plane defined by the at least one antenna.14. The device according to claim 1 wherein the at least one antenna is operable to receive wireless communication signals and the conductive layer is configured to shield some of the wireless communication signals from the at least one antenna and reflect others of the wireless communication signals toward the at least one antenna.15. The device according to claim 1 wherein the conductive layer comprises a ground plane configured to electromagnetically interact with the at least one antenna.16. A remote intelligent communication device comprising:a substrate having a support surface;a conductive trace formed upon the support surface and including at least one antenna configured to at least one of transmit and receive wireless communication signals;transponder circuitry bonded to the support surface and electrically coupled with the conductive trace;a first encapsulant layer encapsulating the transponder circuitry and the at least one antenna with at least a portion of the substrate;a conductive layer positioned upon the first encapsulant layer to interact with the at least one antenna; anda second encapsulant layer over the conductive layer and forming a substantially solid housing with the substrate and the first encapsulant layer.17. The remote intelligent communication device according to claim 16 further comprising a power source coupled with the transponder circuitry.18. 
The remote intelligent communication device according to claim 16 further comprising an electrical connection through the first encapsulant layer coupling the conductive layer and the conductive trace.19. The remote intelligent communication device according to claim 16 further comprising a processor operable to process at least some of the wireless communication signals.20. The remote intelligent communication device according to claim 16 wherein the transponder circuitry is configured to transmit an identification signal responsive to receiving a polling signal.21. The remote intelligent communication device according to claim 16 wherein the conductive layer is configured to shield some of the wireless communication signals from the at least one antenna and reflect others of the wireless communication signals toward the at least one antenna.22. The remote intelligent communication device according to claim 21 further comprising an electrical connection conductively coupling the conductive layer with at least one terminal of the power source.23. The remote intelligent communication device according to claim 16 wherein the transponder circuitry is intermediate the at least one antenna and the conductive layer.24. The remote intelligent communication device according to claim 16 wherein the first encapsulant layer contacts the transponder circuitry and the at least one antenna.25. A radio frequency identification device configured to receive wireless communication signals comprising:a lower housing;at least one antenna upon the lower housing;transponder circuitry provided upon the lower housing and coupled with the at least one antenna;a power source having plural terminals electrically and conductively bonded with the transponder circuitry;an insulative layer over the at least one antenna; anda conductive layer over the at least one antenna.26. 
The radio frequency identification device according to claim 25 further comprising an electrical connection conductively coupling the conductive layer with at least one terminal of the power source.27. The radio frequency identification device according to claim 25 further comprising an upper housing portion over the conductive layer.28. The radio frequency identification device according to claim 27 wherein the lower housing, the insulative layer, and the upper housing provide a substantially solid housing.29. The radio frequency identification device according to claim 25 further comprising a processor coupled with the transponder circuitry and operable to process at least some of the wireless communication signals.30. The radio frequency identification device according to claim 25 wherein the transponder circuitry is configured to transmit an identification signal responsive to receiving a polling signal.31. The radio frequency identification device according to claim 25 wherein the conductive layer is positioned to shield some of the wireless communication signals from the at least one antenna and reflect others of the wireless communication signals toward the at least one antenna.32. The radio frequency identification device according to claim 25 wherein the transponder circuitry is intermediate the at least one antenna and the conductive layer.33. A method of operating a wireless communication device comprising:coupling an antenna with wireless communication circuitry intermediate the antenna and a conductive layer;receiving wireless communication signals using the wireless communication circuitry;transmitting wireless communication signals using the wireless communication circuitry; andinteracting the conductive layer with the antenna.34. The method of operating a wireless communication device according to claim 33 further comprising processing some of the wireless communication signals.35. 
The method of operating a wireless communication device according to claim 33 wherein the interacting comprises:shielding some of the wireless communication signals from the antenna; andreflecting others of the wireless communication signals toward the substrate and the antenna thereover.36. The method of operating a wireless communication device according to claim 33 wherein the transmitting comprises transmitting an identification signal responsive to receiving a polling signal.37. The method of operating a wireless communication device according to claim 33 further comprising:supplying operational power to the wireless communication circuitry using a power source; andelectrically grounding the conductive layer using the power source.38. A method of operating a radio frequency identification device comprising:providing an antenna upon a substrate;coupling transponder circuitry with the antenna;using the antenna and transponder circuitry, at least one of transmitting and receiving wireless communication signals;providing a ground plane with the transponder circuitry intermediate the antenna and the ground plane; andinteracting the ground plane with the antenna comprising:shielding some of the wireless communication signals from the antenna; andreflecting others of the wireless communication signals toward the substrate and the antenna thereover.39. The method of operating a radio frequency identification device according to claim 38 further comprising:supplying operational power to the wireless communication circuitry using a power source; andelectrically grounding the conductive layer using the power source.40. The method of operating a radio frequency identification device according to claim 38 further comprising processing at least some of the wireless communication signals.41. 
A method of forming a wireless communication device comprising:providing a substrate having a support surface;forming an antenna upon the support surface;conductively coupling wireless communication circuitry with the antenna;forming an insulative layer over at least a portion of the antenna; andproviding a conductive layer over at least a portion of the insulative layer.42. The method of forming a wireless communication device according to claim 41 further comprising electrically grounding the conductive layer.43. The method of forming a wireless communication device according to claim 41 further comprising providing electrical conductivity through the insulative layer.44. The method of forming a wireless communication device according to claim 41 further comprising:providing a power source; andcoupling the conductive layer with a terminal of the power source.45. The method of forming a wireless communication device according to claim 44 wherein the coupling comprises providing a conductive post.46. The method of forming a wireless communication device according to claim 44 wherein the coupling comprises dispensing conductive material.47. The method of forming a wireless communication device according to claim 41 wherein the providing the conductive layer comprises providing the conductive layer over substantially the entire support surface.48. The method of forming a wireless communication device according to claim 41 wherein the providing of the conductive layer includes positioning the conductive layer to interact with the antenna.49. The method of forming a wireless communication device according to claim 41 further comprising removing a portion of the insulative layer providing a substantially planar insulative layer having a predetermined thickness.50. 
The method of forming a wireless communication device according to claim 41 wherein the forming of the insulative layer comprises encapsulating the antenna, the wireless communication circuitry and at least a portion of the support surface.51. The method of forming a wireless communication device according to claim 41 further comprising encapsulating at least a portion of the conductive layer with an encapsulant.52. The method of forming a wireless communication device according to claim 51 wherein the encapsulant, the substrate, and insulative layer form a substantially solid device.53. The method of forming a wireless communication device according to claim 41 wherein the wireless communication device comprises a remote intelligent communication device.54. The method of forming a wireless communication device according to claim 41 wherein the wireless communication device comprises a radio frequency identification device.55. A method of forming a radio frequency identification device comprising:providing a substrate having a support surface;printing an antenna over the support surface configured to at least one of transmit and receive wireless communication signals;conductively bonding transponder circuitry with the antenna;first encapsulating the antenna and the transponder circuitry forming a first encapsulant layer;providing a conductive layer upon at least a portion of the encapsulant layer; andsecond encapsulating the conductive layer forming a second encapsulant layer.56. The method of forming a radio frequency identification device according to claim 55 further comprising removing some of the first encapsulant layer providing a substantially planar first encapsulant layer having a predetermined thickness.57. The method of forming a radio frequency identification device according to claim 55 further comprising providing the conductive layer over the antenna.58. 
The method of forming a radio frequency identification device according to claim 55 further comprising coupling a processor with the transponder circuitry.59. The method of forming a radio frequency identification device according to claim 55 further comprising electrically grounding the conductive layer.60. The method of forming a radio frequency identification device according to claim 55 further comprising conductively bonding a power source with the transponder circuitry.61. The method of forming a radio frequency identification device according to claim 55 wherein the first encapsulant layer, the second encapsulant layer, and the substrate form a substantially solid device.62. A method of forming a radio frequency identification device comprising:providing a substrate having a support surface;printing a conductive trace comprising a plurality of terminal connections and first and second antennas upon the support surface, the first antenna being configured to transmit wireless signals and the second antenna being configured to receive wireless signals;conductively bonding a battery with the terminal connections;conductively bonding transponder circuitry with the terminal connections and the first and second antennas;coupling a processor with the transponder circuitry;providing an electrical connection comprising one of dispensing conductive material and providing a conductive post;first encapsulating the first and second antennas, the battery, the transponder circuitry, the processor, the electrical connection, and at least a portion of the support surface, the first encapsulating forming a first encapsulant layer;removing some of the first encapsulant layer providing a substantially planar first encapsulant layer having a predetermined thickness;providing a ground plane upon the first encapsulant layer and over substantially the entire support surface, the providing of the ground plane including positioning the ground plane to interact with the 
antennas;electrically coupling the ground plane with one of the terminal connections using the electrical connection; andsecond encapsulating the ground plane forming a second encapsulant layer and a substantially solid device with the substrate and the first encapsulant layer.63. A wireless communication device comprising:a housing;an antenna coupled with the housing and configured to at least one of output wireless signals and receive wireless signals;wireless communication circuitry coupled with the housing and the antenna; anda ground plane coupled with the housing and configured to enhance wireless communications via the antenna, the wireless communication circuitry being positioned intermediate the ground plane and the antenna.64. The device according to claim 63 wherein the wireless communication circuitry comprises radio frequency identification device circuitry.65. The device according to claim 63 wherein the antenna, the ground plane, the wireless communication circuitry, and the housing provide a substantially void-free wireless communication device.66. The device according to claim 63 wherein the housing comprises a substrate and at least one encapsulant layer.67. The device according to claim 66 wherein the at least one encapsulant layer contacts at least respective portions of the antenna, the wireless communication circuitry and the ground plane.68. The device according to claim 63 wherein the housing comprises an encapsulant layer intermediate the ground plane and a substrate.69. A wireless communication device comprising:a substrate having a support surface;an antenna elevationally over the support surface and configured to at least one of output wireless signals and receive wireless signals;wireless communication circuitry elevationally over the antenna and coupled with the antenna; anda ground plane elevationally over the wireless communication circuitry and configured to interact with the antenna.70. 
The device according to claim 69 wherein the wireless communication circuitry comprises radio frequency identification device circuitry.71. The device according to claim 69 further comprising at least one encapsulant layer.72. The device according to claim 71 wherein the antenna, the ground plane, the wireless communication circuitry, and the at least one encapsulant layer provide a substantially void-free wireless communication device.73. The device according to claim 71 wherein the at least one encapsulant layer contacts at least respective portions of the antenna, the wireless communication circuitry and the ground plane.74. A wireless communication device comprising:an antenna configured to at least one of output wireless signals and receive wireless signals;a ground plane configured to enhance wireless communications via the antenna;wireless communication circuitry coupled with the antenna;a housing configured to encapsulate and contact at least respective portions of the antenna, the wireless communication circuitry and the ground plane; andwherein the antenna, the ground plane, the wireless communication circuitry, and the housing provide a substantially void-free wireless communication device.75. The device according to claim 74 wherein the wireless communication circuitry comprises radio frequency identification device circuitry.76. The device according to claim 74 wherein the housing comprises a substrate and at least one encapsulant layer.77. The device according to claim 74 wherein the housing comprises an encapsulant layer intermediate the ground plane and a substrate.78. The device according to claim 74 wherein the wireless communication circuitry is positioned intermediate the antenna and the ground plane.79. 
A method of forming a wireless communication device comprising:providing an antenna configured to at least one of output wireless signals and receive wireless signals;positioning a ground plane to enhance wireless communications via the antenna;providing wireless communication circuitry intermediate the antenna and the ground plane; andcoupling the wireless communication circuitry with the antenna.80. The method according to claim 79 wherein the providing the wireless communication circuitry comprises providing radio frequency identification device circuitry.81. The method according to claim 79 further comprising encapsulating the antenna, the ground plane and the wireless communication circuitry.82. The method according to claim 81 wherein the encapsulating comprises providing a substantially void-free wireless communication device.83. The method according to claim 81 wherein the encapsulating comprises contacting at least respective portions of the antenna, the wireless communication circuitry and the ground plane with at least one encapsulant layer.84. A method of forming a wireless communication device comprising:providing a substrate having a support surface;providing an antenna configured to at least one of output wireless signals and receive wireless signals elevationally over the support surface;positioning wireless communication circuitry elevationally over the antenna;coupling the wireless communication circuitry with the antenna; andpositioning a ground plane elevationally over the wireless communication circuitry to enhance wireless communications via the antenna.85. The method according to claim 84 wherein the providing the wireless communication circuitry comprises providing radio frequency identification device circuitry.86. The method according to claim 84 further comprising encapsulating the antenna, the ground plane and the wireless communication circuitry.87. 
The method according to claim 86 wherein the encapsulating comprises providing a substantially void-free wireless communication device.88. The method according to claim 86 wherein the encapsulating comprises contacting at least respective portions of the antenna, the wireless communication circuitry and the ground plane with at least one encapsulant layer.89. A method of forming a wireless communication device comprising:providing an antenna configured to at least one of output wireless signals and receive wireless signals;coupling wireless communication circuitry with the antenna;providing a ground plane to enhance wireless communications via the antenna;providing a housing encapsulating and contacting at least respective portions of the antenna, the wireless communication circuitry and the ground plane; andwherein the providing the ground plane provides the wireless communication circuitry intermediate the antenna and the ground plane.90. The method according to claim 89 wherein the coupling wireless communication circuitry comprises coupling radio frequency identification device circuitry.91. The method according to claim 89 wherein the providing the housing comprises providing a substantially void-free wireless communication device.92. A radio frequency identification device comprising:a substrate having a support surface;wireless communication circuitry upon the support surface of the substrate;at least one antenna electrically coupled with the wireless communication circuitry;a conductive layer configured to interact with the at least one antenna;an insulative layer intermediate the conductive layer and the at least one antenna; andwherein the wireless communication circuitry is intermediate the at least one antenna and the conductive layer.93. 
A radio frequency identification device comprising:a substrate having a support surface;wireless communication circuitry upon the support surface of the substrate;at least one antenna electrically coupled with the wireless communication circuitry;a conductive layer configured to interact with the at least one antenna;an insulative layer intermediate the conductive layer and the at least one antenna; andwherein the insulative layer is over substantially the entire support surface and the conductive layer is over substantially the entire insulative layer.94. A radio frequency identification device comprising:a substrate having a support surface;wireless communication circuitry upon the support surface of the substrate;at least one antenna electrically coupled with the wireless communication circuitry;a conductive layer configured to interact with the at least one antenna;an insulative layer intermediate the conductive layer and the at least one antenna;a power source having plural terminals coupled with the wireless communication circuitry; andan electrical connection provided through the insulative layer and operable to conductively couple the conductive layer and one of the terminals of the power source.95. A wireless communication device comprising:an antenna configured to at least one of output wireless signals and receive wireless signals;a ground plane configured to enhance wireless communications via the antenna;wireless communication circuitry coupled with the antenna;a housing configured to encapsulate and contact at least respective portions of the antenna, the wireless communication circuitry and the ground plane; andwherein the housing comprises an encapsulant layer intermediate the ground plane and a substrate.96. 
A wireless communication device comprising:an antenna configured to at least one of output wireless signals and receive wireless signals;a ground plane configured to enhance wireless communications via the antenna;wireless communication circuitry coupled with the antenna;a housing configured to encapsulate and contact at least respective portions of the antenna, the wireless communication circuitry and the ground plane; andwherein the wireless communication circuitry is positioned intermediate the antenna and the ground plane.97. A method of forming a wireless communication device comprising:providing an antenna configured to at least one of output wireless signals and receive wireless signals;coupling wireless communication circuitry with the antenna;providing a ground plane to enhance wireless communications via the antenna;providing a housing encapsulating and contacting at least respective portions of the antenna, the wireless communication circuitry and the ground plane; andwherein the providing the housing comprises providing a substantially void-free wireless communication device.
RELATED PATENT DATA

This patent resulted from a continuation application of and claims priority to prior application Ser. No. 08/914,305, filed on Aug. 18, 1997, entitled "Wireless Communication Devices and Methods Of Forming And Operating The Same," now abandoned, the disclosure of which is incorporated herein by reference.

TECHNICAL FIELD

The present invention relates to wireless communication devices and methods of forming and operating the same.

BACKGROUND OF THE INVENTION

Electronic identification systems typically comprise two devices which are configured to communicate with one another. Preferred configurations of the electronic identification systems are operable to provide such communications via a wireless medium.

One such configuration is described in U.S. patent application Ser. No. 08/705,043, filed Aug. 29, 1996, assigned to the assignee of the present application and incorporated herein by reference. This application discloses the use of a radio frequency (RF) communication system including communication devices. The communication devices include an interrogator and a transponder such as a tag or card.

The communication system can be used in various identification and other applications. The interrogator is configured to output a polling signal which may comprise a radio frequency signal including a predefined code. The transponders of such a communication system are operable to transmit an identification signal responsive to receiving an appropriate command or polling signal. More specifically, the appropriate transponders are configured to recognize the predefined code. The transponders receiving the code subsequently output a particular identification signal which is associated with the transmitting transponder.
Following transmission of the polling signal, the interrogator is configured to receive the identification signals, enabling detection of the presence of corresponding transponders.

Such communication systems are useable in identification applications such as inventory or other object monitoring. For example, a remote identification device is attached to an object of interest. Responsive to receiving the appropriate polling signal, the identification device is equipped to output an identification signal. Generating the identification signal identifies the presence or location of the identification device and the article or object attached thereto.

Such identification systems configured to communicate via radio frequency signals are susceptible to incident RF radiation. Such RF radiation can degrade the performance of the identification system. For example, application of transponders to objects comprising metal may result in decreased or no performance depending on the spacing of the transponder antenna to the nearest metal on the object.

Therefore, there exists a need to reduce the effects of incident RF radiation upon the operation of communication devices of an electronic identification system.

SUMMARY OF THE INVENTION

According to one embodiment of the invention, a wireless communication device is provided which includes a substrate, communication circuitry, an antenna, and a conductive layer configured to interact with the antenna. Some embodiments of the wireless communication devices include remote intelligent communication devices and radio frequency identification devices.

According to additional aspects of the present invention, methods of forming a wireless communication device and a radio frequency identification device are provided. The present invention also provides methods of operating a wireless communication device and a radio frequency identification device.

The conductive layer is configured to act as a ground plane in one embodiment of the invention.
The ground plane shields some signals from the antenna while reflecting other signals toward the antenna. The ground plane also operates to reflect some of the signals transmitted by the device. The conductive layer is preferably coupled with a terminal of a power source within the communication device. Such coupling provides the conductive layer at a reference voltage potential.

The communication circuitry comprises transponder circuitry in accordance with other aspects of the present invention. The transponder circuitry is configured to output an identification signal responsive to receiving a polling signal from an interrogator. Certain disclosed embodiments provide a processor within the communication devices configured to process the received polling signal. The processor and communication circuitry may be implemented in an integrated circuit.

The wireless communication device is provided within a substantially solid, void-free housing in accordance with one aspect of the present invention. Such a housing comprises plural encapsulant layers and a substrate.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the invention are described below with reference to the following accompanying drawings.

FIG. 1 is a block diagram of a wireless communication system including an interrogator and a wireless communication device embodying the invention.

FIG. 2 is a front elevational view of the wireless communication device.

FIG. 3 is a front elevational view of the wireless communication device at an intermediate processing step.

FIG. 4 is a cross-sectional view, taken along line 4-4, of the wireless communication device shown in FIG. 3 at an intermediate processing step.

FIG. 5 is a cross-sectional view of the wireless communication device at a processing step subsequent to FIG. 4.

FIG. 6 is a cross-sectional view of the wireless communication device at a processing step subsequent to FIG. 5.

FIG. 7 is a cross-sectional view, similar to FIG. 4, of an alternative intermediate processing step.

FIG. 8 is a cross-sectional view of a first embodiment of the wireless communication device.

FIG. 9 is a cross-sectional view of another embodiment of the wireless communication device.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8).

This description of the present invention discloses embodiments of various wireless communication devices. The wireless communication devices are fabricated in card configurations (which include tags or stamps) according to first and second aspects of the present invention. The embodiments are illustrative, and other configurations of a wireless communication device according to the present invention are possible. Certain embodiments of the wireless communication device according to the invention comprise radio frequency identification devices (RFID) and remote intelligent communication devices (RIC).

Referring to FIG. 1, a remote intelligent communication device or wireless communication device 10 comprises part of a communication system 12. The remote intelligent communication device is capable of functions other than the identifying function of a radio frequency identification device. A preferred embodiment of the remote intelligent communication device includes a processor.

The communication system 12 shown in FIG. 1 further includes an interrogator unit 14. An exemplary interrogator 14 is described in detail in U.S. patent application Ser. No. 08/806,158, filed Feb. 25, 1997, assigned to the assignee of the present application and incorporated herein by reference. The wireless communication device 10 communicates via wireless electronic signals, such as radio frequency (RF) signals, with the interrogator unit 14.
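The interrogator/transponder exchange described above — the interrogator broadcasts a polling signal containing a predefined code, and only transponders recognizing that code reply with their associated identification signal — can be sketched in software. This is a minimal illustrative model only; the class and function names (`Transponder`, `interrogate`) and the string-based codes are our own assumptions, not part of the disclosure.

```python
# Illustrative sketch (not from the patent) of the polling exchange:
# an interrogator broadcasts a predefined code, and only transponders
# configured to recognize that code transmit an identification signal.

from dataclasses import dataclass


@dataclass
class Transponder:
    device_id: str        # identity associated with this tag
    recognized_code: str  # predefined code this tag responds to

    def respond(self, polling_code: str):
        # Transmit an identification signal only if the polling code matches.
        if polling_code == self.recognized_code:
            return f"ID:{self.device_id}"
        return None


def interrogate(transponders, polling_code):
    # The interrogator collects identification signals from responding tags,
    # enabling detection of the presence of corresponding transponders.
    return [r for t in transponders
            if (r := t.respond(polling_code)) is not None]


tags = [Transponder("TAG-1", "CODE-A"), Transponder("TAG-2", "CODE-B")]
print(interrogate(tags, "CODE-A"))  # only TAG-1 replies: ['ID:TAG-1']
```

In the physical system this exchange occurs over RF via the devices' antennas; the sketch captures only the selective-response logic, not the radio link.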
Radio frequency signals including microwave signals are utilized for communications in a preferred embodiment of communication system 12. The communication system 12 includes an antenna 16 coupled to the interrogator unit 14.

Referring to FIG. 2, the wireless communication device 10 includes an insulative substrate or layer of supportive material 18. The term "substrate" as used herein refers to any supporting or supportive structure, including but not limited to, a supportive single layer of material or multiple layer constructions. Example materials for the substrate 18 comprise polyester, polyethylene or polyimide film having a thickness of 4-6 mils (thousandths of an inch).

Substrate 18 provides a first or lower portion of a housing for the wireless communication device 10 and defines an outer periphery 21 of the device 10. Substrate 18 includes a plurality of peripheral edges 17.

Referring to FIG. 3, at least one ink layer 19 is applied to substrate 18 in preferred embodiments of the invention. Ink layer 19 enhances the appearance of the device 10 and conceals internal components and circuitry provided therein. A portion of ink layer 19 has been peeled away in FIG. 3 to reveal a portion of an upper surface 25 of substrate 18. In other embodiments, plural ink layers are provided upon upper surface 25.

A support surface 20 is provided to support components and circuitry formed in later processing steps upon substrate 18. In embodiments wherein at least one ink layer 19 is provided, support surface 20 comprises an upper surface thereof as shown in FIG. 3. Alternatively, upper surface 25 of substrate 18 operates as the support surface if ink is not applied to substrate 18.

A patterned conductive trace 30 is formed or applied over the substrate 18 and atop the support surface 20. Conductive trace 30 is formed upon ink layer 19, if present, or upon substrate 18 if no ink layer is provided. A preferred conductive trace 30 comprises printed thick film (PTF).
The printed thick film comprises silver and polyester dissolved into a solvent. One manner of forming or applying the conductive trace 30 is to screen or stencil print the ink on the support surface 20 through conventional screen printing techniques. The printed thick film is preferably heat cured to flash off the solvent and UV cured to react UV materials present in the printed thick film.

The conductive trace 30 forms desired electrical connections with and between electronic components which will be described below. In one embodiment, substrate 18 forms a portion of a larger roll of polyester film material used to manufacture multiple devices 10. In such an embodiment, the printing of conductive trace 30 can take place simultaneously for a number of the to-be-formed wireless communication devices.

The illustrated conductive trace 30 includes an electrical connection 28, a first connection terminal 53 (shown in phantom in FIG. 3) and a second connection terminal 58. Conductive trace 30 additionally defines transmit and receive antennas 32, 34 in one embodiment of the invention. Antennas 32, 34 are suitable for respectively transmitting and receiving wireless signals or RF energy. Transmit antenna 32 constitutes a loop antenna having outer peripheral edges 37. Receive antenna 34 constitutes two elongated portions individually having horizontal peripheral edges 38a, which extend in opposing directions, and substantially parallel vertical peripheral edges 38b. Other antenna constructions are possible. In particular, both transmit and receive operations are implemented with a single antenna in alternative embodiments of the present invention. Both antennas 32, 34 preferably extend or lie within the confines of peripheral edges 17 and outer periphery 21 and define a plane (shown in FIG. 4).

One embodiment of a wireless communication device 10 includes a power source 52, integrated circuit 54, and capacitor 55.
Power source 52, capacitor 55, and integrated circuit 54 are provided and mounted on support surface 20 and supported by substrate 18. The depicted power source 52 is disposed within transmit antenna 32 of wireless communication device 10. Capacitor 55 is electrically coupled with loop antenna 32 and integrated circuit 54 in the illustrated embodiment.

Power source 52 provides operational power to the wireless communication device 10 and selected components therein, including integrated circuit 54. In the illustrated embodiment, power source 52 comprises a battery. In particular, power source 52 is preferably a thin profile battery which includes first and second terminals of opposite polarity. More particularly, the battery has a lid or negative (i.e., ground) terminal or electrode, and a can or positive (i.e., power) terminal or electrode.

Conductive epoxy is applied over desired areas of support surface 20 using conventional printing techniques, such as stencil or screen printing, to assist in component attachment described just below. Alternately, solder or another conductive material is employed instead of conductive epoxy. The power source 52 is provided and mounted on support surface 20 using the conductive epoxy. Integrated circuit 54 and capacitor 55 are also provided and mounted or conductively bonded on the support surface 20 using the conductive epoxy. Integrated circuit 54 can be mounted either before or after the power source 52 is mounted on the support surface 20.

Integrated circuit 54 includes suitable circuitry for providing wireless communications. For example, in one embodiment, integrated circuit 54 includes a processor 62, memory 63, and wireless communication circuitry or transponder circuitry 64 (components 62, 63, 64 are shown in phantom in FIG. 3) for providing wireless communications with interrogator unit 14. An exemplary and preferred integrated circuit 54 is described in U.S. patent application Ser. No.
08/705,043, incorporated by reference above.

One embodiment of transponder circuitry 64 includes a transmitter and a receiver respectively operable to transmit and receive wireless electronic signals. In particular, transponder circuitry 64 is operable to transmit an identification signal responsive to receiving a polling signal from interrogator 14. In the described embodiment, processor 62 is configured to process the received polling signal to detect a predefined code within the polling signal. Responsive to the detection of an appropriate polling signal, processor 62 instructs transponder circuitry 64 to output an identification signal. The identification signal contains an appropriate code to identify the particular device 10 transmitting the identification signal in certain embodiments. The identification and polling signals are respectively transmitted and received via antennas 32, 34 of the device 10.

First and second connection terminals 53, 58 are coupled to the integrated circuit 54 by conductive epoxy in accordance with a preferred embodiment of the invention. The conductive epoxy also electrically connects the first terminal of the power source 52 to the first connection terminal 53. In the illustrated embodiment, power source 52 is placed lid down such that the conductive epoxy makes electrical contact between the negative terminal of the power source 52 and the first connection terminal 53.

Power source 52 has a perimetral edge 56, defining the second power source terminal, which is provided adjacent second connection terminal 58.
In the illustrated embodiment, perimetral edge 56 of the power source 52 is cylindrical, and the connection terminal 58 is arcuate and has a radius slightly greater than the radius of the power source 52, so that connection terminal 58 is closely spaced apart from the edge 56 of power source 52.

Subsequently, conductive epoxy is dispensed relative to perimetral edge 56 and electrically connects perimetral edge 56 with connection terminal 58. In the illustrated embodiment, perimetral edge 56 defines the can of the power source 52. The conductive epoxy connects the positive terminal of the power source 52 to connection terminal 58. The conductive epoxy is then cured.

Referring to FIG. 4-FIG. 6, a method of forming an embodiment of wireless communication device 10 is shown. In the illustrated method, an electrical connection, such as a conductive post or pin 26, is conductively bonded to electrical connection 28 using a pick and place surface mount machine 70 (shown in FIG. 4). Preferably, the integrated circuit 54 and the capacitor 55 are also placed using the surface mount machine 70. Conductive pin 26 is utilized to provide electrical conductivity between electrical connection 28, conductive trace 30, and other conductive layers (e.g., a ground plane layer described below) of the wireless communication device 10. Other methods of forming connection 26 may be utilized.

Referring to FIG. 5, an encapsulant, such as encapsulating epoxy material, is subsequently formed following component attachment to provide a first encapsulant layer or insulative layer 60. In one embodiment, insulative layer 60 is provided over the entire support surface 20. Insulative layer 60 encapsulates or envelops the antennas 32, 34, integrated circuit 54, power source 52, conductive circuitry 30, capacitor 55, and at least a portion of the support surface 20 of substrate 18. Insulative layer 60 defines an intermediate portion of a housing for the wireless communication device 10.
Insulative layer 60 operates to insulate the components (i.e., antennas 32, 34, integrated circuit 54, power source 52, conductive circuitry 30 and capacitor 55) from other conductive portions of the wireless communication device 10 formed in subsequent processing steps described below.

An exemplary encapsulant is a flowable encapsulant. The flowable encapsulant is applied over substrate 18 and subsequently cured following the appropriate covering of the desired components. In the illustrated embodiment, such encapsulant constitutes a two-part epoxy including fillers, such as silicon and calcium carbonate. The preferred two-part epoxy is sufficient to provide a desired degree of flexible rigidity. Such encapsulation of wireless communication device 10 is described in U.S. patent application Ser. No. 08/800,037, filed Feb. 13, 1997, assigned to the assignee of the present application, and incorporated herein by reference.

Other encapsulant materials of insulative layer 60 can be used in accordance with the present invention. In addition, the thickness of insulative layer 60 can be varied. Using alternative encapsulant materials and adjusting the dimensions of insulative layer 60 alters the dielectric characteristics (i.e., dielectric constant) of layer 60.

Referring to FIG. 6, wireless communication device 10 is illustrated at an intermediate processing step. A portion of insulative layer 60 is preferably removed. The removed portion is represented by the dimension "h" in FIG. 5. Such removal provides a substantially planar dielectric surface 65 of insulative layer 60. Dielectric surface 65 is substantially parallel to the plane 33 defined by antennas 32, 34. The portion is removed by sanding insulative layer 60 to provide planar surface 65 according to one processing method of the present invention. Insulative layer 60 is preferably sanded to a predetermined thickness, such as 90 mils.
In other embodiments, the entire insulative layer 60 is utilized and removal of the upper portion of layer 60 is not implemented.

In embodiments where one of connections 26, 26a is provided (alternate connection 26a is shown in FIGS. 7 and 9), sanding or partially removing insulative layer 60 exposes a top portion of the connection 26, 26a permitting electrical coupling therewith adjacent dielectric surface 65.

The thickness of insulative layer 60 defines the distance between a conductive layer 22 (described below) and antennas 32, 34, provided adjacent opposing sides of layer 60. The thickness of insulative layer 60 is chosen as a function of the dielectric constant of the encapsulant and the desired frequency for communication.

After provision of insulative layer 60, a conductive layer 22 is formed or applied over the dielectric surface 65 thereof. Conductive layer 22 includes peripheral edges 61. Preferably, conductive layer 22 covers or is provided over the entire insulative dielectric surface 65. Alternatively, conductive layer 22 is patterned to cover predefined portions of dielectric surface 65. In embodiments wherein conductive layer 22 is patterned, the layer 22 is preferably formed at least over antennas 32, 34. More specifically, the respective peripheral edges 37, 38 of antennas 32, 34 are provided within the confines of the peripheral edges 61 of conductive layer 22.

Conductive layer 22 formed upon dielectric surface 65 is preferably substantially planar. In addition, conductive layer 22 is substantially parallel to the plane 33 defined by antennas 32, 34, as well as dielectric surface 65.

In one embodiment, conductive layer 22 comprises a stencil printed polymer thick film (PTF). The polymer thick film is typically 70-73% overfilled. In an alternative embodiment, conductive layer 22 is a conductive epoxy comprising approximately 70% metal.
Further alternatively, conductive layer 22 comprises copper or gold foil laminated upon the dielectric surface 65 of insulative layer 60. In still another embodiment of the present invention, metal such as gold is sputtered upon dielectric surface 65 of insulative layer 60 to form conductive layer 22.

Conductive layer 22 can be configured to operate as a ground plane and interact with antennas 32, 34. In particular, conductive layer 22 can be used to form a radio frequency (RF) shield. Inasmuch as the preferred embodiment of communication device 10 communicates via wireless signals, it is desired to reduce or minimize interference, such as incident RF radiation. Conductive layer 22 interacts with antennas 32, 34 to improve the RF operation of wireless communication device 10.

In one embodiment, conductive layer 22 operates to shield some wireless electronic signals from the receive antenna 34 and reflect other wireless electronic signals toward the antenna 34. Conductive layer 22 includes a first side, which faces away from antennas 32, 34 (opposite surface 65) and a second side, which faces antennas 32, 34 (adjacent surface 65). Electronic signals received on the first side of the conductive layer 22 are shielded or blocked by layer 22 from reaching the antennas 32, 34. Electronic signals received on the second side of the conductive layer 22, which pass by or around antennas 32, 34, are reflected by layer 22.

Some of the wireless communication signals transmitted by communications device 10 via antenna 32 are reflected by conductive layer 22. In particular, wireless signals transmitted from antenna 32 which strike the second side of conductive layer 22 are reflected thereby.

Such shielding and reflecting by conductive layer 22 provides a highly directional wireless communication device 10.
The provision of conductive layer 22 within wireless communication device 10 results in robust wireless communications with interrogator 14 and provides increased reliability.

The conductive layer 22 is electrically connected with power source 52 in the illustrated embodiments of the present invention. Conductive layer 22 can be electrically coupled with either the positive or negative terminal of power source 52. Coupling of conductive layer 22 with one of the terminals of power source 52 provides layer 22 at the voltage potential of the respective terminal.

In one embodiment, conductive layer 22 is electrically coupled with the ground (i.e., negative) terminal of power source 52 through the integrated circuit 54. Referring specifically to FIG. 6, integrated circuit 54 includes a first pin 35 internally connected with the ground terminal of power source 52 (not shown). First pin 35 is additionally conductively bonded with electrical connection 28 of conductive trace 30. Electrical connection 28 is conductively coupled with connection pin 26. Pin 26 is connected with conductive layer 22 and provides electrical coupling of conductive layer 22 and power source 52 through insulative layer 60.

Coupling of one of the power terminals of power source 52 and ground plane/conductive layer 22 provides layer 22 at a common reference voltage. In particular, electrically connecting ground plane/conductive layer 22 and the ground terminal of power source 52 via electrical connections 26, 28 electrically grounds layer 22. Alternatively, ground plane/conductive layer 22 is coupled with the power electrode of power source 52 via electrical connections 26, 28 in other embodiments of the invention.
Coupling ground plane/conductive layer 22 with the power electrode of power source 52 provides layer 22 at the positive potential of power source 52.

Pin 26 is coupled directly with one of the terminals of power source 52 in other embodiments of the invention (not shown), thereby bypassing integrated circuit 54. Alternatively, no electrical connection is made to ground plane/conductive layer 22. In such an embodiment, ground plane/conductive layer 22 is insulated and the voltage of layer 22 is permitted to float.

Referring to FIG. 7, an alternative electrical connection 26a is shown. Electrical connection 26a also provides conductivity through insulative layer 60. Connection 26a electrically couples conductive layer 22 and electrical connection 28. In this embodiment, electrical connection 26a comprises conductive epoxy. A dispenser 72 is utilized to dispense the conductive epoxy onto connection 28 of conductive trace 30 in the depicted embodiment.

Connections 26, 26a may be formed at positions other than those illustrated in the depicted embodiments of device 10. In particular, connections 26, 26a may be provided at any appropriate location to provide electrical coupling of a terminal of power source 52 and conductive layer 22.

Referring to FIG. 8 and FIG. 9, completed wireless communication devices 10 are shown. Following the provision of conductive layer 22 and one, if any, of electrical connections 26, 26a, an upper housing portion 66 is preferably formed over the conductive layer 22 of the respective illustrated devices 10. In one embodiment, upper housing portion 66 comprises a second encapsulant layer which covers and/or encapsulates the conductive layer 22 of the respective devices 10. In the depicted embodiment, first and second encapsulant layers 60, 66 envelop the entire conductive layer 22.
Such is desired to insulate the conductive layer 22.

Second encapsulant layer 66 may comprise the two-part encapsulant utilized to form insulative first encapsulant layer 60. Following the provision of second encapsulant layer 66 upon conductive layer 22, the encapsulant is subsequently cured forming a substantially void-free housing 27 or solid mass with substrate 18 and first encapsulant layer 60. In one embodiment, housing 27 of wireless communication device 10 has a width of about 3.375 inches, a height of about 2.125 inches, and a thickness less than or equal to about 0.0625 inches.

In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents.
A molding compound cap structure is disclosed. A process of forming the molding compound cap structure is also disclosed. A microelectronic package that uses the molding compound cap structure is also disclosed. A method of assembling a microelectronic package is also disclosed. A computing system that includes the molding compound cap structure is also disclosed. The molding compound cap includes a configuration that exposes a portion of the microelectronic device.
CLAIMS

What is claimed is:

1. An article comprising: a first die disposed upon a mounting substrate, wherein the first die includes a first die active first surface and a first die backside second surface; and a molding compound cap abutting the first die and including a third surface that originates substantially above the first die active first surface and below the first die backside second surface.

2. The article according to claim 1, wherein the third surface that originates substantially above the first die active first surface includes: a meniscus that originates substantially above the first die active first surface; and a substantially planar surface that is selected from parallel planar to the first die active first surface, and located above the first die active first surface at a height that is a fraction of the die height.

3. The article according to claim 1, wherein the third surface that originates substantially above the first die active first surface includes: a meniscus that originates substantially above the first die active first surface, and wherein the meniscus is selected from a capillary action meniscus and an imposed meniscus.

4. The article according to claim 1, wherein the third surface that originates substantially above the first die active first surface includes: a meniscus that originates substantially above the first die active first surface; and a substantially planar surface that is coplanar to the first die active first surface.

5. The article according to claim 1, further including a second die disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, and wherein the molding compound cap abuts the second die.

6.
The article according to claim 1, further including a second die disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, wherein the molding compound cap abuts the second die, and wherein the molding compound exposes an upper surface of the mounting substrate between the first die and the second die.

7. The article according to claim 1, further including a second die disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, wherein the molding compound cap abuts the second die, and wherein the molding compound includes a curvilinear profile between the first die and the second die.

8. The article according to claim 1, further including: a second die disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, wherein the molding compound cap abuts the second die; and a last die disposed upon the mounting substrate, wherein the last die includes a last die active first surface and a last die backside second surface, wherein the molding compound cap abuts the last die.

9.
The article according to claim 1, further including: a second die disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, wherein the molding compound cap abuts the second die; a last die disposed upon the mounting substrate, wherein the last die includes a last die active first surface and a last die backside second surface, wherein the molding compound cap abuts the last die; and wherein the first die, the second die, and the last die are arranged in a configuration selected from: the first die, the second die, and the last die are disposed in a single molding compound cap structure; the first die, the second die, and the last die are each disposed in separate molding compound cap structures; the first die and the second die are disposed in a single molding compound cap structure, and at least two occurrences of the last die are disposed in a single molding compound cap structure; and the first die and the second die are each disposed in separate molding compound cap structures, and at least two occurrences of the last die are disposed in a single molding compound cap structure.

10. A package comprising: a first die disposed upon a mounting substrate, wherein the first die includes a first die active first surface and a first die backside second surface; a molding compound cap abutting the first die and including a third surface that originates substantially above the first die active first surface and below the first die backside second surface; and a heat spreader bonded to the first die backside second surface.

11. The package according to claim 10, further including: a heat sink in thermal contact with the heat spreader.

12.
The package according to claim 10, wherein the third surface that originates substantially above the first die active first surface includes: a meniscus that originates substantially above the first die active first surface; and a substantially planar surface that is selected from parallel planar to the first die active first surface, and located above the first die active first surface at a height that is a fraction of the die height.

13. The package according to claim 10, further including: a second die disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, wherein the molding compound cap abuts the second die; and a last die disposed upon the mounting substrate, wherein the last die includes a last die active first surface and a last die backside second surface, wherein the molding compound cap abuts the last die.

14. A computing system comprising: a first die disposed upon a mounting substrate, wherein the first die includes a first die active first surface and a first die backside second surface; and a molding compound cap abutting the first die and including a third surface that originates substantially above the first die active first surface and below the first die backside second surface; and at least one of an input device and an output device coupled to the first die.

15. The computing system according to claim 14, wherein the computing system is disposed in one of a computer, a wireless communicator, a hand-held device, an automobile, a locomotive, an aircraft, a watercraft, and a spacecraft.

16. The computing system according to claim 14, wherein the microelectronic die is selected from a data storage device, a digital signal processor, a micro controller, an application specific integrated circuit, and a microprocessor.

17.
The computing system according to claim 14, wherein the third surface that originates substantially above the first die active first surface includes: a meniscus that originates substantially above the first die active first surface; and a substantially planar surface that is selected from parallel planar to the first die active first surface, and located above the first die active first surface at a height that is a fraction of the die height.

18. The computing system according to claim 14, further including: a second die disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, wherein the molding compound cap abuts the second die; and a last die disposed upon the mounting substrate, wherein the last die includes a last die active first surface and a last die backside second surface, wherein the molding compound cap abuts the last die.

19. The computing system according to claim 14, further including: a second die disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, wherein the molding compound cap abuts the second die; and a last die disposed upon the mounting substrate, wherein the last die includes a last die active first surface and a last die backside second surface, wherein the molding compound cap abuts the last die.

20. A processing system comprising: a mold chase including a profile that is capable of causing molding cap compound to originate on a die at a die height that is substantially above the die active surface and below the die backside surface.

21. The processing system according to claim 20, wherein the profile is capable of forming a meniscus where the molding cap compound originates, wherein the meniscus is formed as one selected from a capillary action meniscus and an imposed meniscus.

22.
The processing system according to claim 20, wherein the profile is capable of imposing an exposed upper surface upon a mounting substrate at a position between a first die cavity in the mold chase and a second die cavity in the mold chase.

23. The processing system according to claim 20, wherein the profile includes a first die cavity, a second die cavity contiguous the first die cavity, and a last die cavity contiguous the first die cavity, wherein the first die cavity, the second die cavity, and the last die cavity are arranged in a configuration selected from: the first die cavity, the second die cavity, and the last die cavity are disposed in a single molding compound cap cavity; the first die cavity, the second die cavity, and the last die cavity are each disposed in separate molding compound cap cavities; the first die cavity and the second die cavity are disposed in a single molding compound cap cavity, and at least two occurrences of the last die cavity are disposed in a single molding compound cap cavity; and the first die cavity and the second die cavity are each disposed in separate molding compound cap cavities, and at least two occurrences of the last die are disposed in a single molding compound cap cavity.

24. A process comprising: forming a molding compound cap over a first die that is disposed upon a substrate, wherein the first die includes a first die active first surface and a first die backside second surface, and wherein forming the molding compound cap includes forming a molding compound cap third surface that is above the first die active first surface and below the first die backside second surface.

25. The process according to claim 24, wherein forming a molding compound cap third surface includes forming the meniscus selected from a capillary action meniscus and an imposed meniscus.

26.
The process according to claim 24, further including: forming the molding compound cap over a second die that is disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, and wherein forming the molding compound cap includes forming the molding compound cap third surface above the second die active first surface and below the second die backside second surface.

27. The process according to claim 24, further including: forming the molding compound cap over a last die that is disposed upon the mounting substrate, wherein the last die includes a last die active first surface and a last die backside second surface, and wherein forming the molding compound cap includes forming the molding compound cap third surface above the last die active first surface and below the last die backside second surface.

28. The process according to claim 24, wherein forming the molding compound cap includes injection molding the molding compound with a particulate.

29. The process according to claim 24, further including: forming the molding compound cap over a second die that is disposed upon the mounting substrate, wherein the second die includes a second die active first surface and a second die backside second surface, and wherein forming the molding compound cap includes forming the molding compound cap third surface above the second die active first surface and below the second die backside second surface; and forming the molding compound cap over a last die that is disposed upon the mounting substrate, wherein the last die includes a last die active first surface and a last die backside second surface, and wherein forming the molding compound cap includes forming the molding compound cap third surface above the last die active first surface and below the last die backside second surface.

30.
The process according to claim 24, wherein forming the molding compound cap is selected from injection molding, in situ thermal curing, pick-and-place coupling the molding compound cap with the first die, and combinations thereof.
MOLD COMPOUND CAP IN A FLIP CHIP MULTI-MATRIX ARRAY PACKAGE AND PROCESS OF MAKING SAME

TECHNICAL FIELD

Disclosed embodiments relate to an article that includes a mounted semiconductor die disposed in a molding compound cap. The molding compound cap exposes a portion of the die.

BACKGROUND INFORMATION

DESCRIPTION OF RELATED ART

An integrated circuit (IC) die is often fabricated into a processor, a digital signal processor (DSP), and other devices for various tasks. The increasing power consumption of such dice results in tighter thermal budgets for a thermal solution design when the die is employed in the field. Accordingly, a thermal interface is often needed to allow the die to reject heat more efficiently. Various solutions have been used to allow the processor to efficiently reject heat.

During the process of encapsulating a microelectronic device, such as a die, in molding compound, the die is often placed inside of a mold, and encapsulation material is injected into the mold cavity. Because of the current molding process, molding compound often "flashes" onto the backside of a die. The flashing phenomenon occurs frequently for a flip-chip configuration where the active surface of the die is presented against a mounting substrate such as a printed wiring board, a mother board, a mezzanine board, an expansion card, or others. The flashing of molding compound upon the back surface of the die creates problems in heat management such that the back surface often must be processed to clean off the flashing of the molding compound.

An article includes a die in a molding compound. Because of thermal cycling of a die in the molding compound, where the molding compound and the backside surface of the die share a co-planar surface, excessive stress is formed at the backside corners of the die. These stresses can damage the die or its package such that a lower fabrication yield can result, or field failures of the article can result.
BRIEF DESCRIPTION OF THE DRAWINGS

In order to understand the manner in which embodiments are obtained, a more particular description of various embodiments briefly described above will be rendered by reference to the appended drawings. These drawings depict embodiments that are not necessarily drawn to scale and are not to be considered to be limiting of their scope. Some embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:

FIG. 1 is a side cross-section of a package according to an embodiment;
FIG. 2 is a detail section taken along the line 2-2 from FIG. 1 according to an embodiment;
FIG. 3 is a detail section taken along the line 2-2 from FIG. 1 according to an embodiment;
FIG. 4 is a detail section taken along the line 2-2 from FIG. 1 according to an embodiment;
FIG. 5 is a side cross-section of a package according to an embodiment;
FIG. 6 is a top plan of the package depicted in FIG. 1 according to an embodiment;
FIG. 7 is a top plan of the package depicted in FIG. 5 according to an embodiment;
FIG. 8 is a side cross-section of a package according to an embodiment;
FIG. 9 is a top plan of the package depicted in FIG. 8 according to an embodiment;
FIG. 10 is a top plan of the package depicted in FIG. 8 according to an alternative embodiment;
FIG. 11 is a side cross-section of a package according to an embodiment;
FIG. 12 is a side cross-section of the package depicted in FIG. 1 during processing according to an embodiment;
FIG. 13 is a side cross-section of the package depicted in FIG. 5 during processing according to an embodiment;
FIG. 14 is a side cross-section of the package depicted in FIG. 8 during processing according to an embodiment;
FIG. 15 is a side cross-section of the package depicted in FIG. 11 during processing according to an embodiment;
FIG. 16 is a detail section taken from FIG. 15 according to an embodiment;
FIG. 17 is a detail section taken from FIG. 15 according to an alternative embodiment;
FIG. 18 is a side cross-section of a package according to an embodiment;
FIG. 19 is a depiction of a computing system according to an embodiment; and
FIG. 20 is a process flow diagram according to various embodiments.

DETAILED DESCRIPTION

The following description includes terms, such as upper, lower, first, second, etc., that are used for descriptive purposes only and are not to be construed as limiting. The embodiments of a device or article described herein can be manufactured, used, or shipped in a number of positions and orientations. The terms "die" and "processor" generally refer to the physical object that is the basic workpiece that is transformed by various process operations into the desired integrated circuit device. A board is typically a resin-impregnated fiberglass structure that acts as a mounting substrate for the die. A die is usually singulated from a wafer, and wafers may be made of semiconducting, non-semiconducting, or combinations of semiconducting and non-semiconducting materials.

Reference will now be made to the drawings wherein like structures are provided with like reference designations. In order to show the structure and process embodiments most clearly, the drawings included herein are diagrammatic representations of embodiments. Thus, the actual appearance of the fabricated structures, for example in a photomicrograph, may appear different while still incorporating the essential structures of embodiments. Moreover, the drawings show only the structures necessary to understand the embodiments. Additional structures known in the art have not been included to maintain the clarity of the drawings.

FIG. 1 is a side cross-section of a package according to an embodiment. The package includes a first die 110 that is disposed upon a mounting substrate 112. The first die 110 is partially encapsulated in a molding compound cap 114 that abuts the first die 110.
The first die 110 is electronically coupled to the mounting substrate 112 through a bump 116 that is, by way of non-limiting example, a metal ball. The first die 110 includes a first die active first surface 120 and a first die backside second surface 122. One dimension of the article depicted in FIG. 1 includes a die height (DH) that is the vertical difference between the first die backside second surface 122 and the upper surface 113 of the mounting substrate 112. Another parameter of the package, the die-standoff height (DSH), is a measurement of the vertical difference in height between the first die active first surface 120 and the upper surface 113 of the mounting substrate 112. Another parameter of the package depicted in FIG. 1, the molding-compound height (MCH), is a measurement of the vertical difference between the third surface 124 of the molding compound cap 114 and the upper surface 113 of the mounting substrate 112.

FIG. 1 also depicts a second die 111 that is disposed upon the mounting substrate 112. The second die 111 includes a second die active first surface 121 and a second die backside second surface 123. It is also depicted in this embodiment that the molding compound cap 114 abuts the second die 111 as well as the first die 110. In FIG. 1, a last die 109 is also depicted as disposed over the mounting substrate 112 and is likewise abutted by the molding compound cap 114. In the embodiment depicted in FIG. 1, the molding compound cap 114 is continuous and contiguous to the first die 110, the second die 111, and the last die 109.

FIG. 2 is a detail section taken along the line 2-2 from FIG. 1 according to an embodiment. A first die 210 is disposed over the mounting substrate 112 (FIG. 1) and is mounted in a molding compound cap 214 similar to the depiction in FIG. 1. Additionally, a bump 216 enables electronic coupling between the first die 210 and the mounting substrate 112.
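The three height metrics defined for FIG. 1 all reduce to simple differences measured from the upper surface 113 of the mounting substrate. The following is an illustrative sketch only; the variable names and sample heights are hypothetical and are not taken from the disclosure:

```python
# Hypothetical surface heights, in microns, measured from a common datum at the
# upper surface 113 of the mounting substrate. Sample values are illustrative only.
substrate_upper_surface = 0.0        # upper surface 113
die_active_first_surface = 75.0      # active first surface 120 (flip-chip, facing the substrate)
die_backside_second_surface = 850.0  # backside second surface 122
cap_third_surface = 500.0            # third surface 124 of the molding compound cap 114

# Die height (DH): backside second surface relative to the substrate's upper surface.
DH = die_backside_second_surface - substrate_upper_surface

# Die-standoff height (DSH): active first surface relative to the substrate's upper surface.
DSH = die_active_first_surface - substrate_upper_surface

# Molding-compound height (MCH): cap's third surface relative to the substrate's upper surface.
MCH = cap_third_surface - substrate_upper_surface
```

With these sample values, the cap's third surface sits above the active first surface and below the backside second surface, so DSH < MCH < DH, matching the embodiments in which the cap originates between the die's two surfaces.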
The first die 210 also is illustrated with the first die active first surface 220 and the first die backside second surface 222. The molding compound cap 214 includes a third surface that includes a substantially planar portion 224 and a substantially curvilinear portion (hereinafter "third surface 224, 225") that originates substantially above the first die active first surface 220 and below the first die backside second surface 222. A meniscus 225 portion of the third surface 224, 225 forms a portion of the molding compound cap 214. Accordingly, the meniscus 225 is that portion of the molding compound cap 214 that originates substantially above the first die active first surface 220 and below the first die backside second surface 222.

In the embodiment depicted in FIG. 2, the third surface 224, 225 also has a metric that is a measurement of the distance between the substantially planar portion of the third surface 224 and the first die active first surface 220. This distance is referred to as the molding compound cap encroachment 226. The molding compound cap encroachment 226 can be quantified as that portion of the die height that has been covered by the molding compound cap 214, expressed as a percentage of the die height. For example, the molding compound cap encroachment 226 appears to be about 25% of the die height. The die height is the distance between the first die active first surface 220 and the first die backside second surface 222. In any event, the molding compound cap encroachment 226 is a fraction of the total die height.

FIG. 3 is a detail section that can be taken along the line 2-2 from FIG. 1 according to an embodiment. A first die 310 is disposed over a mounting substrate (not pictured) and is mounted in a molding compound cap 314 similar to the depiction in FIG. 1. Additionally, a bump 316 enables electronic coupling between the first die 310 and the mounting substrate (not pictured).
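The encroachment metric described for FIG. 2 can likewise be expressed as a percentage of the die height (active first surface to backside second surface). The helper below is a hypothetical sketch for illustration, not code from the disclosure:

```python
# Hypothetical sketch: molding-compound-cap encroachment as a percentage of die height.
# Encroachment is the distance from the cap's substantially planar third surface
# down to the die's active first surface; die height is the distance from the
# active first surface to the backside second surface. All names are illustrative.
def encroachment_percent(active_surface, backside_surface, cap_planar_surface):
    die_height = backside_surface - active_surface
    encroachment = cap_planar_surface - active_surface
    return 100.0 * encroachment / die_height

# FIG. 2-like case: the cap covers about 25% of the die height.
fig2 = encroachment_percent(active_surface=0.0, backside_surface=100.0,
                            cap_planar_surface=25.0)

# FIG. 4-like case: the cap's planar portion is co-planar with the active
# surface, so the encroachment is a negligible 0% of the die height.
fig4 = encroachment_percent(active_surface=0.0, backside_surface=100.0,
                            cap_planar_surface=0.0)
```

Any value between 0% and 100% (exclusive of 100%) corresponds to an embodiment, consistent with the statement that the encroachment is always a fraction of the total die height.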
The first die 310 also is illustrated with the first die active first surface 320 and the first die backside second surface 322. The molding compound cap 314 includes a third surface 324, 325 that originates substantially above the first die active first surface 320 and below the first die backside second surface 322. A meniscus 325 forms a portion of the molding compound cap 314. Accordingly, the meniscus 325 is that portion of the molding compound cap 314 that originates substantially above the first die active first surface 320 and below the first die backside second surface 322. In the embodiment depicted in FIG. 3, the third surface 324, 325 also has the molding compound cap encroachment 326 metric. In FIG. 3, the molding compound cap encroachment 326 appears to be about 80% of the die height.

In one embodiment, a particulate material 315 is interspersed with the molding compound cap 314. In one embodiment, the particulate material 315 is a graphite fiber. In one embodiment, the particulate material 315 is a diamond powder. In one embodiment, the particulate material 315 is a silica filler. In one embodiment, the particulate material 315 includes inorganics that are metallic in an organic matrix of the molding compound cap 314. In this embodiment, the overall coefficient of thermal conductivity for the molding compound cap 314 and the particulate material 315 is in a range from about 0.1 W/m-K to less than or equal to about 600 W/m-K. In one embodiment, the particulate material 315 includes inorganics that are dielectrics in an organic matrix of the molding compound cap 314. In this embodiment, the overall coefficient of thermal conductivity for the molding compound cap 314 and the particulate material 315 is in a range from about 10 W/m-K to about 90 W/m-K. Although the particulate material 315 is depicted as angular and eccentric shapes, in one embodiment, the particulate material 315 can be other shapes.
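The two recited overall conductivity ranges for the filled molding compound can be captured in a small lookup. The sketch below is hypothetical and only illustrates checking a candidate compound against those ranges; the function and sample values are not from the disclosure:

```python
# Recited overall thermal conductivity ranges (W/m-K) for the molding compound
# cap plus particulate material, keyed by filler type. The dictionary and the
# range-check helper are illustrative assumptions, not part of the disclosure.
RANGES_W_PER_M_K = {
    "metallic_inorganic": (0.1, 600.0),    # metallic inorganics in an organic matrix
    "dielectric_inorganic": (10.0, 90.0),  # dielectric inorganics in an organic matrix
}

def in_recited_range(filler_type, conductivity_w_per_m_k):
    """Return True when the overall conductivity falls inside the recited range."""
    low, high = RANGES_W_PER_M_K[filler_type]
    return low <= conductivity_w_per_m_k <= high

# A 45 W/m-K compound with metallic filler falls inside the 0.1-600 W/m-K range.
ok_metallic = in_recited_range("metallic_inorganic", 45.0)

# A 5 W/m-K compound with dielectric filler falls below the 10 W/m-K floor.
ok_dielectric = in_recited_range("dielectric_inorganic", 5.0)
```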
In one embodiment, the particulate material 315 is a substantially spherical powder that has an average diameter in a range from about 0.1 micron to about 100 micron. In one embodiment, the eccentricity of the particulate material 315, as measured by a ratio of the major diagonal axis to the minor diagonal axis, is in a range from about 1 to about 10. In one embodiment, the eccentricity is greater than 10. It can now be appreciated that the particulate material 315 that appears in the embodiments depicted in FIG. 3 can be present in any embodiment of a molding compound cap as set forth in this disclosure. FIG. 4 is a detail section that can be taken along the line 2-2 from FIG. 1 according to an embodiment. A first die 410 is disposed over a mounting substrate (not pictured) and is mounted in a molding compound cap 414 similar to the depiction in FIG. 1. Additionally, a bump 416 enables electronic coupling between the first die 410 and the mounting substrate (not pictured). The first die 410 also is illustrated with the first die active first surface 420 and the first die backside second surface 422. The molding compound cap 414 includes a third surface 424, 425 that originates substantially above the first die active first surface 420 and below the first die backside second surface 422. A meniscus 425 portion of the third surface 424, 425 forms a portion of the molding compound cap 414. Accordingly, the meniscus 425 is that portion of the molding compound cap 414 that originates substantially above the first die active first surface 420 and below the first die backside second surface 422. In the embodiment depicted in FIG. 4, the third surface 424, 425 also has the molding compound cap encroachment 426 metric. In FIG. 4, the molding compound cap encroachment 426 appears to be a negligible amount of the die height. The substantially planar portion 424 of the third surface 424, 425 is substantially co-planar with the first die active first surface 420. 
The meniscus 425, however, originates at a position that is substantially above the first die active first surface 420 and below the first die backside second surface 422. By review of FIGs. 2, 3, and 4, the molding compound cap encroachments 226, 326, and 426, respectively, are depicted as about 25%, 80%, and 0%. Any percentage of the die height, however, is contemplated as an embodiment, so long as the percentage is a fraction of 100%, including zero percent.

FIG. 5 is a side cross-section of a package according to an embodiment. A first die 510 is disposed upon a mounting substrate 512, and a molding compound cap 514 abuts the first die 510. A bump 516 couples the first die 510 to the mounting substrate 512. The first die 510 includes a first die active first surface 520 and a first die backside second surface 522. The molding compound cap 514 includes a third surface 524 that originates substantially above the first die active first surface 520 and below the first die backside second surface 522. Additionally, the molding compound cap 514 is segmented as depicted in FIG. 5 such that it exposes the mounting substrate 512 by revealing an upper surface 513 of the mounting substrate 512.

In addition to the first die 510, FIG. 5 depicts a second die 511 that is disposed upon the mounting substrate 512 and includes a second die active first surface 521 and a second die backside second surface 523. The molding compound cap 514, although segmented, also abuts the second die 511. As depicted in FIG. 5, the upper surface 513 of the mounting substrate 512 is exposed because of the segmentation of the molding compound cap 514. In any event, the molding compound cap 514 includes a third surface 524 that originates substantially above the respective active first surfaces 520, 521 of the first and second dice and below the respective backside second surfaces 522, 523 of the first and second dice. In FIG.
5, a last die 509 is also depicted as disposed over the mounting substrate 512 and is likewise abutted by the molding compound cap 514. In the embodiment depicted in FIG. 5, the molding compound cap 514 is segmented, but each portion of the segmented molding compound cap 514 is substantially contiguous to the first die 510, the second die 511, and the last die 509.

FIG. 6 is a top plan view of the package depicted in FIG. 1 according to an embodiment. FIG. 6 represents a multi-matrix array package according to an embodiment. A plurality of dice includes a first die 110, a second die 111, and a last die 109. In an embodiment, the first die 110, the second die 111, and the last die 109 represent multiple occurrences of the same die. As depicted in FIG. 6, the molding compound cap 114 abuts all of the dice as depicted.

FIG. 7 is a top plan view of the package depicted in FIG. 5 according to an embodiment. FIG. 7 represents a multi-matrix array package according to an embodiment. A plurality of dice includes a first die 510, a second die 511, and a last die 509. In an embodiment, the first die 510, the second die 511, and the last die 509 represent multiple occurrences of the same die. As depicted in FIG. 7, the molding compound cap 514 abuts all of the dice as depicted. Additionally, the mounting substrate 512 is exposed at its upper surface 513 because of the segmentation of the molding compound cap 514. Additionally, the third surface 524, that is, the upper surface of the molding compound cap 514, is depicted in FIG. 7. In the embodiment depicted in FIG. 7, each occurrence of a die, whether it is the first die 510, the second die 511, or the last die 509, is segmented in a discrete unit separate from each other die and its accompanying occurrence of abutting molding compound cap 514.

FIG. 8 is a side cross-section of a package according to an embodiment.
A first die 810 is disposed upon a mounting substrate 812 and is mounted in a molding compound cap 814 that abuts the first die 810. A bump 816 allows the die 810 to be coupled to the mounting substrate 812. The first die 810 includes a first die active first surface 820 and a first die backside second surface 822. The molding compound cap 814 includes a third surface 824 that originates substantially above the first die active first surface 820 and below the first die backside second surface 822. In this embodiment, as in all other embodiments, a meniscus (not depicted) can be present according to the formation of the molding compound cap 814. Accordingly, and as represented in other embodiments in this disclosure, the meniscus is that portion of the molding compound cap 814 that originates substantially above the first die active first surface 820 and below the first die backside second surface 822.

FIG. 8 also depicts a second die 811 that is disposed above the mounting substrate 812 and is embedded in the molding compound cap 814. The second die 811 includes a second die active first surface 821 and a second die backside second surface 823. Although the first die 810 and the second die 811 are of different sizes and shapes, the mounting scheme of this embodiment includes an exposed portion of the upper surface 813 of the mounting substrate 812 by discrete segmentation of the molding compound cap 814. An embodiment of a plurality of dice that are of different shapes and sizes includes a single molding compound cap that is not segmented. This embodiment can be realized, for example in FIG. 1, by removing the dice 109, 110, 111 and by replacing them with the dice 810, 811 from FIG. 8.

FIG. 9 is a top plan view of the package depicted in FIG. 8 according to an embodiment. FIG. 9 represents a multi-matrix array package according to an embodiment. A plurality of dice includes the first die 810, the second die 811, and the last die 809.
In an embodiment, the first die 810, the second die 811, and the last die 809 represent multiple occurrences of the same die. As depicted in FIG. 9, the molding compound cap 814 abuts all of the dice as depicted. FIG. 9 also depicts the mounting substrate 812 and shows that the upper surface 813 of the mounting substrate 812 is exposed between discrete segments of the molding compound cap 814. Additionally, the third surface 824, that is, the upper surface of the molding compound cap 814, is depicted in FIG. 9. In the embodiment depicted in FIG. 9, each occurrence of a die, whether it is the first die 810, the second die 811, or the last die 809, is segmented in a discrete unit separate from each other die and its accompanying occurrence of abutting molding compound cap 814.

In one embodiment, the chip package depicted in FIG. 9 can include a main die such as a processor 810 that can be an application-specific integrated circuit (ASIC), and the second die 811 can be a specialized die such as a telecommunications and/or graphics device. By way of non-limiting example, the last die 809 as depicted in FIG. 9 can be at least one memory device. In one embodiment, the chip package depicted in FIG. 9 represents a wireless device technology such as a telephone, a personal data assistant, a personal computer, or a combination of two of the aforementioned devices.

Although the first die 810 and the second die 811 are depicted as having different die heights in FIG. 8, one can read this disclosure and understand that one embodiment includes the first die 810 and the second die 811 having substantially equal heights. In one embodiment, the DH can be the same for the first die 810 and the second die 811, but the DSH of each can be different.

FIG. 10 is a top plan view of the package depicted in FIG. 8 according to an alternative embodiment. FIG. 10 represents a multi-matrix array package according to an embodiment.
A plurality of dice includes a first die 810, a second die 811, and a last die 809. In an embodiment, the first die 810, the second die 811, and the last die 809 represent multiple occurrences of the same die. As depicted in FIG. 10, the molding compound cap 814 abuts all of the dice as depicted. FIG. 10 also depicts the mounting substrate 812 and shows that the upper surface 813 of the mounting substrate 812 is exposed between discrete segments of the molding compound cap 814. Additionally, the third surface 824, that is, the upper surface of the molding compound cap 814, is depicted in FIG. 10. In the embodiment depicted in FIG. 10, each occurrence of a die, whether it is the first die 810, the second die 811, or the last die 809, is segmented in a discrete unit separate from each other die and its accompanying occurrence of abutting molding compound cap 814.

In one embodiment, the chip package depicted in FIG. 10 can include a main die such as a processor 810 that can be an application-specific integrated circuit (ASIC), and the second die 811 can be a specialized die such as a telecommunications and/or graphics device. By way of non-limiting example, the last die 809 as depicted in FIG. 10 can be at least one memory device. In one embodiment, the chip package depicted in FIG. 10 represents a wireless device technology such as a telephone, a personal data assistant, a personal computer, or a combination of two of the aforementioned devices.

Although the first die 810 and the second die 811 are depicted as having different die heights in FIG. 8, one can read this disclosure and understand that one embodiment includes the first die 810 and the second die 811 having substantially equal heights. In one embodiment, the DH can be the same for the first die 810 and the second die 811, but the DSH of each can be different. In FIG. 10, all of the last dice 809 are encapsulated according to an embodiment in a single discrete occurrence of the molding compound cap 814.
In this embodiment, however, the first die 810 is discretely disposed in a separate occurrence of the molding compound cap 814. Likewise, the second die 811 is discretely disposed in a separate occurrence of the molding compound cap 814. One embodiment (not pictured) includes the configuration where all of the last dice 809 are encapsulated together in a single discrete occurrence of the molding compound cap 814, and where the first die 810 and the second die 811 are discretely encapsulated separate from the last dice 809, but the first die 810 and the second die 811 are encapsulated together in the molding compound cap 814.

FIG. 11 is a side cross-section of a package according to an embodiment. A first die 1110 is disposed upon a mounting substrate 1112 and is encapsulated in a molding compound cap 1114 that abuts the first die 1110. The first die 1110 is coupled to the mounting substrate 1112 through a bump 1116. The first die 1110 includes a first die active first surface 1120 and a first die backside second surface 1122. In FIG. 11, the molding compound cap 1114 includes a substantially rectangular profile near the lateral edges of the package. Accordingly, the third surface 1124 is substantially planar. FIG. 11 also depicts a second die 1111 and a last die 1109. The second die 1111 includes a second die active first surface 1121 and a second die backside second surface 1123. Between the first die 1110 and the second die 1111, the molding compound cap 1114 includes a fourth surface 1125 that is substantially curvilinear. The molding compound cap 1114 between the first die 1110 and the second die 1111 originates at each die substantially above the respective first and second die active first surfaces 1120, 1121 and below the respective first and second die backside second surfaces 1122 and 1123.
A meniscus (not pictured) according to the various embodiments set forth in this disclosure, abuts the outer edges of the second die 1111 and the last die 1109. FIG. 12 is a side cross-section of a package such as the package depicted in FIG. 1 during processing according to an embodiment. During processing, the molding compound cap 114 is injected into the package between a mold chase 1230 and the mounting substrate 112. In this embodiment, the mold chase 1230 is depicted as having a vertical profile that complements the profiles of the first die 110, the second die 111, and the last die 109 if more than one last die 109 is present. According to this embodiment, the problem of flashing, the phenomenon of molding compound leaking onto the backside second surface of the dice, is eliminated due to the vertical profile of the mold chase 1230. In one embodiment, processing includes injection molding or transfer molding with particulate fillers as set forth in this disclosure. In one embodiment, processing includes injection molding followed by in situ thermal curing or thermal partial curing by application of heat through the mold chase 1230. After processing, the mold chase 1230 is removed, and the package substantially as it is depicted in FIGs. 1-4 results according to various embodiments. FIG. 13 is a side cross-section of the package depicted in FIG. 5 during processing according to an embodiment. During processing, the molding compound cap 514 is injected into the package between a mold chase 1330 and the mounting substrate 512. In this embodiment, the mold chase 1330 is depicted as having a vertical profile that complements the profiles of the first die 510, the second die 511, and the last die 509 if more than one last die 509 is present. Additionally, the profile of the mold chase 1330 leaves a portion of the upper surface 513 exposed. According to this embodiment, the problem of flashing is eliminated due to the vertical profile of the mold chase 1330. 
In one embodiment, processing includes injection molding or transfer molding with particulate fillers as set forth in this disclosure. In one embodiment, processing includes injection molding followed by thermal curing or thermal partial curing by application of heat through the mold chase 1330. After processing, the mold chase 1330 is removed, and the package substantially as it is depicted in FIG. 5 results according to various embodiments. The mold chase 1330 includes, in its vertical profile, a portion that substantially touches the upper surface 513 of the mounting substrate 512. Accordingly, an exposed portion of the mounting substrate 512 and discrete segments of the molding compound cap 514 are the result when the mold chase is removed, as is depicted in FIG. 5.

FIG. 14 is a side cross-section of the package depicted in FIG. 8 during processing according to an embodiment. During processing, the molding compound cap 814 is injected into the package between a mold chase 1430 and the mounting substrate 812. In this embodiment, the mold chase 1430 is depicted as having a vertical profile that complements the profiles of the first die 810, the second die 811, and the last die 809 if more than one last die 809 is present. According to this embodiment, the problem of flashing is eliminated due to the vertical profile of the mold chase 1430. After processing, the mold chase 1430 is removed, and the package substantially as it is depicted in FIG. 8 results according to an embodiment. The mold chase 1430 includes, in its vertical profile, a portion that substantially touches the upper surface 813 of the mounting substrate 812. Accordingly, an exposed portion of the mounting substrate 812 and discrete segments of the molding compound cap 814 are the result when the mold chase 1430 is removed, as is depicted in FIG. 8.

FIG. 15 is a side cross-section of the package depicted in FIG. 11 during processing according to an embodiment.
During processing, the molding compound cap 1114 is injected into the package between a mold chase 1530 and the mounting substrate 1112. In this embodiment, the mold chase 1530 is depicted as having a vertical profile that complements the profiles of the first die 1110, the second die 1111, and the last die 1109. According to this embodiment, the problem of flashing is eliminated due to the vertical profile of the mold chase 1530. After processing, the mold chase 1530 is removed, and the package substantially as it is depicted in FIG. 11 results according to an embodiment. The mold chase 1530 includes, in its vertical profile, a portion that has imposed the curvilinear profile 1125 (FIG. 11) that approaches the upper surface 1113 of the mounting substrate 1112. Accordingly, an exposed portion of the mounting substrate 1112 will be the result when the mold chase is removed, as is depicted in FIG. 11. The mold chase 1530 imposes both a rectangular profile upon the molding compound cap 1114 at the edges of the package and a curvilinear profile 1125 of the molding compound cap 1114 between the occurrence of the first die 1110 and the second die 1111, as well as between the first die 1110 and the last die 1109.

According to an embodiment, it can be understood that the profile of the molding compound cap 1114 includes both a substantially planar surface and a meniscus. In one embodiment, the substantially planar surface and meniscus include an upper surface 1124 as depicted in FIG. 11, and as depicted in more detail in FIGs. 2-4. In one embodiment, the meniscus includes the curvilinear surface 1125 as depicted in FIG. 11. In any event, the meniscus is that portion of the molding compound cap 1114 that originates substantially above the first die active first surface 1120 and below the first die backside second surface 1122.

FIG. 16 is a detail section taken from FIG. 15 according to an embodiment.
The second die 1111 is depicted along with the mold chase 1530 and the molding compound cap 1114. The meniscus 1125 of the third surface 1124 has been imposed by the shape of the mold chase 1530 where it abuts a lateral surface of the second die 1111. In this embodiment, it is understood that the meniscus 1125 is an "imposed meniscus." The meniscus 1125 is imposed by the shape of the mold chase 1530 where it abuts the edge of the second die 1111.

FIG. 17 is a detail section taken from FIG. 15 according to an alternative embodiment. The second die 1111 has been over-molded by a mold chase 1531 that has a substantially planar lower profile 1129. The occurrence of the meniscus 1125, however, exists because of the wetting quality of the material of the molding compound cap 1114. As the mold chase 1531 is lifted away from the package, or otherwise during the molding process, the meniscus 1125 forms by capillary action. Accordingly, the third surface 1124, 1125 includes a substantially planar surface 1124 and a curvilinear surface 1125. In this embodiment, the meniscus 1125 is referred to as a "capillary action meniscus."

FIG. 18 is a side cross-section of a package according to an embodiment. A first die 1810 is disposed upon a mounting substrate 1812 according to many of the various embodiments set forth in this disclosure. A second die 1811 is disposed upon the mounting substrate 1812 next to the first die 1810. Similarly, a last die 1809 is disposed upon the mounting substrate 1812 next to the first die 1810. A molding compound cap 1814 is depicted abutting the respective dice 1810, 1811, and 1809. The embodiment of the molding compound cap 1814 has the appearance of the embodiment of the molding compound cap 114 depicted in FIG. 1. The molding compound cap 1814, however, should be understood to include but not be limited to other embodiments. Examples include but are not limited to the molding compound cap 514 depicted in FIG.
5, the molding compound cap 814 depicted in FIG. 8, and the molding compound cap 1114 depicted in FIG. 11. Other embodiments include but are not limited to the plan view embodiments depicted in this disclosure. By reading this disclosure, one of ordinary skill in the art can understand other configurations of a molding compound cap in a package according to an embodiment.

A heat spreader 1832 is disposed over the dice 1810, 1811, and 1809. The heat spreader 1832 is set upon the backside second surfaces of the dice 1810, 1811, and 1809 and bonded with a thermal interface material (TIM) 1834. The heat spreader 1832, along with the TIM 1834, represents a thermal solution referred to as "TIM 1." In one embodiment, the TIM 1834 and the heat spreader 1832 are referred to as an "enabling solution" that can have a stand-alone commercial applicability. In one embodiment, the TIM 1834 is indium (In). In one embodiment, the TIM 1834 is an indium-tin (InSn) alloy. In one embodiment, the TIM 1834 is an indium-silver (InAg) alloy. In one embodiment, the TIM 1834 is a tin-silver (SnAg) alloy. In one embodiment, the TIM 1834 is a tin-silver-copper (SnAgCu) alloy. In one embodiment, the TIM 1834 is a thermally conductive polymer.

Disposed above the heat spreader 1832 is a heat sink 1836. The heat sink 1836 is bonded to the heat spreader 1832 with a TIM 1838. The additional heat sink 1836 and the TIM 1838 are referred to as an enabling solution that can have a commercial applicability as what is referred to as "TIM 2." The heat sink 1836 is depicted generically as a heat slug. The heat sink 1836, however, can be any type of heat sink according to a specific application need, including a heat pipe, a fan, a skived heat sink, or others. In one embodiment, the heat sink 1836 is bolted or otherwise fastened to the heat spreader 1832 and optionally to the mounting substrate 1812 by a fastener 1840. The fastener 1840 can be any type of connector such as a bolt, a screw, a nail, or others.

FIG.
19 is a depiction of a computing system 1900 according to an embodiment. One or more of the foregoing embodiments that include a molding compound cap as disclosed herein may be utilized in a computing system, such as a computing system 1900 of FIG. 19. The computing system 1900 includes at least one die (not pictured), which is enclosed in a microelectronic device package 1910, a data storage system 1912, at least one input device such as a keyboard 1914, and at least one output device such as a monitor 1916, for example. The computing system 1900 includes a die that processes data signals such as a microprocessor, available from Intel Corporation. In addition to the keyboard 1914, the computing system 1900 can include another user input device such as a mouse 1918, for example. For the purposes of this disclosure, a computing system 1900 embodying components in accordance with the claimed subject matter may include any system that utilizes a microelectronic device package, which may include, for example, a data storage device such as dynamic random access memory, polymer memory, flash memory, and phase-change memory. The microelectronic device package can also include a die that contains a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. Embodiments set forth in this disclosure can be applied to devices and apparatuses other than a computing system of a traditional computer. For example, a die can be packaged with an embodiment of the molding compound cap and placed in a portable device such as a wireless communicator or a hand-held device such as a personal digital assistant and the like. Another example is a die that can be packaged with an embodiment of the molding compound cap and placed in a vehicle such as an automobile, a locomotive, a watercraft, an aircraft, or a spacecraft. FIG. 20 is a process flow diagram according to various embodiments. 
At 2010, a die is disposed upon a mounting substrate. At 2020, the molding compound cap is formed abutting the die. Forming the molding compound cap includes forming a highest surface that is above the active surface and below the backside surface. In one embodiment, the process can proceed by injection molding. In one embodiment, the process can proceed by injection molding a molding compound cap, followed by pick-and-place disposition of the molding compound cap over a die and optional curing. It is emphasized that the Abstract is provided to comply with 37 C.F.R. § 1.72(b) requiring an Abstract that will allow the reader to quickly ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate preferred embodiment. It will be readily understood by those skilled in the art that various other changes in the details, material, and arrangements of the parts and method stages which have been described and illustrated in order to explain the nature of this invention may be made without departing from the principles and scope of the invention as expressed in the subjoined claims.
A package assembly is formed by applying flux to a substrate and inspecting the applied flux to determine whether the amount applied is adequate to form reliable interconnections between a device and the substrate. Embodiments include applying a rosin-based flux on a laminate substrate and inspecting the coverage of the applied flux by fluorescence spectroscopy.
What is claimed is: 1. A method of inspecting applied flux on a substrate, the method comprising:applying a soldering flux to the substrate to form a flux zone having an area approximately covering a chip pad area on the substrate, wherein the soldering flux is a rosin-based composition; and inspecting the flux zone area by optical or electro-optical spectroscopy to determine the coverage and uniformity of the applied flux. 2. The method according to claim 1, comprising heating the substrate and inspecting the flux zone area by infrared spectroscopy.3. The method according to claim 1, comprising inspecting the flux zone area by fluorescence spectroscopy.4. The method according to claim 3, comprising adding a fluorescent agent to the flux prior to applying the flux to enhance fluorescence of the flux.5. The method according to claim 4, comprising illuminating the flux zone with ultraviolet or visible light after applying the flux.6. The method according to claim 1, comprising determining the uniformity of the flux zone.7. The method according to claim 1, comprising applying the flux to form a flux zone having the area approximately equal to a chip pad area on the substrate.8. The method according to claim 1, comprising applying a no-clean flux on a ceramic substrate.9. 
A method of manufacturing an interconnected device assembly, the method comprising:providing a substrate having conductive contacts thereon for mounting a device, providing a device having a plurality of solder contacts thereon; applying a soldering flux to the substrate to form a flux zone having an area approximately covering a chip pad area on the substrate; inspecting the flux zone area by optical or electro-optical spectroscopy to determine the coverage and uniformity of the applied flux; contacting the device and substrate such that the solder contacts of the device are aligned with the conductive contacts on the substrate to form a substrate/assembly; and forming an electrical connection between the solder contacts of the device and the conductive contacts on the substrate. 10. The method according to claim 9, comprising contacting the device and substrate in response to inspecting the flux zone having adequate coverage.11. The method according to claim 9, comprising contacting the device and substrate in response to inspecting the flux zone having about 50% to about 150% of coverage.12. The method according to claim 9, comprising applying additional flux to the substrate in response to inspecting the flux zone having inadequate coverage or uniformity of flux.13. The method according to claim 9, comprising providing a laminate substrate and reflowing the plurality of solder contacts on the device by heating the assembly from about 220° C. to about 270° C.
RELATED APPLICATIONThis application claims priority from U.S. Provisional Application Serial No. 60/214,855 filed Jun. 28, 2000 entitled "Determination of Flux Coverage," which is hereby incorporated herein by reference in its entirety.FIELD OF THE INVENTIONThe present invention relates generally to semiconductor packaging technology and the manufacture of package assemblies. The present invention has particular applicability to methods of inspecting flux that has been applied to a substrate during assembly of a device package.BACKGROUNDIntegrated circuit devices are typically electronically packaged by mounting one or more integrated circuit (IC) chips or dies to a substrate, sometimes referred to as a carrier. In a flip chip assembly or package, the die is "bumped" with solder to form a plurality of discrete solder balls over metal contacts on the surface of the die. The chip is then turned upside down or "flipped" so that the device side or face of the IC die can be mounted to a substrate having a corresponding array of metal contacts. Typically, the metal contacts of the substrate are coated or formed with a solder alloy. Electrical interconnection of the die to the substrate is conventionally performed by aligning the die to the substrate and reflowing the solder on the die and/or the substrate to electrically and mechanically join the parts. Directly coupling the die to the substrate allows for an increased number of interconnections and improves voltage noise margins and signal speed.Typically, a flux composition is applied to either the die or the substrate to facilitate the formation of the interconnect. Flux acts as an adhesive to hold the placed components in position pending soldering and further acts to minimize the metallic oxidation that occurs at soldering temperatures, thereby improving the electrical and mechanical interconnection and reliability between the soldered component and substrate. 
Soldering fluxes fall into three broad categories: rosin fluxes, water-soluble fluxes, and no-clean fluxes. Rosin fluxes, which have a relatively long history of use, are still widely used in the electronics industry. Water-soluble fluxes, which are a more recent development and which are increasingly used in consumer electronics, are highly corrosive materials. No-clean fluxes, a very recent development, reportedly do not require removal from the circuit assemblies. The most common flux for IC die attach packaging comprises various acids suspended in an alcohol base.It has been observed that controlling the amount of applied flux is important irrespective of the type of flux employed in a particular packaging process, since enough flux must be used to effect a reliable metallurgical bond to electrically and mechanically interconnect the component to the substrate. Too much applied flux, however, can undesirably cause displacement of the placed component due to flux boiling. Excess flux further adversely impacts other circuit board manufacturing processes. For example, traces of the soldering flux residues which remain after solder reflow can lead to circuit failure, delamination of underfill, etc.Accordingly, a continual need exists for improved processes and/or assemblies for the packaging of electronic components onto substrates employing solder fluxes.SUMMARY OF THE INVENTIONAn advantage of the present invention is a high yield, high through-put process for inspecting the coverage and/or uniformity of applied flux during assembly of a device package.Additional advantages and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from the practice of the invention. 
The advantages of the invention may be realized and obtained as particularly pointed out in the appended claims.According to the present invention, the foregoing and other advantages are achieved in part by a method of inspecting the application of flux on a substrate. The method comprises applying flux to the substrate over a preselected area, e.g. over an array of conductive contacts suitable for mounting a device, to form a flux zone having an area. In an embodiment of the present invention, the flux is applied to cover approximately the same area occupied by an array of conductive contacts, e.g. an array of landing pads, on the substrate.In practicing the invention, the flux zone area is inspected by optical or electro-optical spectroscopy. Embodiments of the present invention include applying a rosin flux to a chip area on a laminate substrate and inspecting the coverage and/or uniformity of the applied flux by fluorescence and/or infrared spectroscopy.Another aspect of the present invention is a method of manufacturing an interconnected device assembly. The method comprises: providing a substrate having conductive contacts thereon for mounting a device, providing a device having a plurality, e.g. an array, of solder contacts thereon; applying a flux to the substrate to form a flux zone on the substrate; inspecting the flux zone area by optical or electro-optical spectroscopy; contacting the device and substrate such that the solder contacts of the device are aligned with the conductive contacts on the substrate to form a substrate/assembly; and forming an electrical connection between the solder contacts of the device and the conductive contacts on the substrate. 
The amount of flux that will be satisfactory depends on several factors, often requiring empirical determinations.By monitoring the coverage of the applied flux prior to assembling the semiconductor device and substrate, the present invention advantageously provides an essentially instant and continuous method for determining adequate coverage and/or uniformity of the applied flux during the packaging process. In an embodiment of the present invention, the flux covers from about 50% to about 150%, e.g. approximately 100%, of the area defined by the perimeter of the array of conductive contacts on the substrate.Additional advantages of the present invention will become readily apparent to those skilled in this art from the following detailed description, wherein only the preferred embodiment of the present invention is shown and described, simply by way of illustration of the best mode contemplated for carrying out the present invention. As will be realized, the invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the present invention. Accordingly, the drawing and description are to be regarded as illustrative in nature, and not as restrictive.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 shows a flow chart of packaging a device in accordance with the present invention.FIG. 2 illustrates a top view of a substrate having applied flux in accordance with the present invention.FIG. 3 shows a top view of a fluxed substrate being inspected in accordance with the present invention.DESCRIPTION OF THE INVENTIONThe present invention addresses and solves the problem of random and systematic variations in the coverage and uniformity of flux applied to a substrate caused by variations in flux composition, fluctuations in process parameters, varied pattern densities, etc. 
by a non-contact inspection technique performed during assembly of a device and substrate. The present invention enables the manufacture of semiconductor packages, particularly flip chip package assemblies, with improved control over the fluxing process and with an attendant increase in device performance. The present invention advantageously enables in-situ process control and closed-loop control over the fluxing procedure.The various features and advantages of the present invention will become more apparent as a detailed description of the embodiments thereof is given with reference to the appended figures. For example, a flow diagram of assembling an integrated device, e.g. an IC die, and a substrate in a flip chip configuration in accordance with the present invention is illustrated in FIG. 1. The method of the present invention begins with Step 100 by providing a substrate for mounting a device. The substrate has an array of conductive contacts corresponding to the solder bumps of the device to be mounted and joined thereto and can be made of ceramic or organic materials.In an embodiment of the present invention, the substrate is constructed of a plurality of laminated dielectric and conductive layers, e.g. a bismaleimide-triazine (BT) resin or FR-4 board laminate, where individual IC chips are mounted to the top layer of the substrate. A pre-defined metallization pattern lies on each dielectric layer within the substrate. Metallization patterns on certain layers act as voltage reference planes and also provide power to the individual chips. Metallization patterns on other layers route signals between individual chips. Electrical connections to individual terminals of each chip and/or between separate layers are made through vias. Interconnect pins are bonded to metallic pads situated on the face of the substrate and are thereby connected to appropriate metallization patterns existing within the substrate. 
These interconnect pins route electrical signals between a multi-chip integrated circuit package and external devices. The array of conductive contacts on the face of the substrate can be coated with solder alloy to form bond pads or solder bumps corresponding to a particular device. Alternatively, the substrate can be fabricated from ceramic materials, such as silicon, alumina, glass, etc.In Step 102, a thin film substrate fluxer, such as a brush or spray fluxer available from ASYMTEX, is suitably charged for fluxing operations. Flux is then applied to the substrate by either brushing or spraying the flux onto the appropriate portion of the substrate. The amount of applied flux will depend on the size of the device intended to be interconnected on the substrate, the number of terminals on the device, the type of solder employed, the type of flux employed, the reflow temperature employed, the oven atmosphere, the type of substrate, etc. Flux is applied to the substrate over the areas where a solder interconnection is to be made. Such preselected areas on the substrate are generally referred to in the art as the chip pad area. The present invention advantageously enables inspection of the chip pad area for complete coverage of flux.In accordance with the present invention, the applied flux is inspected, Step 104, to determine the coverage and/or uniformity of the applied flux in a given chip pad area. By inspecting the applied flux, the present invention enables improved control over the formation of interconnects between the component and substrate by ensuring adequate flux coverage for the subsequent solder reflow step. 
The amount of flux that will be satisfactory depends on several factors, often requiring empirical determinations.Step 106, thus, indicates a decision point as to whether the applied flux sufficiently covers the flux zone such that a reliable interconnection will be formed during reflow, or whether the fluxing step needs to be repeated or the substrate disposed of. When the applied flux is not adequate, the substrate is disposed of or cleaned of any flux, Step 112. The substrate can be cleaned with a suitable solvent for removing the insufficiently applied flux. Such solvents include aromatics, such as xylene, toluene, terpene, etc. and alcohols, such as methanol, ethanol, isopropanol, tetrahydrofuryl-2-carbionol, etc. or mixtures thereof. After cleaning or stripping the inadequately applied flux, the substrate is ready for re-application of the flux, Step 102. Suitable fluxes include rosin-based fluxes, available from Alphametals of New Jersey, and no-clean fluxes, available from Indium Corporation of New York.When the applied flux is adequate, a component, e.g. a semiconductor device, is provided for packaging, Step 108. The component can be any device having a solder terminal thereon as, for example, an IC made of at least one semiconductor material and having one of a variety of lead-based or lead-free solder bumps on the IC. The invention also contemplates the packaging of a resistor, capacitor, inductor, transistor, or any other electronic component in need of packaging and employing flux.Step 108 further comprises contacting the component and substrate. In this process, a conventional pick and place tool is employed to retrieve a component, precisely determine the placement of the component on the sufficiently fluxed substrate, and place the aligned component in the chip pad area of the substrate. 
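For illustration only, the apply-inspect-clean loop of FIG. 1 (Steps 102 through 112) can be sketched as follows. The function and callback names are hypothetical stand-ins for the fluxer, the inspection step, and the solvent clean, and are not part of the disclosure:

```python
def flux_process(substrate, apply_flux, inspect, clean, max_attempts=3):
    """Apply flux, inspect it, and retry until coverage is adequate.

    apply_flux, inspect, and clean stand in for the fluxer (Step 102),
    the optical inspection and decision point (Steps 104/106), and the
    solvent clean (Step 112), respectively.
    """
    for _ in range(max_attempts):
        apply_flux(substrate)        # Step 102: brush or spray flux
        if inspect(substrate):       # Steps 104/106: coverage adequate?
            return True              # proceed to component placement (Step 108)
        clean(substrate)             # Step 112: strip inadequate flux and retry
    return False                     # dispose of the substrate
```

A real assembly line would gate the pick-and-place step (Step 108) on this result.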
Following assembly, the device/substrate assembly is heated to reflow the solder, thereby activating the applied flux and forming an electrical interconnection between the parts, Step 110.In an embodiment of the present invention, the area covered by the flux will be approximately the same area occupied by an array of conductive contacts, e.g. landing pads, on the substrate. The perimeter of an array of conductive contacts can be used to define the area of the flux zone such that adequate coverage of flux is equal to the area defined by the perimeter of the array of conductive contacts on the substrate. In FIG. 2, an embodiment of the present invention is illustrated where substrate 20 has a thin film of flux 22 in chip pad area 24 over an array of solder pads 26. In an embodiment of the present invention, the applied flux approximately covers the entire area occupied by the conductive contacts.In practicing the present invention, the applied flux is then inspected for uniformity and coverage over the flux zone area, e.g. the chip pad area. Inspecting the applied flux can comprise any optical or electro-optical method that is able to contrast the applied flux with the substrate. For example, certain fluxes are known to contain fluorescent species. The detection and quantification of specific substances by fluorescence emission spectroscopy are founded upon the proportionality between the amount of emitted light and the amount of a fluorescent substance present. Thus, when energy in the form of light, including ultraviolet and visible light, is directed at the flux applied on a substrate, fluorescent substances in the flux will absorb the energy and then emit that energy as light having a longer wavelength than the absorbed light. The emitted light from the flux can be contrasted with the substrate as determined by a photodetector or camera. 
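Because the emitted light is proportional to the amount of fluorescent substance present, coverage can be estimated by thresholding a camera image. A minimal sketch follows; the intensity threshold is an assumed value that would be chosen empirically for a given flux/substrate pair:

```python
import numpy as np

def flux_coverage(image, chip_pad_mask, threshold):
    """Fraction of the chip pad area where fluorescence exceeds threshold.

    image: 2-D array of detected emission intensity from the camera.
    chip_pad_mask: boolean array marking the chip pad area on the substrate.
    """
    flux_mask = image > threshold                       # pixels where flux fluoresces
    covered = np.logical_and(flux_mask, chip_pad_mask).sum()
    return covered / chip_pad_mask.sum()
```

The returned fraction can then be compared against the predetermined acceptance values for the assembly.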
In practice, the light is directed over the entire substrate at a known wavelength, and the detector can be optimized to detect the fluorescent species in the flux composition.In the event that a particular flux does not fluoresce or the contrast between the flux and substrate is not sufficiently defined, conventional fluorescent agents can be added to the flux composition to enhance contrast. A determination of uniformity and coverage of the flux zone containing a natural or added fluorescent agent in the flux composition can be made when the concentration of the agent is as low as several parts per million (ppm), or parts per billion (ppb), and at times as low as parts per trillion (ppt). In an embodiment of the present invention, the amount of a fluorescent species added to the flux composition should be sufficient to provide a concentration of the species of from about 50 ppt to about 10 ppm. The capability of measuring very low levels is an immense advantage. Such fluorescence analyses can be made in-line (i.e. during the fluxing operation), practically on an almost instant and continuous basis, with conventional equipment.In accordance with the present invention, inspecting the uniformity and/or coverage of flux applied on a substrate can comprise any optical or electro-optical method that is able to contrast the applied flux with the substrate. Separately or in addition to the fluorescent method described above, the present invention also contemplates the use of electro-optical spectroscopy, e.g. an infrared sensor or camera, to distinguish the applied flux from the substrate. The contrast between the flux and substrate can be carried out employing a conventional thermographic infrared camera. Such a camera typically uses a thermographic infrared sensor to capture an infrared image. Localized changes in temperature caused by infrared irradiation are detected by the thermographic infrared sensor. 
The sensor detects localized changes in temperature through changes in a value of a physical property of the sensor, such as localized changes in electrical resistance, electromotive force, or electrical charge. Should the temperature difference between the flux and substrate not be sufficient for contrast, then the fluxed substrate can be heated prior to analysis.In an embodiment of the present invention, the fluxed substrate can be irradiated with heat prior to inspecting the flux with an infrared camera. Since the substrate and flux have different thermal conductivity, the absorption or release of heat can provide a sufficient temperature distribution to distinguish the flux on the substrate. A conventional infrared camera and temperature distribution-detecting means can then be employed to detect the two-dimensional temperature distribution of the flux coating and send an output signal to an imaging means, e.g. a computer.As illustrated in the embodiment of FIG. 3, an optical or electro-optical camera 30 is positioned over substrate 32 to inspect a film of applied flux 34 overlaying an array of solder pads (not shown for illustrative convenience). The present invention contemplates inspecting the applied flux to determine whether the coverage is satisfactory for a particular assembly. As discussed above, the amount of flux that will be satisfactory depends on several factors, often requiring empirical determinations. In practicing the present invention, the adequacy of applied flux can be determined by comparing the area covered by the flux to the area defined by the conductive contacts on the substrate, i.e. the flux zone or chip pad area. Thus, once satisfactory coverage has been determined for a given device/substrate assembly, the packaging process can be controlled such that when the coverage of the flux falls below or above predetermined values the process is interrupted and corrected according to the steps shown in FIG. 
1. In an embodiment of the present invention, a TAC 10 flux, available from Indium Corporation, is applied over a ceramic substrate having an array of solder pads thereon for mounting a semiconductor device, e.g. such as a bumped IC die. The flux covering the area defined by the perimeter of the array of solder pads on the substrate is from about 50% to about 150% of the area, e.g. approximately 100% of the area defined by the perimeter of the array of solder pads on the substrate.When the applied flux falls within the predetermined values, the device and substrate are assembled and an electrical interconnection is formed between the device and the substrate by the application of heat. The heat can be generated by infrared radiation, a flow of dry heated gas, such as in a belt furnace, or a combination thereof, to reflow the solder and interconnect the device and substrate. In an embodiment of the present invention, the assembly is reflowed by heating a laminate substrate from about 220° C. to about 270° C. with a combined infrared/convection heater. When the substrate is made of a ceramic material, the electrical and mechanical interconnect between the die and substrate can be formed by reflowing the solder pads at a relatively higher temperature, such as about 350° C. to about 370° C., to form an interconnected package.The process steps and structures described above do not form a complete process flow for manufacturing device assemblies or the packaging of integrated semiconductor devices. The present invention can be practiced in conjunction with electronic package fabrication techniques currently used in the art, and only so much of the commonly practiced process steps are included as are necessary for an understanding of the present invention. 
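The predetermined 50% to 150% acceptance window described above can be expressed as a simple test. This is an illustrative sketch; the bounds are parameters that would be tuned per device/substrate assembly, as noted in the text:

```python
def coverage_adequate(flux_area, pad_array_area, low=0.5, high=1.5):
    """True when the fluxed area is within about 50% to about 150% of the
    area defined by the perimeter of the array of solder pads."""
    ratio = flux_area / pad_array_area
    return low <= ratio <= high
```

When the ratio falls outside the window, the process would be interrupted and corrected (re-flux or clean) rather than proceeding to assembly.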
The figures representing cross-sections of portions of electronic package fabrication are not drawn to scale, but instead are drawn to illustrate the features of the present invention.While this invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
A method of resolving simultaneous branch predictions prior to validation of the predicted branch instructions is disclosed. The method includes processing two or more predicted branch instructions, with each predicted branch instruction having a predicted state and a corrected state. The method further includes selecting one of the corrected states. Should one of the predicted branch instructions be mispredicted, the selected corrected state is used to direct future instruction fetches.
CLAIMS What is claimed is: 1. A method of resolving simultaneous predicted branch instructions prior to validation of the predicted branch instructions, the method comprising: processing two or more predicted branch instructions, each predicted branch instruction having a predicted state and a corrected state, said predicted branch instructions entering a resolution stage simultaneously; selecting one of the corrected states from one of said predicted branch instructions; determining that at least one of the predicted branch instructions has mispredicted; and, directing future instruction fetches based on the selected corrected state. 2. The method of claim 1 wherein the predicted state further comprises a branch direction. 3. The method of claim 1 wherein the predicted state further comprises a mode of a processor. 4. The method of claim 1 wherein the predicted state further comprises a target address. 5. The method of claim 1 wherein the selected corrected state corresponds to the oldest one of the predicted branch instructions. 6. The method of claim 1 wherein the selected corrected state is randomly chosen. 7. The method of claim 1 wherein the selected corrected state is based on which pipeline last predicted. 8. The method of claim 1 wherein the selected corrected state is based on which pipeline last mispredicted. 9. The method of claim 1 wherein the selected corrected state is based on a type of the predicted branch instruction. 10. 
A method of resolving simultaneous predicted branch instructions prior to validation of the predicted branch instructions in a plurality of pipelines, the method comprising: processing two or more predicted branch instructions, each predicted branch instruction having a predicted state and a corrected state, said predicted branch instructions entering a resolution stage in separate pipelines simultaneously; selecting one of the corrected states from one of said predicted branch instructions; determining that at least one of the predicted branch instructions has mispredicted; and, directing future instruction fetches based on the selected corrected state. 11. The method of claim 10 wherein the predicted state further comprises a branch direction. 12. The method of claim 10 wherein the predicted state further comprises a mode of a processor. 13. The method of claim 10 wherein the predicted state further comprises a target address. 14. The method of claim 10 wherein the selected corrected state corresponds to the oldest one of the predicted branch instructions. 15. The method of claim 10 wherein the selected corrected state is randomly chosen. 16. The method of claim 10 wherein the selected corrected state is based on which pipeline last predicted. 17. The method of claim 10 wherein the selected corrected state is based on which pipeline last mispredicted. 18. The method of claim 10 wherein the selected corrected state is based on a type of the predicted branch instruction. 19. 
A system for resolving simultaneous predicted branch instructions prior to validation of the predicted branch instructions comprising: prediction logic configured to predict multiple branch instructions, each predicted branch instruction having a predicted state and a corrected state; resolution logic configured to determine when two or more predicted branch instructions reach a resolution stage simultaneously, said resolution logic selecting one of the corrected states from one of said predicted branch instructions when at least one of said branch instructions has mispredicted; and fetch logic configured to fetch instructions based on said selected corrected state. 20. The system of claim 19 wherein the predicted state further comprises a branch direction. 21. The system of claim 19 wherein the system is a processor. 22. The system of claim 20 wherein the predicted state further comprises a mode of said processor. 23. The system of claim 19 wherein the predicted state further comprises a target address. 24. The system of claim 19 wherein the selected corrected state corresponds to the oldest one of the predicted branch instructions. 25. The system of claim 19 wherein the selected corrected state is randomly chosen. 26. The system of claim 19 wherein the selected corrected state is based on which pipeline last predicted. 27. The system of claim 19 wherein the selected corrected state is based on which pipeline last mispredicted. 28. The system of claim 19 wherein the selected corrected state is based on a type of the predicted branch instruction.
METHODS AND SYSTEM FOR RESOLVING SIMULTANEOUS PREDICTED BRANCH INSTRUCTIONS

BACKGROUND

FIELD OF INVENTION

The present invention relates generally to computer systems, and more particularly to techniques for resolving simultaneous predicted branch instructions.

RELEVANT BACKGROUND

At the heart of the computer platform evolution is the processor. Early processors were limited by the technology available at the time. Advances in fabrication technology have allowed transistor designs to be reduced to 1/1000th or less of the size of those in early processors. These smaller processor designs are faster, more efficient and use substantially less power while delivering processing power exceeding prior expectations.

As the physical design of the processor has evolved, innovative ways of processing information and performing functions have also emerged. For example, "pipelining" of instructions has been implemented in processor designs since the early 1960s. One example of pipelining is the concept of breaking an execution pipeline into units, through which instructions flow sequentially in a stream. The units are arranged so that several units can simultaneously process the appropriate parts of several instructions. One advantage of pipelining is that the execution of the instructions is overlapped because the instructions are evaluated in parallel. Pipelining is also referred to as instruction level parallelism (ILP).

A processor pipeline is composed of many stages, where each stage performs a function associated with executing an instruction. Each stage is referred to as a pipe stage or pipe segment. The stages are connected together to form the pipeline. Instructions enter at one end of the pipeline and exit at the other end.

Although pipeline processing continued to be implemented in processor designs, it was initially constrained to executing only one instruction per processor cycle. 
In order to increase the processing throughput of the processor, more recent processor designs incorporated multiple pipelines capable of processing multiple instructions simultaneously. This type of processor with multiple pipelines may be classified as a superscalar processor.

Within a processor, certain types of instructions such as conditional branch instructions may be predicted. Branch prediction hardware within the processor may be designed to provide predictions for conditional branch instructions. Based on the prediction, the processor will either continue executing the next sequential instruction or be directed to a subsequent instruction to be executed.

A superscalar processor utilizing branch prediction hardware may encounter and resolve two or more predicted branch instructions simultaneously within the same clock cycle, in the same or separate pipelines. Commonly in such applications, the processor had to wait until both branch predictions were fully resolved and the oldest mispredicting branch identified before taking any remedial steps in case of a misprediction. There exists a need to decouple the selection of a corrected state from the determination of the oldest mispredicted branch for a high-speed processor encountering multiple branch predictions.

SUMMARY

The present disclosure recognizes this need and discloses a processor which processes simultaneous branch instruction predictions by anticipating the appropriate action and taking steps toward fulfilling that action before the full resolution of all the simultaneous branch instruction predictions is available.

A method of resolving simultaneous predicted branch instructions prior to validation of the predicted branch instructions is disclosed. The method first comprises processing two or more predicted branch instructions. Each predicted branch instruction has a predicted state and a corrected state. 
The predicted branch instructions simultaneously enter a resolution stage and one of the corrected states from one of the predicted branch instructions is selected. The method further verifies that at least one of the predicted branch instructions has mispredicted, and the selected corrected state is used to direct future instruction fetches.

A method of resolving simultaneous predicted branch instructions prior to validation of the predicted branch instructions in a plurality of pipelines first comprises processing two or more predicted branch instructions. Each predicted branch instruction has a predicted state and a corrected state. The predicted branch instructions enter a resolution stage in separate pipelines simultaneously and one of the corrected states is selected from one of the predicted branch instructions. The method further verifies that at least one of the predicted branch instructions has mispredicted and the selected corrected state is used to direct future instruction fetches.

A system for resolving simultaneous predicted branch instructions prior to the validation of the predicted branch instructions comprises prediction logic configured to predict multiple branch instructions. Each predicted branch instruction has a predicted state and a corrected state. The system also has resolution logic configured to determine when two or more of the predicted branch instructions reach a resolution stage simultaneously. The resolution logic then selects one of the corrected states from one of the predicted branch instructions. 
The system also has fetch logic configured to fetch future instructions based on the selected corrected state.

A more complete understanding of the present invention, as well as further features and advantages of the invention, will be apparent from the following detailed description and the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows a high level logic hardware block diagram of a processor using one embodiment of the present invention.

Figure 2 shows a lower level logic block diagram of a superscalar processor utilizing one embodiment of the present invention.

Figure 3 shows a flow chart of a resolution stage in a pipeline of the processor of Figure 1.

Figure 4 shows a flow chart of a multiple simultaneous branch resolution flow of Figure 3.

DETAILED DESCRIPTION

The detailed description set forth below in connection with the appended drawings is intended as a description of various embodiments of the present invention and is not intended to represent the only embodiments in which the present invention may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the present invention. Acronyms and other descriptive terminology may be used merely for convenience and clarity and are not intended to limit the scope of the invention.

In a superscalar processor, the processor's internal resources are designed to facilitate parallel processing. Several facets of the design include instruction pre-fetching, branch processing, resolution of data dependencies involving register values, initiation of instructions and the like. 
Because processors operate faster than most memory devices, program instructions cannot be read directly from memory fast enough to properly utilize the full potential of the processor.

An instruction cache is a specialized memory designed to bridge the speed gap between traditional memory and the processor. Instructions fetched from memory are placed in the faster instruction cache, which can be read at processor clock speeds. Fetched instructions may be the next sequential instructions in the program or a target of a predicted taken branch. When the next instruction is the target of a predicted branch, the processor attempts to predict where the branch will go and fetch the appropriate instructions in advance. If the branch prediction is incorrect, the processor corrects its instruction processing by purging instructions fetched down the predicted branch path, and resumes fetching instructions down the corrected branch path. This process is described in greater detail in the discussion of Figures 2, 3 and 4.

Figure 1 shows a high level view of a superscalar processor 100 utilizing one embodiment as hereinafter described. The processor 100 has a central processing unit (CPU) 102 that is connected via a dedicated high speed bus 104 to an instruction cache 106. The CPU also has another separate high speed bus 110 that connects to a data cache 108. The instruction cache 106 and data cache 108 are also connected via a general purpose bus 116 to input/output ports (I/O) 112 and memory 114.

Within the processor 100, an Instruction Fetch Unit (IFU) 122 controls the loading of instructions from memory 114 into the instruction cache 106. Once the instruction cache 106 is loaded with instructions, the CPU 102 is able to access the instructions via the high speed bus 104. The instruction cache 106 may be a separate memory structure as shown in Figure 1, or may be integrated as an internal component of the CPU 102. 
The integration may hinge on the size of the instruction cache 106 as well as the complexity and power dissipation of the CPU 102.

Instructions may be fetched and decoded from the instruction cache 106 several instructions at a time. Within the instruction cache 106, instructions are grouped into sections known as cache lines. Each cache line may contain multiple instructions. The number of instructions fetched may depend upon the required fetch bandwidth as well as the number of instructions in each cache line. In one embodiment, the CPU 102 loads four instructions from the instruction cache 106 into an upper pipeline 250 in the IFU 122 during each clock cycle. Within the upper pipeline 250, the instructions are analyzed for operation type and data dependencies. After analyzing the instructions, the processor 100 may distribute the instructions from the upper pipe 250 to lower functional units or pipelines 210 and 220 for execution.

The instructions may be sent to lower pipelines 210 or 220 depending on the instruction function, pipe availability, instruction location within the group of instructions loaded from the instruction cache 106 and the like. Within the lower pipelines 210 and 220, the instructions are processed in parallel based on available resources rather than original program sequence. This type of processing is often referred to as dynamic instruction scheduling.

Lower pipelines 210 and 220 may contain various Execution Units (EU) 118 such as arithmetic logic units, floating point units, store units, load units and the like. For example, an EU 118 such as an arithmetic logic unit may execute a wide range of arithmetic functions, such as integer addition, subtraction, simple multiplication, bitwise logic operations (e.g. AND, NOT, OR, XOR), bit shifting and the like. 
After an instruction finishes executing, the CPU 102 takes the instruction results and reorders them into the proper sequence so the instruction results can be used to correctly update the processor 100.

Most programs executed by the processor 100 may include conditional branch instructions. The actual branching behavior of a conditional branch instruction is not known until the instruction is executed deep in the lower pipeline 210 or 220. To avoid a stall that might result from waiting for the final execution of the branch instruction and subsequently having to fetch instructions based on its results, the processor 100 may employ some form of branch prediction. Using branch prediction, the processor 100 may predict the branching behavior of conditional branch instructions in the upper pipe 250. Based on the predicted branch evaluation, the processor 100 speculatively fetches and prepares to execute instructions from a predicted address: either the branch target address (if the branch is predicted taken) or the next sequential address after the branch instruction (if the branch is predicted not taken).

One example of a conditional branch instruction is the simple assembler instruction jump not equal (JNE). When the JNE instruction is executed, a particular value may be loaded into a register; if the value is equal to zero, the conditional branch is not taken and the next instruction in sequence is fetched and executed. However, if the value in the register is not equal to zero, the conditional branch is considered taken and the next instruction fetched is located at a target address associated with the JNE instruction. 
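The taken/not-taken decision of a JNE-style branch described above can be sketched in a few lines. This is an illustrative model only; the function name, fixed instruction width and addresses are assumptions for the example, not part of any actual processor implementation.

```python
# Hypothetical sketch of the taken/not-taken decision for a JNE-style
# conditional branch. All names and values are illustrative.

INSTR_SIZE = 4  # assumed fixed instruction width in bytes

def jne_next_pc(register_value, pc, target_address):
    """Return the address of the next instruction to fetch.

    JNE branches to the target when the tested register is not equal
    to zero; otherwise execution falls through to the next sequential
    instruction.
    """
    if register_value != 0:
        return target_address      # branch taken
    return pc + INSTR_SIZE         # branch not taken: fall through

print(hex(jne_next_pc(7, pc=0x1000, target_address=0x2000)))  # 0x2000 (taken)
print(hex(jne_next_pc(0, pc=0x1000, target_address=0x2000)))  # 0x1004 (not taken)
```

A branch predictor tries to guess this outcome before `register_value` is actually known, so that fetch can proceed speculatively down one of the two paths.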
The target address could have been previously associated with the JNE instruction on a previous execution of the JNE instruction.

When predicting branch instructions, several conditions or "states" may be predicted. For example, branch direction, target addresses or a particular state of the processor 100 such as processor mode may be predicted. Predicting the processor mode may entail predicting what mode the processor 100 will be in after the execution of the branch instruction. For example, in the advanced RISC processor architecture, instructions may be executed in either ARM mode or Thumb mode.

One possible way of predicting the direction of a conditional branch is to utilize a branch history table. A branch history table may be a simple look up table that stores the history of a number of the previous branches. One branch history table may store the directions of the 1024 most recent conditional branches. A complex algorithm may be written to make a branch prediction based on a hierarchy of prediction techniques (multilevel branch predictors).

Figure 2 displays a lower level functional block diagram 200 of the upper pipe 250 and two lower pipelines 210 and 220 within the processor 100, processing instructions in accordance with one aspect of the present invention. The different logic blocks (or stages) within the functional hardware block diagram 200 may contain hardware, firmware or a combination of both. The functional hardware block diagram 200 consists of the upper pipe 250 and the two lower pipelines 210 and 220. As previously mentioned, the stages for the upper pipe 250 may reside within the IFU 122. Within the upper pipe 250 are a fetch stage 202, an instruction cache stage 204 and an instruction decode stage 206. Also associated with the upper pipe 250 is branch prediction logic 208.

The first stage in the upper pipe 250 is the fetch stage 202. The fetch stage 202 controls the selection of the next group of instructions to be retrieved. 
After the processor 100 powers up, the fetch stage 202 determines that initialization instructions are to be retrieved and loaded. As is described in connection with the discussions of Figure 3 and Figure 4, the fetch stage 202 may also receive feedback from the lower pipelines 210 and 220. The feedback may influence the selection of future instructions and the order in which the instructions are to be executed.

In the instruction cache stage 204, the instruction address selected during the fetch stage 202 is used to access the instruction cache 106 to determine if the instructions at that address are present. If there is an instruction cache hit, the CPU 102 retrieves instructions from the instruction cache 106 into the upper pipe 250, allowing the processor 100 to fetch instructions at processor speed without going back to memory 114. If there is an instruction cache miss (i.e. the instructions to be fetched are not available from the instruction cache 106), the IFU 122 retrieves the instructions from memory 114, loads them into the instruction cache 106, and transfers them to the CPU 102. After the instructions are retrieved during the instruction cache stage 204, the instructions are analyzed during the instruction decode stage 206.

During the instruction decode stage 206, information pertaining to the various instructions is analyzed and processed. For example, within the instruction decode stage 206, the processor 100 may determine the type of instruction (e.g. move, store, load, jump, and the like). If the instruction is a conditional branch instruction, branch prediction logic 208 will be invoked. The instruction decode stage 206 communicates with the branch prediction logic 208, informing the branch prediction logic 208 that it has encountered a branch instruction.

As part of the branch prediction, the branch prediction logic 208 provides a predicted state. 
Information stored in the predicted state may include a predicted branch direction, a predicted target address, or a predicted state of the processor 100. This information may be stored in a register, group of registers, or memory location associated with the branch instruction. In one aspect of the present invention, the predicted state may contain only the predicted branch direction. In another embodiment, the predicted state may contain information relating only to the predicted branch direction and the predicted target address. In a further embodiment, the predicted state may contain information relating to the predicted target address and predicted processor mode. In yet another embodiment, the predicted state may contain information for the predicted branch direction, the predicted target address and the predicted processor mode.

When a branch direction is predicted, the predicted state may contain information predicting the branch as taken or not taken. In one embodiment, the predicted state may be a single bit. For example, a "1" stored in a bit location within a register or memory location associated with the predicted state may indicate the branch as predicted taken. Conversely, if a "0" were stored at that bit location within the register or memory location, the branch may be predicted as not taken.

If a target address is predicted, the predicted state may contain a target address indicating the location where the next instruction is to be fetched. The size of the target address may be dependent on the architecture of the processor 100. In one embodiment, the target address may be a 32-bit address identifier stored in a register associated with the predicted state.

When a processor mode is predicted, the predicted state may contain information relating to the predicted mode the processor 100 will be in once the conditional branch is executed. 
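The predicted-state fields described above (a direction bit, a 32-bit target address and a mode bit) can be pictured as a small record. The class name, field names and bit layout below are illustrative assumptions for the sketch, not an actual register encoding of any processor.

```python
from dataclasses import dataclass

# Illustrative model of a predicted state: a direction bit, a 32-bit
# target address and a processor-mode bit (e.g. ARM vs. Thumb mode).
# The packing order is hypothetical.
@dataclass
class PredictedState:
    taken: bool        # "1" = predicted taken, "0" = predicted not taken
    target: int        # 32-bit predicted target address
    thumb_mode: bool   # predicted processor mode after the branch

    def encode(self) -> int:
        """Pack the fields into one integer: [mode | taken | target]."""
        word = self.target & 0xFFFFFFFF
        word |= int(self.taken) << 32
        word |= int(self.thumb_mode) << 33
        return word

state = PredictedState(taken=True, target=0x2000, thumb_mode=False)
print(hex(state.encode()))  # 0x100002000
```

Whether the predicted state carries one, two or all three of these fields corresponds to the different embodiments enumerated above.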
For example, based on the prediction for processor mode, the processor may take steps such as performing the instruction decode differently (i.e. ARM decode of the instruction versus Thumb decode of the instruction). The predicted state for processor mode may be a single bit value stored in a register or memory location.

Complementing the predicted state, the branch prediction logic 208 also calculates and stores a corrected state associated with the predicted branch instruction. The corrected state contains information in case the prediction is incorrect. Information stored as part of the corrected state may contain a recovery address and the previous state of the processor. The corrected state may be used by the processor 100 to recover the proper instruction order sequence in case of a branch misprediction.

As a result of the prediction made by the branch prediction logic 208, information is provided to the fetch logic within the fetch stage 202 to direct subsequent instruction fetches. The predicted state is used by the fetch logic to retrieve the appropriate instructions based on the prediction. For example, if the predicted state contains a target address, the fetch logic retrieves the next instruction from the instruction cache 106 located at that target address. Should the instruction not be available in the instruction cache 106, the fetch logic loads the instruction from memory 114 into the instruction cache 106 and then loads the instruction into the upper pipe 250.

It is not uncommon to encounter another branch instruction requiring a prediction before the initial branch prediction has been resolved. In this instance, the processor 100 keeps track of each of the predictions that are performed by the branch prediction logic 208. This tracking includes identifying which prediction came first. One way of tracking the "age" of the prediction is to utilize an instruction order value associated with each conditional branch instruction. 
As each predicted state is assigned, the instruction order value is also assigned, stored or carried with the branch instruction. Once the prediction logic has performed the prediction or has determined that the current instruction in the instruction decode stage 206 requires no prediction, the instruction is passed on to the appropriate lower pipeline 210 or 220.

As described previously, the lower pipelines 210 and 220 may be associated with certain types of instructions. For example, a pipeline may be designed only to execute instructions of an arithmetic nature or to handle all of the load/store functionality. In order to send a predicted branch instruction to a pipeline, the pipeline must be designed to handle branch instructions. As shown in Figure 2, both lower pipelines 210 and 220 are configured to handle branch instructions. The lower pipelines 210 and 220 may also be designed to execute multiple instructions during each processor cycle. Thus, within the lower pipelines 210 and 220, multiple branch instructions may be executed during the same processor cycle.

Once the instructions enter the appropriate lower pipeline 210 or 220, the instructions, such as branch instructions, may be rearranged to facilitate a more efficient execution. If a branch instruction reaches the lower pipeline 210 or 220 and needs further information or data to continue execution, the processor 100 may execute another instruction or group of instructions before executing the branch instruction. In this case, the branch instruction may be held in a reservation station (not shown) until the information necessary to facilitate execution is available. For example, the branch instruction may be held in the reservation station if the branch instruction branches to a target address stored in a particular register and the target address is not yet available. The value of the target address may be determined as a function of another subsequently executed instruction. 
The branch instruction is held until the subsequent instruction executes, updates the particular register and the target address becomes available. After the target address becomes available, the branch instruction is released for further execution. Instructions executed in this manner are executed in parallel based on available resources rather than original program sequence. After the instructions have executed in the lower pipelines 210 and 220, the results are collected and reordered into the proper sequence so the processor 100 may be updated correctly.

Within the reservation station, several instructions may be held at the same time, each instruction waiting for further information, processor resources, and the like. The processor 100 may commonly release multiple instructions from the reservation station during the same processor cycle. Thus, it is possible that multiple branch instructions may be released from the reservation station during the same processor cycle.

The processor 100 continues to monitor the instructions as they are executed in the lower pipelines 210 and 220. When a branch instruction has been released from the reservation station or is ready for final execution, the processing of the prediction associated with the branch instruction is performed by resolution logic 225 within a resolution stage 215 of each lower pipeline 210 and 220. The resolution stage 215 will be described in connection with the discussion of Figure 3.

The resolution logic 225 verifies the correctness of the predicted state and selects the corrected state in the event of a misprediction. For example, if the predicted state is a target address and the target address does not match the actual target address determined, a mispredict occurs. In the case of a misprediction, the resolution logic 225 provides feedback to the fetch stage 202, including information identifying the instructions needing to be flushed as well as the corrected state. 
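The check performed by the resolution logic 225 can be sketched as a comparison of the predicted target against the resolved target, producing flush/redirect feedback on a mismatch. The function name and the shape of the feedback record are assumptions for illustration only.

```python
# Hypothetical sketch of the resolution logic's check: compare the
# predicted target against the actual resolved target and, on a
# mismatch, hand the corrected state back to the fetch logic.
def resolve_branch(predicted_target, actual_target, corrected_state):
    """Return (mispredicted, feedback_for_fetch_logic)."""
    if predicted_target == actual_target:
        # Prediction correct: speculatively fetched instructions stand.
        return False, None
    # Mispredict: instructions fetched down the wrong path must be
    # flushed, and fetching resumes from the corrected state.
    return True, {"flush_younger": True, "redirect_to": corrected_state}

print(resolve_branch(0x2000, 0x2000, corrected_state=0x1004))
print(resolve_branch(0x2000, 0x3000, corrected_state=0x1004))
```

The same comparison applies to a predicted direction or a predicted processor mode; only the compared fields change.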
The flushed instructions are instructions previously fetched based on the incorrect prediction. After the appropriate instructions are flushed, the fetch logic starts re-fetching instructions based on the corrected state. If the resolution logic 225 determines that a prediction was correct, it takes no action and the instructions speculatively fetched (based on the prediction) are subsequently executed.

Figure 3 shows a flow chart describing the process flow 300 associated with the resolution stage 215 in either lower pipeline 210 or 220. The process flow 300 begins at start block 302 when the predicted branch instructions have had all their dependencies resolved. A branch dependency is resolved when all the operands upon which the prediction is based are available. This resolution occurs in either of the lower pipelines 210 and 220 in the resolution stage 215.

At decision block 312, a decision is made whether there are multiple predicted branches entering the branch resolution stage 215 simultaneously. As discussed previously, multiple branch instructions may enter the resolution stage 215 during the same processor cycle in the same lower pipeline 210 or 220. One aspect of the present invention resolves multiple branch predictions simultaneously in the same lower pipeline 210 or 220. In an alternative embodiment, a branch prediction entering the resolution stage 215 in lower pipeline 210 and a branch prediction entering the resolution stage 215 of lower pipeline 220 may be resolved simultaneously. Should the processor 100 have additional lower pipelines, another embodiment of the present invention may resolve multiple branch predictions in one of the lower pipelines.

The processor 100 monitors both lower pipelines 210 and 220 to make this assessment. If the processor 100 determines that there are multiple branch predictions entering the resolution stage 215 simultaneously, the process flow 300 is directed to a multiple simultaneous branch resolution flow 320. 
The multiple simultaneous branch resolution flow 320 determines how to resolve two or more predicted branch instructions simultaneously entering the branch resolution stage 215 and is discussed further in connection with Figure 4. If only one branch prediction is entering the resolution stage 215, the process flow 300 continues to decision block 314.

At decision block 314, the branch prediction results are analyzed to determine if the branch was correctly predicted. If the prediction for the conditional branch instruction was correct at decision block 314 (e.g. the predicted target address of a conditional branch instruction matches a resolved target address), the remaining instructions in the lower pipelines 210 and 220 as well as the upper pipe 250 have been correctly predicted, and the process flow 300 is then directed to finish block 350.

If, at decision block 314, the branch prediction results show that a mispredict has occurred (e.g. the predicted target address does not match the resolved target address), all instructions younger than the mispredicted instruction (based on the instruction order value of the branch prediction) are flushed from the upper pipe 250 and the lower pipelines 210 and 220, as indicated at block 316. The process flow 300 proceeds to block 318 where the branch's corrected state information is fed to the fetch logic within the fetch stage 202. The fetch logic fetches instructions based on the branch's corrected state.

Figure 4 illustrates the multiple simultaneous branch resolution process flow 320 in further detail. The multiple simultaneous branch resolution process flow 320 begins when two or more predicted branches reach the resolution stage 215 during the same processor cycle. When two or more predicted branches enter the resolution stage 215 simultaneously, the processor 100 handles the resolution of both branches during the same processor cycle. 
This resolution includes determining if either branch has mispredicted and taking the appropriate actions such as redirecting a pipeline.

As shown in Figure 4, the multiple simultaneous branch resolution process flow 320 first chooses one of the resolving branches' corrected states at block 402. The selection of one of the corrected states occurs before the multiple simultaneous branch resolution process flow 320 determines if a mispredict has occurred. By selecting one of the corrected states early in the multiple simultaneous branch resolution process flow 320, additional time may be saved by anticipating a mispredict. If a mispredict has not occurred, no additional processing time has been lost by this selection.

The selection of the particular branch's corrected state can be based on several factors. In one embodiment, the selection of the corrected state is based on the relative ages of the resolving branches and the oldest branch instruction is selected. In an alternative embodiment, the selection of the corrected state may be based on which of the lower pipelines 210 or 220 mispredicted most recently. In yet another embodiment, the selection may be based on which lower pipeline 210 or 220 made the last prediction. In a further embodiment, the type of instruction may be used as a basis for selecting the corrected state. Alternatively, the corrected state may be chosen at random. Regardless of the selection process, the selected corrected state will be used to steer the fetch logic for future instruction fetches in the case of a mispredict.

The selection of the corrected state may have an impact on the execution speed of the processor 100. Depending on the processor design, each of the aforementioned aspects may have certain speed advantages. For example, choosing a corrected state based on the lower pipeline 210 or 220 that last made a prediction may perform faster than determining which of the lower pipelines 210 or 220 last mispredicted. 
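The selection policies enumerated above (oldest branch, last-mispredicting pipeline, random, and so on) can be sketched as a single selection function. The tuple layout, policy names and function signature are illustrative assumptions, not an actual hardware interface.

```python
import random

# Illustrative selection of one corrected state from several branches
# resolving in the same cycle. Each branch is modeled as a tuple of
# (instruction_order_value, pipeline_id, corrected_state); a lower
# order value means an older instruction.
def select_corrected_state(branches, policy="oldest",
                           last_mispredicting_pipe=None):
    if policy == "oldest":
        chosen = min(branches, key=lambda b: b[0])
    elif policy == "last_mispredicted":
        # Prefer the branch in the pipeline that mispredicted last.
        chosen = next(b for b in branches if b[1] == last_mispredicting_pipe)
    elif policy == "random":
        chosen = random.choice(branches)
    else:
        raise ValueError("unknown selection policy")
    return chosen[2]  # the corrected state of the selected branch

branches = [(12, "pipe0", 0x4000), (9, "pipe1", 0x5000)]
print(hex(select_corrected_state(branches, policy="oldest")))  # 0x5000
print(hex(select_corrected_state(branches, policy="last_mispredicted",
                                 last_mispredicting_pipe="pipe0")))  # 0x4000
```

Note that the selection runs before the mispredict check, so a cheap-to-compute policy can shorten the critical path even if its guess is occasionally wrong.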
The validity of the corrected state is verified later in the multiple simultaneous branch resolution process flow 320.

One advantage of a processor 100 utilizing the embodiments described previously is that the processor 100 anticipates that the oldest branch prediction was mispredicted. Accordingly, the processor 100 takes the appropriate action, such as flushing the upper pipe 250 and the lower pipelines 210 and 220, instead of reducing the processor frequency to accommodate the extra time needed to determine exactly which branch has mispredicted in a single cycle. If the processor 100 chooses the correct mispredicting branch, a higher clock frequency may be achieved since the processor 100 starts flushing and refetching instructions without waiting to determine which branch prediction corresponds to the oldest mispredicting branch. The increased clock rate afforded to the processor 100 far outweighs any clock cycles lost due to choosing the wrong corrected state.

After the corrected state information has been selected at block 402, the multiple simultaneous branch resolution process flow 320 proceeds to decision block 404. At decision block 404, the resolution logic 225 determines if a mispredict for any branch instruction has occurred. If no misprediction has occurred at decision block 404, the multiple simultaneous branch resolution process flow 320 ends at block 450. In this case, both predictions were correct, the instructions loaded into the upper pipe 250 and lower pipelines 210 and 220 are valid, and no correction is necessary.

If a mispredict has occurred, the multiple simultaneous branch resolution process flow 320 proceeds to block 406 where all of the instructions in the upper pipe 250 are flushed. Since all instructions in the upper pipe 250 are still in program order, they are younger than either resolving branch. 
Because instructions in the upper pipe 250 are younger than either resolving branch instruction, they were fetched down the mispredicted path and are flushed.

After the upper pipe 250 has been flushed at block 406, the multiple simultaneous branch resolution process flow 320 continues to block 408. At block 408, the fetch logic uses the corrected state of the selected branch to redirect fetching in the upper pipe 250. The multiple simultaneous branch resolution process flow 320 continues to decision block 410. At decision block 410, the choice of corrected state information is verified by checking if the mispredicted branch instruction corresponds to the branch instruction whose corrected state had been selected. If the verification is successful at decision block 410, at block 418 the processor 100 flushes the more recent instructions from the lower pipelines 210 and 220 based on the instruction order value. From block 418, the multiple simultaneous branch resolution process flow 320 proceeds to block 450 and ends.

If at decision block 410 the verification failed (i.e., the oldest mispredicting branch was not selected), all instructions are flushed again from the upper pipe 250 at block 412. The flushing of instructions at block 412 effectively removes the instructions fetched at block 408 from the upper pipe 250. The second branch's corrected state information is then fed to the fetch logic at block 414. The fetch logic fetches instructions based on the second branch's corrected state. All of the instructions younger than the second branch prediction (based on instruction order value) are flushed from the lower pipelines 210 and 220 at block 416.
After the instructions are flushed from the lower pipelines 210 and 220, the multiple simultaneous branch resolution process flow 320 ends at block 450.

The various illustrative logical blocks, modules, circuits, elements, and/or components described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art appreciate that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiments shown, and that the invention has other applications in other environments. This application is intended to cover any adaptations or variations of the present invention. The following claims are in no way intended to limit the scope of the invention to the specific embodiments described herein.
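For illustration only, the multiple simultaneous branch resolution flow 320 described above (blocks 402 through 450) can be sketched in software. The `Branch` record and the `flush_upper`, `flush_lower_younger_than`, and `refetch` callbacks are hypothetical stand-ins for hardware actions, and the age-based selection at block 402 is just one of the selection embodiments described above; real resolution logic is implemented in circuitry, not code.

```python
# Hypothetical software sketch of resolution flow 320; hardware actions
# are modeled as callbacks supplied by the caller.
from dataclasses import dataclass

@dataclass
class Branch:
    order: int            # instruction order value (smaller = older)
    mispredicted: bool
    corrected_state: str  # redirect target for the fetch logic

def resolve(branch_a, branch_b, flush_upper, flush_lower_younger_than, refetch):
    # Block 402: speculatively select one corrected state (here, the
    # older branch) before knowing whether a mispredict occurred.
    selected = branch_a if branch_a.order < branch_b.order else branch_b

    # Block 404: if neither branch mispredicted, nothing to correct.
    if not (branch_a.mispredicted or branch_b.mispredicted):
        return  # block 450

    # Blocks 406-408: flush the upper pipe and redirect fetching using
    # the selected branch's corrected state.
    flush_upper()
    refetch(selected.corrected_state)

    # Block 410: verify the selection against the oldest mispredicting branch.
    mispredicting = [b for b in (branch_a, branch_b) if b.mispredicted]
    oldest_mispredict = min(mispredicting, key=lambda b: b.order)
    if selected is oldest_mispredict:
        flush_lower_younger_than(selected.order)  # block 418
        return                                    # block 450

    # Blocks 412-416: the selection was wrong; flush again, redirect to
    # the other branch's corrected state, and flush the younger
    # instructions from the lower pipelines.
    flush_upper()
    refetch(oldest_mispredict.corrected_state)
    flush_lower_younger_than(oldest_mispredict.order)
```

The sketch makes the key trade-off visible: the first `refetch` is issued before verification, so a wrong guess costs one extra flush-and-refetch but the common correct-guess case starts refetching immediately.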
An interconnect structure with a plurality of low dielectric constant insulating layers acting as etch stops is disclosed. The low dielectric constant materials act as insulating layers through which trenches and vias are subsequently formed by employing a timed etch. Since the low dielectric constant materials are selected so that the etchant available for each one has only a small etch rate relative to the other low dielectric constant materials, the plurality of low dielectric constant materials act as etch stops during the fabrication of interconnect structures. In this way, the etch stop layers employed in the prior art are eliminated and the number of fabrication steps is reduced.
What is claimed as new and desired to be protected by Letters Patent of the United States is:

1. A method of forming an interconnect structure, said method comprising the steps of: forming a second opening through a second insulating layer which is provided over and in contact with a first insulating layer, said first and second insulating layers comprising a low dielectric constant material; etching a first opening through said first insulating layer, said first opening being in communication with said second opening; and providing a first conductive material in said first and second openings.
2. The method of claim 1, wherein said first insulating layer is formed of organic material.
3. The method of claim 2, wherein said organic material is selected from the group consisting of polyimide, spin-on-polymers, flare, polyarylethers, parylene, polytetrafluoroethylene, benzocyclobutene and SILK.
4. The method of claim 2, wherein said first insulating layer is formed of SILK.
5. The method of claim 1, wherein said first insulating layer is formed of a low dielectric constant inorganic material.
6. The method of claim 5, wherein said inorganic material is selected from the group consisting of fluorinated silicon oxide, hydrogen silsesquioxane and NANOGLASS.
7. The method of claim 5, wherein said first insulating layer is formed of NANOGLASS.
8. The method of claim 1, wherein said first insulating layer is formed by deposition to a thickness of about 4,000 to 30,000 Angstroms.
9. The method of claim 8, wherein said first insulating layer is formed by deposition to a thickness of about 12,000 to 20,000 Angstroms.
10. The method of claim 1, wherein said second insulating layer is formed of a low dielectric constant organic material.
11. The method of claim 10, wherein said low dielectric constant organic material is selected from the group consisting of polyimide, spin-on-polymers, flare, polyarylethers, parylene, polytetrafluoroethylene, benzocyclobutene and SILK.
12.
The method of claim 10, wherein said second insulating layer is formed of SILK.
13. The method of claim 1, wherein said second insulating layer comprises a low dielectric constant inorganic material.
14. The method of claim 13, wherein said inorganic material is selected from the group consisting of fluorinated silicon oxide, hydrogen silsesquioxane and NANOGLASS.
15. The method of claim 13, wherein said second insulating layer is formed of NANOGLASS.
16. The method of claim 1, wherein said second insulating layer is formed by deposition to a thickness of about 100 to 2,000 Angstroms.
17. The method of claim 16, wherein said second insulating layer is formed by deposition to a thickness of about 500 Angstroms.
18. The method of claim 1, wherein said first and second insulating layers are formed of different materials which can be selectively etched relative to each other.
19. The method of claim 18, wherein said step of forming said first opening is achieved by timed etching of said first insulating layer with a first etch chemistry.
20. The method of claim 19, wherein said step of forming said second opening is achieved by etching said second insulating layer with a second etch chemistry.
21. The method of claim 1, wherein said first conductive material is blanket deposited.
22. The method of claim 1, wherein said first conductive material is formed of a material selected from the group consisting of copper, copper alloy, gold, gold alloy, silver, silver alloy, tungsten, tungsten alloy, aluminum, and aluminum alloy.
23. The method of claim 1 further comprising the step of chemical mechanical polishing said first conductive material.
24. The method of claim 1 further comprising the step of forming a barrier layer before said step of providing said first conductive material.
25.
The method of claim 1 further comprising the steps of: forming a fourth opening through a fourth insulating layer which is provided over a third insulating layer, said third insulating layer being formed over said first conductive material, said third and fourth insulating layers comprising a low dielectric constant material; etching a third opening through said third insulating layer, said third opening being in communication with said fourth opening; and providing a second conductive material in said third and fourth openings.
26. The method of claim 25, wherein said third insulating layer is formed of organic material.
27. The method of claim 26, wherein said organic material is selected from the group consisting of polyimide, spin-on-polymers, flare, polyarylethers, parylene, polytetrafluoroethylene, benzocyclobutene and SILK.
28. The method of claim 26, wherein said third insulating layer is formed of SILK.
29. The method of claim 25, wherein said third insulating layer is formed of a low dielectric constant inorganic material.
30. The method of claim 29, wherein said inorganic material is selected from the group consisting of fluorinated silicon oxide, hydrogen silsesquioxane and NANOGLASS.
31. The method of claim 29, wherein said third insulating layer is formed of NANOGLASS.
32. The method of claim 25, wherein said third insulating layer is formed by deposition to a thickness of about 4,000 to 30,000 Angstroms.
33. The method of claim 32, wherein said third insulating layer is formed by deposition to a thickness of about 12,000 to 20,000 Angstroms.
34. The method of claim 25, wherein said fourth insulating layer is formed of a low dielectric constant organic material.
35. The method of claim 34, wherein said low dielectric constant organic material is selected from the group consisting of polyimide, spin-on-polymers, flare, polyarylethers, parylene, polytetrafluoroethylene, benzocyclobutene and SILK.
36.
The method of claim 35, wherein said fourth insulating layer is formed of SILK.
37. The method of claim 25, wherein said fourth insulating layer is formed of a low dielectric constant inorganic material.
38. The method of claim 37, wherein said inorganic material is selected from the group consisting of fluorinated silicon oxide, hydrogen silsesquioxane and NANOGLASS.
39. The method of claim 37, wherein said fourth insulating layer is formed of NANOGLASS.
40. The method of claim 25, wherein said fourth insulating layer is formed by deposition to a thickness of about 100 to 2,000 Angstroms.
41. The method of claim 40, wherein said fourth insulating layer is formed by deposition to a thickness of about 500 Angstroms.
42. The method of claim 25, wherein said third and fourth insulating layers are formed of different materials which can be selectively etched relative to each other.
43. The method of claim 42, wherein said step of forming said third opening is achieved by timed etching of said third insulating layer with said second etch chemistry.
44. The method of claim 43, wherein said step of forming said fourth opening is achieved by etching said fourth insulating layer with said first etch chemistry.
45. The method of claim 25, wherein said second conductive material is blanket deposited.
46. The method of claim 25, wherein said second conductive material is formed of a material selected from the group consisting of copper, copper alloy, gold, gold alloy, silver, silver alloy, tungsten, tungsten alloy, aluminum, and aluminum alloy.
47. The method of claim 25 further comprising the step of chemical mechanical polishing said second conductive material.
48. The method of claim 25 further comprising the step of forming a barrier layer before said step of providing said second conductive material.
FIELD OF THE INVENTION

The present invention relates to semiconductor devices and methods of making such devices. More particularly, the invention relates to a method of providing an etch stop in damascene interconnect structures.

BACKGROUND OF THE INVENTION

The integration of a large number of components on a single integrated circuit (IC) chip requires complex interconnects. Ideally, the interconnect structures should be fabricated with minimal signal delay and optimal packing density. The reliability and performance of integrated circuits may be affected by the qualities of their interconnect structures.

Advanced multiple metallization layers have been used to accommodate higher packing densities as devices shrink to sub-0.25 micron design rules. One such metallization scheme is a dual damascene structure formed by a dual damascene process. The dual damascene process is a two-step sequential mask/etch process that forms a two-level structure, such as a via connected to a metal line situated above the via.

As illustrated in FIG. 1, a known dual damascene process begins with the deposition of a first insulating layer 14 over a first level interconnect metal layer 12, which in turn is formed over or within a semiconductor substrate 10. A second insulating layer 16 is next formed over the first insulating layer 14. An etch stop layer 15 is typically formed between the first and second insulating layers 14, 16. The second insulating layer 16 is patterned by photolithography with a first mask (not shown) to form a trench 17 corresponding to a metal line of a second level interconnect. The etch stop layer 15 prevents the upper level trench pattern 17 from being etched through to the first insulating layer 14.

As illustrated in FIG. 2, a second masking step followed by an etch step are applied to form a via 18 through the etch stop layer 15 and the first insulating layer 14.
After the etching is completed, both the trench 17 and the via 18 are filled with metal 20, which is typically copper (Cu), to form a damascene structure 25, as illustrated in FIG. 3. If desired, a second etch stop layer, such as stop layer 29 of FIG. 4, may be formed between the substrate 10 and the first insulating layer 14 during the formation of a dual damascene structure 26.

Damascene processes such as the ones described above pose significant problems. One of the problems is caused by the use of one or more etch stop layers. The etch stop layers 15, 29 prevent the damascene patterns 17, 18 from extending into or through the underlying layers 14, 10. Although the advantages of using the etch stop layers are significant, the process is complex since separate depositions are required for the etch stop layers.

In addition, the most commonly used etch stop material, silicon nitride (Si3N4), has a rather high dielectric constant (k) of approximately 7, which no longer satisfies the resistance-capacitance delay requirements imposed by the parasitic capacitance of an intermetal insulating layer. As integrated circuits become denser, it is increasingly important to minimize stray capacitance between the metal layers. This is accomplished by using intermetal insulating layers that have a low dielectric constant, such as, for example, organic dielectric materials. Silicon nitride does not satisfy the requirement of small stray capacitance of advanced damascene structures.

Accordingly, there is a need for an improved damascene process which reduces production costs and increases productivity. There is also a need for a damascene process that does not require etch stop layers, as well as a method for decreasing the stray capacitance between the metal layers of damascene structures.

SUMMARY OF THE INVENTION

The present invention provides a method for fabricating a damascene multilevel interconnect structure in a semiconductor device.
According to one aspect of the invention, the use of a high dielectric constant etch stop material may be avoided, so as to reduce or minimize stray capacitance.

In an exemplary embodiment, a plurality of low dielectric constant materials are selected with similar methods of formation, as well as with similar capacities to withstand physical and thermal stress. The low dielectric constant materials act as insulating layers through which trenches and vias are subsequently formed by employing a timed etch. Since the low dielectric constant materials are selected so that the etchant available for each one has only a small etch rate relative to the other low dielectric constant materials, the plurality of low dielectric constant materials act as etch stops during the fabrication of damascene structures. In this way, the etch stop layers employed in the prior art are eliminated and the number of fabrication steps is reduced.

Additional advantages of the present invention will be more apparent from the detailed description and accompanying drawings, which illustrate preferred embodiments of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a cross sectional view of a semiconductor device at a preliminary stage of production.
FIG. 2 is a cross sectional view of the semiconductor device of FIG. 1 at a subsequent stage of production.
FIG. 3 is a cross sectional view of the semiconductor device of FIG. 2 at a subsequent stage of production.
FIG. 4 is a cross sectional view of another semiconductor device.
FIG. 5 is a cross sectional view of a semiconductor device at a preliminary stage of production and in accordance with a method of the present invention.
FIG. 6 is a cross sectional view of the semiconductor device of FIG. 5 at a subsequent stage of production.
FIG. 7 is a cross sectional view of the semiconductor device of FIG. 6 at a subsequent stage of production.
FIG. 8 is a cross sectional view of the semiconductor device of FIG. 7 at a subsequent stage of production.
FIG. 9 is a cross sectional view of the semiconductor device of FIG. 8 at a subsequent stage of production.
FIG. 10 is a cross sectional view of the semiconductor device of FIG. 9 at a subsequent stage of production.
FIG. 11 is a cross sectional view of the semiconductor device of FIG. 10 at a subsequent stage of production.
FIG. 12 is a cross sectional view of the semiconductor device of FIG. 11 at a subsequent stage of production.
FIG. 13 is a cross sectional view of the semiconductor device of FIG. 12 at a subsequent stage of production.
FIG. 14 is a cross sectional view of the semiconductor device of FIG. 13 at a subsequent stage of production.
FIG. 15 is a cross sectional view of the semiconductor device of FIG. 14 at a subsequent stage of production.
FIG. 16 is a cross sectional view of the semiconductor device of FIG. 15 at a subsequent stage of production.
FIG. 17 is a cross sectional view of the semiconductor device of FIG. 16 at a subsequent stage of production.
FIG. 18 is a cross sectional view of the semiconductor device of FIG. 17 at a subsequent stage of production.
FIG. 19 is a cross sectional view of the semiconductor device of FIG. 18 at a subsequent stage of production.
FIG. 20 is a cross sectional view of the semiconductor device of FIG. 19 at a subsequent stage of production.
FIG. 21 is a cross sectional view of the semiconductor device of FIG. 20 at a subsequent stage of production.
FIG. 22 is a cross sectional view of the semiconductor device of FIG. 21 at a subsequent stage of production.
FIG. 23 is a cross sectional view of the semiconductor device of FIG. 22 at a subsequent stage of production.
FIG. 24 illustrates a computer system having a memory cell with a dual damascene structure according to the present invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

In the following detailed description, reference is made to various specific embodiments in which the invention may be practiced.
These embodiments are described with sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be employed, and that structural and electrical changes may be made without departing from the spirit or scope of the present invention.

The term "substrate" used in the following description may include any semiconductor-based structure that has a semiconductor surface. The term should be understood to include silicon, silicon-on-insulator (SOI), silicon-on-sapphire (SOS), doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductor structures. The semiconductor need not be silicon-based. The semiconductor could be silicon-germanium, germanium, or gallium arsenide. When reference is made to a "substrate" in the following description, previous process steps may have been utilized to form regions or junctions in or on the base semiconductor or foundation.

The term "metal" is intended to include not only elemental metal, but also metal with other trace metals or in various alloyed combinations with other metals as known in the art, as long as such an alloy retains the physical and chemical properties of the metal.

The present invention provides a method for fabricating a damascene interconnect structure in which a plurality of low dielectric constant materials are selected with similar methods of formation, as well as with similar capacities to withstand physical and thermal stress, and through which metallization trenches and vias are formed by employing a timed etch.

Referring now to the drawings, where like elements are designated by like reference numerals, FIG. 5 depicts a portion of a semiconductor substrate 50 on or within which a conducting layer 52 has been formed. The conducting layer 52 represents a lower metal interconnect layer or device level which is to be later interconnected with an upper metal interconnect layer.
The conducting layer 52 may be formed of copper (Cu), but other conductive materials, such as tungsten (W), silver (Ag), gold (Au) or aluminum (Al) and their alloys, may also be used.

Referring now to FIG. 6, a first intermetal insulating layer 55 is formed overlying the substrate 50 and the conducting layer 52. In a preferred embodiment of the present invention, the first intermetal insulating layer 55 is blanket deposited by spin coating to a thickness of about 4,000 Angstroms to 30,000 Angstroms, more preferably about 12,000 to 20,000 Angstroms. The first intermetal insulating layer 55 may be cured at a predefined temperature, depending on the nature of the material. Other known deposition methods, such as sputtering, chemical vapor deposition (CVD), plasma enhanced CVD (PECVD), or physical vapor deposition (PVD), may also be used for the formation of the first intermetal insulating layer 55, as desired. The first intermetal insulating layer 55 is desirably selected so that the etchant for this layer does not attack the underlying substrate material to any great extent.

The first intermetal insulating layer 55 may be formed of a low dielectric constant organic material such as, for example, polyimide, spin-on-polymers (SOP), flare, polyarylethers, parylene, polytetrafluoroethylene, benzocyclobutene (BCB) or SILK. Alternatively, the first intermetal insulating layer 55 may be formed of an inorganic material with a low dielectric constant such as, for example, fluorinated silicon oxide (FSG), hydrogen silsesquioxane (HSQ) or NANOGLASS. The present invention is not limited, however, to the above-listed materials, and other organic and inorganic materials with a low dielectric constant may be used, especially ones whose dielectric constant (k) is lower than that of silicon oxide (SiO2), which is approximately 4.0.

Next, as illustrated in FIG.
7, a thin second intermetal insulating layer 57 is formed overlying the first intermetal insulating layer 55 and below a metal layer that will be formed subsequently. The thin second intermetal insulating layer 57 may be formed, for example, by spin coating to a thickness of about 100 Angstroms to about 2,000 Angstroms, more preferably about 500 Angstroms. Following deposition, the second intermetal insulating layer 57 is cured at a predefined temperature, depending, again, on the nature and specific characteristics of the insulating material. Other deposition methods, such as the ones mentioned above with reference to the formation of the first intermetal insulating layer 55, may also be used.

The material of choice for the second intermetal insulating layer 57 is also a low dielectric constant organic or inorganic material, with a dielectric constant lower than 4.0, such as the ones listed above with reference to the first intermetal insulating layer 55. However, as discussed in more detail below, the two insulating layers 55, 57 are preferably compatible with each other in the sense that each of them should be capable of withstanding the stress levels that will later be induced as a result of various processes and during the use of the IC device. Further, each material should be capable of withstanding the maximum temperature required in the processing of the other one.

In a preferred embodiment of the present invention, two compatible materials for the two intermetal insulating layers 55, 57 are SILK (an organic material with a k of approximately 2.65 at 100 kHz) and NANOGLASS (an inorganic material with a k of approximately 3.5 at 100 kHz). Both SILK and NANOGLASS can be applied by spin coating, and both are capable of withstanding similar stress levels, as well as the processing temperature of each other.
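As a back-of-the-envelope illustration (not part of the specification): in a parallel-plate approximation, C = k·ε0·A/d, so for identical geometry the stray capacitance of two insulators compares simply as the ratio of their dielectric constants. The k values below are those quoted in the text; the comparison against the prior-art Si3N4 etch stop shows why the low-k materials reduce stray capacitance.

```python
# Sketch comparing relative stray capacitance for identical geometry,
# using C = k * eps0 * A / d, which scales linearly with k.
K_VALUES = {
    "SiO2": 4.0,       # conventional oxide reference
    "Si3N4": 7.0,      # common prior-art etch stop material
    "NANOGLASS": 3.5,  # at 100 kHz
    "SILK": 2.65,      # at 100 kHz
}

def capacitance_vs(material, reference="SiO2"):
    """Stray capacitance relative to a reference insulator of the same geometry."""
    return K_VALUES[material] / K_VALUES[reference]

for m in ("Si3N4", "NANOGLASS", "SILK"):
    print(f"{m}: {capacitance_vs(m):.2f}x the capacitance of SiO2")
```

On these numbers, replacing a Si3N4 etch stop (1.75x the capacitance of SiO2) with SILK (about 0.66x) cuts the contribution of that layer to stray capacitance by more than half.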
Further, both SILK and NANOGLASS may be individually etched by a respective etchant which, while readily etching one insulating material, will have only a very small, negligible etch rate for the other insulating material.

Another example of two compatible low dielectric constant materials is a foamed polyimide (as the organic component, with a k in the range of 2.0 to 3.0, depending upon the degree of porosity) and hydrogen silsesquioxane (HSQ) (as the inorganic component, with a k in the range of 2.3 to 3.0). However, other combinations may also be employed. Further, two low dielectric constant organic materials, as well as two low dielectric constant inorganic materials, may also be used, as long as both materials retain compatible physical and chemical properties. Thus, the present invention is not limited to the use of the above-mentioned combinations, and other compatible low dielectric constant materials may also be used, especially those whose dielectric constants are lower than 4.0.

Referring now to FIG. 8, a first photoresist layer 58 is formed over the second intermetal insulating layer 57. The first photoresist layer 58 is then patterned with a mask (not shown) having images of trench patterns 59 (FIG. 8). Thus, trenches 65 may be formed, as shown in FIG. 9, by etching through the photoresist layer 58 and into the second intermetal insulating layer 57 using a second etchant. The second etchant may be selected in accordance with the characteristics of the second insulating material 57. The second etchant (not shown) selectively etches the second insulating material 57 until it reaches the first insulating material 55.

In the preferred embodiment of the present invention, which employs the SILK/NANOGLASS combination, the second etchant (for etching through the second intermetal insulating NANOGLASS layer 57) may contain a chlorine (Cl) plasma.
The first etchant (which will selectively etch the first intermetal insulating SILK layer 55) may employ an oxygen (O2) plasma.

After the formation of trenches 65 through the second intermetal insulating layer 57 and the removal of the first photoresist layer 58, vias 56 (FIG. 13) may be formed by photolithography. As such, a second photoresist layer 67 (FIG. 10) is formed over the first and second intermetal insulating layers 55, 57, and then patterned with a mask (not shown) having images of via patterns 63 (FIG. 10). The via patterns 63 are then etched, by employing a timed etch, into the first intermetal insulating layer 55 to form vias 56 (FIG. 11).

The etching of vias 56 is accomplished by employing a timed etch with the first etchant (which may include an O2 plasma) to etch part of the first intermetal insulating layer 55, for example about half of the first insulating layer 55, to obtain vias 56a, as shown in FIG. 11. Subsequent to the formation of vias 56a, the second photoresist layer 67 is removed, and the first etchant is further used to etch completely through the first intermetal insulating layer 55, completing the formation of vias 56 and defining the trenches in layer 55, with the pattern previously etched in layer 57 serving as a mask (FIG. 13).

Next, a barrier layer 72 (FIG. 14), if needed, is formed on the vias 56 and the trenches 65, as well as over the second intermetal insulating layer 57, by CVD, PVD, sputtering or evaporation, to a thickness of about 50 Angstroms to about 200 Angstroms, more preferably about 100 Angstroms. Preferred materials for the barrier layer 72 are metals such as titanium (Ti), zirconium (Zr), tungsten (W), or hafnium (Hf), or metal compounds such as tantalum nitride (TaN), which may be applied by blanket deposition. If desired, the barrier layer 72 may be formed of refractory metal silicides such as TiSi or ZrSi.
In any event, the barrier layer 72 suppresses the diffusion of metal atoms from the subsequently deposited conductive material (FIG. 14), while offering a low resistivity and a low contact resistance between the metal of the metal layer 52 and the barrier layer 72, and between the subsequently deposited conductive material (FIG. 14) and the barrier layer 72. As known in the art, the material for the barrier layer 72 is selected according to the type of metallurgy and/or insulators employed.

As also illustrated in FIG. 14, a conductive material 80 is next deposited to fill in both the vias 56 and the trenches 65. In the preferred embodiment, the conductive material 80 comprises copper, tungsten, aluminum, gold, silver or aluminum-copper and their alloys, but it must be understood that other materials may also be used. In any event, the conductive material 80 may be blanket deposited by a known PVD or CVD technique, or a combination of these techniques, to fill in both the vias 56 and the trenches 65. Alternatively, the conductive material 80 may be deposited by a plating technique.

If necessary, a second barrier layer may be deposited on top of the conductive material 80. For example, in the case of aluminum or aluminum-copper alloy structures, a layer of titanium (Ti) or zirconium (Zr) is often used both above and below the aluminum alloy layer to improve the electromigration resistance of the lines. In any event, after the deposition of the conductive material 80, excess metal formed above the surface of the second insulating material 57 may be removed by either an etching or a polishing technique to form the first metallization structures 81 illustrated in FIG. 15. In a preferred embodiment of the present invention, chemical mechanical polishing (CMP) is used to polish away the excess conductive material above the second insulating material 57 and the trench level.
In this way, the second insulating material 57 acts as a polishing stop layer when CMP is used.

Subsequent to the formation of the first metallization structures 81 (FIG. 15), a second timed etch is employed to complete the process of forming a second damascene interconnect structure 100 (FIG. 23). As such, a second pair of intermetal insulating layers of low dielectric constant materials is formed over the first and second intermetal insulating layers 55, 57. In an exemplary embodiment of the invention, the second pair of intermetal insulating layers includes the same low dielectric constant materials as those forming the first and second intermetal insulating layers 55, 57. For example, in the SILK/NANOGLASS combination described above, the second pair of intermetal insulating layers will comprise first a layer of NANOGLASS and then a layer of SILK. This embodiment is exemplified in more detail below.

Accordingly, as illustrated in FIG. 16, a third intermetal insulating layer 57a is formed overlying the first metallization structures 81 and portions of the second intermetal insulating layer 57. In a preferred embodiment of the present invention, the third intermetal insulating layer 57a is formed of a low dielectric constant material similar to that of the second intermetal insulating layer 57. Thus, in the exemplary embodiment of the invention which employs the SILK/NANOGLASS combination described above, the third intermetal insulating layer 57a may be formed of NANOGLASS and may be blanket deposited by spin coating to a thickness of about 4,000 Angstroms to 30,000 Angstroms, more preferably about 12,000 to 20,000 Angstroms. The third intermetal insulating layer 57a may also be cured at a predefined temperature, depending on the nature of the material.

Next, as illustrated in FIG. 17, a thin fourth intermetal insulating layer 55a is formed overlying the third intermetal insulating layer 57a.
The thin fourth intermetal insulating layer 55a may be formed, for example, by spin coating to a thickness of about 100 Angstroms to about 2,000 Angstroms, more preferably of about 500 Angstroms. Following deposition, the fourth intermetal insulating layer 55a is cured at a predefined temperature, depending, again, on the nature and specific characteristics of the insulating material. Other deposition methods, such as the ones mentioned above with reference to the formation of the intermetal insulating layers 55, 57, 57a, may also be used.

The material of choice for the fourth intermetal insulating layer 55a is also a low dielectric constant organic or inorganic material, with a dielectric constant lower than 4.0, such as the ones listed above with reference to the first and second intermetal insulating layers 55, 57. For example, in the exemplary embodiment of the invention which employs the SILK/NANOGLASS combination described above, the fourth intermetal insulating layer 55a may be formed of SILK, which is the material of choice for the first insulating layer 55.

Subsequent to the formation of the third and fourth intermetal insulating layers 57a, 55a, the processing steps for the formation of a second metallization structure 83 (FIG. 23) proceed according to those described above with reference to the formation of the first metallization structure 81 (FIGS. 8-15). As such, a third photoresist layer 68 (FIG. 17) is formed over the fourth intermetal insulating layer 55a, and then patterned with a mask (not shown) having images of a trench pattern 69 (FIG. 17). Thus, a pattern of the trench 85 may be formed, as shown in FIG. 18, by etching through the photoresist layer 68 and into the fourth intermetal insulating layer 55a. The etching may be accomplished by employing the first etchant previously used for the etching of vias 56 (FIG. 13) through the first intermetal insulating layer 55.
For example, in the preferred embodiment of the present invention which employs the SILK/NANOGLASS combination, the first etchant for selectively etching the first and fourth intermetal insulating SILK layers 55, 55a may employ oxygen (O2) plasma.

After the formation of a pattern of the trench 85 through the fourth intermetal insulating layer 55a and the removal of the third photoresist layer 68, vias 76 (FIG. 21) may be formed by photolithography, in ways similar to those for the formation of vias 56 (FIGS. 10-13). Accordingly, a fourth photoresist layer 77 (FIG. 19) is formed over the third and fourth intermetal insulating layers 57a, 55a and then patterned with a mask (not shown) having images of via patterns 73 (FIG. 19). The via patterns 73 are then etched, by employing a second timed etch into the third intermetal insulating layer 57a, to form vias 76a of FIG. 20.

The etching is accomplished by employing a timed etch with the second etchant (which may include a chlorine plasma) to etch part of the third intermetal insulating layer 57a, for example about half of the third insulating layer 57a, to obtain vias 76a, as shown in FIG. 20. Subsequent to the formation of vias 76a, the fourth photoresist layer 77 is removed, and the second etchant is further used to completely etch through the third intermetal insulating layer 57a, completing the formation of vias 76 (FIG. 21) and defining the trenches in layer 57a. A barrier layer 74 is next formed on the vias 76 and the third intermetal insulating layer 57a, as shown in FIG. 22. The barrier layer 74 may be formed by CVD, PVD, sputtering or evaporation, to a thickness of about 50 Angstroms to about 200 Angstroms, more preferably of about 100 Angstroms.
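The via etch just described is time-controlled rather than endpoint-controlled: roughly half of layer 57a is etched with the resist in place, and the remainder after the resist is stripped. The timing arithmetic can be sketched as follows; the etch rate used here is purely a hypothetical illustration, not a value from the text.

```python
def timed_etch_seconds(target_depth_angstroms, etch_rate_angstroms_per_s):
    """Etch time needed to reach a target depth at a constant etch rate."""
    if etch_rate_angstroms_per_s <= 0:
        raise ValueError("etch rate must be positive")
    return target_depth_angstroms / etch_rate_angstroms_per_s

# Layer 57a at 16,000 Angstroms (mid-range of the 12,000-20,000 Angstrom
# preference), etched halfway in the first step, assuming a hypothetical
# 80 Angstrom/s chlorine-plasma etch rate.
layer = 16000.0
rate = 80.0
step1 = timed_etch_seconds(layer / 2, rate)  # partial vias 76a
step2 = timed_etch_seconds(layer / 2, rate)  # completes vias 76
```

In practice the second step's duration would also include some over-etch margin, since it must clear the remaining thickness everywhere on the wafer.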
Preferred materials for the barrier layer 74 are metals such as titanium (Ti), zirconium (Zr), tungsten (W), or hafnium (Hf), metal compounds such as tantalum nitride (TaN), and refractory metal silicides such as titanium silicide (TiSi) or zirconium silicide (ZrSi), among others.

Referring now to FIG. 22, a conductive material 82 is deposited to fill in both vias 76 and trench 85. The conductive material 82 may be formed of copper, aluminum, or tungsten, among others, and may be deposited or plated, depending on the desired method of formation. In any event, excess metal formed above the surface of the fourth intermetal insulating layer 55a is removed by either an etching or a polishing technique to form a second metallization structure 83 (FIG. 23) and to complete the formation of a damascene interconnect structure 100 illustrated in FIG. 23. In a preferred embodiment of the present invention, chemical mechanical polishing (CMP) is used to polish away excess conductive material above the fourth insulating material 55a and the trench level. This way, the fourth insulating material 55a acts as a polishing stop layer when CMP is used.

Although only two damascene interconnect structures 100 are shown in FIG. 23, it must be readily apparent to those skilled in the art that in fact any number of such damascene interconnect structures may be formed on the substrate 50.
Further, although the exemplary embodiment described above refers to only two pairs of low dielectric constant insulating layers, it must be understood that any number of such pairs may be employed, depending on the desired level of metallization.

Also, although the exemplary embodiment described above refers to the formation of a damascene interconnect structure 100, the invention is further applicable to other types of metallization structures, for example, single, double or triple damascene structures, or subtractive metallization structures, depending on the number of low dielectric constant insulating layers formed over the substrate 50. Further, the invention is not limited to the use of SILK and NANOGLASS, but may be used with other compatible organic and/or inorganic materials with dielectric constants lower than 4.0.

In addition, further steps to create a functional memory cell may be carried out. Thus, additional multilevel interconnect layers and associated dielectric layers could be formed to create operative electrical paths from the damascene interconnect structure 100 to a source/drain region (not shown) of the substrate 50.

A typical processor-based system 400 which includes a memory circuit 448, for example a DRAM, containing interconnect structures according to the present invention is illustrated in FIG. 24. A processor system, such as a computer system, generally comprises a central processing unit (CPU) 444, such as a microprocessor, a digital signal processor, or other programmable digital logic devices, which communicates with an input/output (I/O) device 446 over a bus 452. The memory 448 communicates with the system over bus 452.

In the case of a computer system, the processor system may include peripheral devices such as a floppy disk drive 454 and a compact disk (CD) ROM drive 456 which also communicate with CPU 444 over the bus 452.
Memory 448 is preferably constructed as an integrated circuit, which includes one or more damascene interconnect structures 100. If desired, the memory 448 may be combined with the processor, e.g. CPU 444, in a single integrated circuit.

The above description and drawings are only to be considered illustrative of exemplary embodiments which achieve the features and advantages of the present invention. Modifications and substitutions to specific process conditions and structures can be made without departing from the spirit and scope of the present invention. Accordingly, the invention is not to be considered as being limited by the foregoing description and drawings, but is only limited by the scope of the appended claims.
Gate-all-around integrated circuit structures having nanoribbon sub-fin isolation by a backside Si substrate removal etch selective to source and drain epitaxy are described. For example, an integrated circuit structure includes a plurality of horizontal nanowires above a sub-fin. A gate stack is over the plurality of nanowires and the sub-fin. Epitaxial source or drain structures are on opposite ends of the plurality of horizontal nanowires, and a doped nucleation layer is at a base of the epitaxial source or drain structures adjacent to the sub-fin. Where the integrated circuit structure comprises an NMOS transistor, the doped nucleation layer comprises a carbon-doped nucleation layer. Where the integrated circuit structure comprises a PMOS transistor, the doped nucleation layer comprises a heavy boron-doped nucleation layer.
An integrated circuit structure, comprising:

a plurality of horizontal nanowires above a sub-fin;

a gate stack over the plurality of horizontal nanowires and the sub-fin;

epitaxial source or drain structures on opposite ends of the plurality of horizontal nanowires; and

a doped nucleation layer at a base of the epitaxial source or drain structures adjacent to the sub-fin.

The integrated circuit structure of claim 1, wherein the integrated circuit structure comprises an NMOS transistor and wherein the doped nucleation layer comprises a carbon-doped nucleation layer.

The integrated circuit structure of claim 2, wherein the carbon-doped nucleation layer comprises carbon-doped silicon and phosphorous.

The integrated circuit structure of claim 1, wherein the integrated circuit structure comprises a PMOS transistor and wherein the doped nucleation layer comprises a boron-doped nucleation layer.

The integrated circuit structure of claim 4, wherein the boron-doped nucleation layer comprises heavy boron-doped silicon and germanium.

The integrated circuit structure of claim 1, 2, 3, 4 or 5, wherein internal gate spacers are on either side of the gate stack between the gate stack and the epitaxial source or drain structures.

The integrated circuit structure of claim 3, wherein the epitaxial source or drain structures are non-discrete epitaxial source or drain structures.

The integrated circuit structure of claim 3, wherein the epitaxial source or drain structures are discrete epitaxial source or drain structures.

A method of fabricating an integrated circuit structure, the method comprising:

forming a plurality of horizontal nanowires above a sub-fin;

forming a gate stack over the plurality of horizontal nanowires and the sub-fin;

forming epitaxial source or drain structures on opposite ends of the plurality of horizontal nanowires; and

forming a doped nucleation layer at a base of the epitaxial source or drain structures adjacent to the sub-fin.

The method of claim 9, wherein the
integrated circuit structure comprises an NMOS transistor and wherein the doped nucleation layer comprises a carbon-doped nucleation layer.

The method of claim 10, wherein the carbon-doped nucleation layer comprises carbon-doped silicon and phosphorous.

The method of claim 9, wherein the integrated circuit structure comprises a PMOS transistor and wherein the doped nucleation layer comprises a boron-doped nucleation layer.

The method of claim 12, wherein the boron-doped nucleation layer comprises heavy boron-doped silicon and germanium.

The method of claim 9, 10, 11, 12 or 13, wherein internal gate spacers are on either side of the gate stack between the gate stack and the epitaxial source or drain structures.

The method of claim 11, wherein the epitaxial source or drain structures are non-discrete epitaxial source or drain structures.
TECHNICAL FIELD

Embodiments of the disclosure are in the field of integrated circuit structures and processing and, in particular, nanoribbon sub-fin isolation by a backside Si substrate removal etch selective to source and drain epitaxy.

BACKGROUND

For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory or logic devices on a chip, lending to the fabrication of products with increased capacity. The drive for ever-more capacity, however, is not without issue. The necessity to optimize the performance of each device becomes increasingly significant.

In the manufacture of integrated circuit devices, multi-gate transistors, such as tri-gate transistors, have become more prevalent as device dimensions continue to scale down. In conventional processes, tri-gate transistors are generally fabricated on either bulk silicon substrates or silicon-on-insulator substrates. In some instances, bulk silicon substrates are preferred due to their lower cost and because they enable a less complicated tri-gate fabrication process. In another aspect, maintaining mobility improvement and short channel control as microelectronic device dimensions scale below the 10 nanometer (nm) node provides a challenge in device fabrication. Nanowires used to fabricate devices provide improved short channel control.

Scaling multi-gate and nanowire transistors has not been without consequence, however.
As the dimensions of these fundamental building blocks of microelectronic circuitry are reduced and as the sheer number of fundamental building blocks fabricated in a given region is increased, the constraints on the lithographic processes used to pattern these building blocks have become overwhelming. In particular, there may be a trade-off between the smallest dimension of a feature patterned in a semiconductor stack (the critical dimension) and the spacing between such features.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates a top-down angled cross-sectional view representing a starting structure in a method of fabricating a gate-all-around integrated circuit structure having a depopulated channel structure, in accordance with an embodiment of the present disclosure.

Figures 2A-2C illustrate gate cut cross-sectional views of an NMOS region including a gate-all-around integrated circuit structure having a doped nucleation layer during various backside removal processing steps.

Figures 3A-3C illustrate gate cut cross-sectional views of a PMOS region including a gate-all-around integrated circuit structure having a doped nucleation layer during various backside removal processing steps.

Figures 4A-4J illustrate cross-sectional views of various operations in a method of fabricating a gate-all-around integrated circuit structure, in accordance with an embodiment of the present disclosure.

Figure 5 illustrates an IC device assembly including components having one or more integrated circuit structures described herein.

Figure 6 illustrates a computing device in accordance with one implementation of the disclosure.

DESCRIPTION OF THE EMBODIMENTS

Gate-all-around integrated circuit structures having depopulated channel structures, and methods of fabricating gate-all-around integrated circuit structures having depopulated channel structures, are described.
In the following description, numerous specific details are set forth, such as specific integration and material regimes, in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to not unnecessarily obscure embodiments of the present disclosure. Furthermore, it is to be appreciated that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.

Certain terminology may also be used in the following description for the purpose of reference only, and is thus not intended to be limiting. For example, terms such as "upper", "lower", "above", and "below" refer to directions in the drawings to which reference is made. Terms such as "front", "back", "rear", and "side" describe the orientation and/or location of portions of the component within a consistent but arbitrary frame of reference which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.

Embodiments described herein may be directed to front-end-of-line (FEOL) semiconductor processing and structures. FEOL is the first portion of integrated circuit (IC) fabrication where the individual devices (e.g., transistors, capacitors, resistors, etc.) are patterned in the semiconductor substrate or layer. FEOL generally covers everything up to (but not including) the deposition of metal interconnect layers.
Following the last FEOL operation, the result is typically a wafer with isolated transistors (e.g., without any wires).

Embodiments described herein may be directed to back-end-of-line (BEOL) semiconductor processing and structures. BEOL is the second portion of IC fabrication where the individual devices (e.g., transistors, capacitors, resistors, etc.) are interconnected with wiring on the wafer, e.g., the metallization layer or layers. BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections. In the BEOL part of the fabrication stage, contacts (pads), interconnect wires, vias and dielectric structures are formed. For modern IC processes, more than 10 metal layers may be added in the BEOL.

Embodiments described below may be applicable to FEOL processing and structures, BEOL processing and structures, or both FEOL and BEOL processing and structures. In particular, although an exemplary processing scheme may be illustrated using a FEOL processing scenario, such approaches may also be applicable to BEOL processing. Likewise, although an exemplary processing scheme may be illustrated using a BEOL processing scenario, such approaches may also be applicable to FEOL processing.

One or more embodiments described herein are directed to nanowire or nanoribbon sub-fin isolation by a backside substrate removal etch selective to source and drain epitaxy, using a doped nucleation layer at the base of the source and drain epitaxy. Unless stated explicitly otherwise, reference to a nanowire structure can include nanowire structures and/or nanoribbon structures. Advantages of implementing embodiments described herein can include reducing the risk of epi etch-out and providing flexibility in device performance metrics.

To provide context, Figure 1 is an example of a selective backside removal approach for channel depopulation.
Figure 1 illustrates a top-down angled cross-sectional view representing a starting structure in a method of fabricating a gate-all-around integrated circuit structure having a depopulated channel structure.

A starting structure 100 includes an integrated circuit structure supported face-down, e.g., on a carrier, following a backside reveal process performed to remove a substrate and to form revealed sub-fins. The starting structure 100 includes planarized sub-fins 104, such as silicon sub-fins that were formerly protruding from a silicon substrate. The sub-fins 104 protrude through shallow trench isolation (STI) structures 106, such as silicon oxide STI structures. A liner 103, such as a silicon nitride liner, may separate the sub-fins 104 from the STI structures 106, as is depicted. Each sub-fin 104 is over a corresponding one or more stacks of nanowires 108. Although in some embodiments the stacks of nanowires 108 may comprise Si (wire or ribbon) and SiGe (sacrificial) layers, other pairs of semiconductor materials which can be alloyed and grown epitaxially could be implemented, for example, InAs and InGaAs, or SiGe and Ge.

A gate electrode 112, such as a metal gate electrode, is around the nanowires 108. The gate electrode 112 is separated from the nanowires 108 and from the sub-fins 104 by a gate dielectric layer 110, such as a high-k gate dielectric layer. Conductive trench contact structures 116 can neighbor the gate structures 110/112 and can be coupled to overlying epitaxial source or drain structures 118, as is depicted. In one embodiment, neighboring conductive trench contact structures 116 and gate structures 110/112 are separated from one another by dielectric spacers 114, such as silicon nitride spacers.
In one embodiment, as is depicted, epitaxial source or drain structures 118 are separated and/or isolated from the sub-fins 104 by a dielectric spacer 117, such as a silicon nitride spacer.

Further processing can include removal of a carrier from the front side (bottom side), supporting the backside (top side) by another carrier, and performing further processing on the front side, such as interconnect metallization formation over the gate electrodes 112 and conductive trench contact structures 116. It is also to be appreciated that similar processes and structures may be applied to semiconductor fins instead of stacks of nanowires.

Typically, the conductive trench contact structures 116 are formed on the front side of a substrate, as shown, where trenches are first etched through one or more dielectric layers/levels under the epitaxial source or drain structures 118, and then filled with a conductive material, e.g., metal, in contact with the epitaxy. As future technology nodes grow smaller, there is a need to save room by connecting the conductive trench contact structures from the backside of the wafer/substrate instead of the front side, revealing the epitaxial source or drain structures for coupling with the conductive trench contact structures. This process can be referred to as sub-fin isolation by a backside substrate removal. Typically, some of the substrate is removed through etching. One problem is that the backside substrate etch may impose the risk of also etching out a portion of the epitaxial source or drain structures 118.

In accordance with embodiments described herein, various nucleation layers are formed at the base of the epitaxial source or drain structures to gain etch selectivity, to better control removal of the silicon at the sub-fin during a backside removal approach. In embodiments, a different nucleation layer is formed for NMOS and PMOS transistors.
For example, a high B-doped nucleation layer is used for PMOS and a C-doped nucleation layer is used for NMOS to improve etch selectivity.

Figures 2A-2C illustrate gate cut cross-sectional views of an NMOS region 200 including a gate-all-around integrated circuit structure 201 having a doped nucleation layer during various backside removal processing steps. Similarly, Figures 3A-3C illustrate gate cut cross-sectional views of a PMOS region 300 including a gate-all-around integrated circuit structure 301 having a doped nucleation layer during various backside removal processing steps.

Referring to Figure 2A, the gate-all-around integrated circuit structure 201 includes a plurality of horizontal nanowires 214 above a sub-fin (not shown in this view) of a substrate 202. A gate stack 220 (such as a gate electrode and gate dielectric stack) is over the plurality of nanowires 214, around individual nanowires 214, and over the sub-fin. Epitaxial source or drain structures 224 are included at opposite first and second ends of the plurality of nanowires 214. In one such embodiment, the epitaxial source or drain structures are non-discrete epitaxial source or drain structures, structural examples of which are described below. In another such embodiment, the epitaxial source or drain structures are discrete epitaxial source or drain structures, structural examples of which are also described below. External gate spacers 222A and internal gate spacers 222B are on either side of the gate stack 220 between the gate stack 220 and the epitaxial source or drain structures 224, where the external gate spacers 222A are above the internal gate spacers 222B. Spacer extensions (not shown) can be included between the epitaxial source or drain structures 224 and the substrate 202. The spacer extensions can be continuous with or discrete from the internal gate spacers 222B.
In addition, the internal gate spacers 222B can be continuous with or discrete from the external gate spacers 222A.

In accordance with the disclosed embodiments, Figure 2A further shows that a doped nucleation layer 225 is formed at a base of the epitaxial source or drain structures 224 and adjacent to the sub-fin to control the silicon substrate 202 etch during a backside reveal. The doped nucleation layer 225 provides etch selectivity with respect to removal of the intrinsic silicon at the sub-fin of the NMOS integrated circuit structure 201.

In embodiments, the doped nucleation layer 225 may comprise a carbon-doped nucleation layer. In one embodiment, the carbon-doped nucleation layer may have a carbon doping concentration of approximately 1E19/cm3 to 1E20/cm3. In one embodiment, the carbon-doped nucleation layer comprises carbon-doped silicon and phosphorous. In one embodiment, the carbon-doped silicon and phosphorous nucleation layer has approximately a 1E20/cm3 phosphorous concentration with a carbon doping of less than 1%.

Figure 2B shows the integrated circuit structure 201 after the substrate is etched away during the backside reveal process, and Figure 2C shows the integrated circuit structure 201 after the removed substrate is replaced by formation of a dielectric layer 226.

Similarly, Figure 3A shows a PMOS region 300 including a gate-all-around integrated circuit structure 301 having a plurality of horizontal nanowires 314 above a sub-fin (not shown in this view) of a substrate 302. A gate stack 320 (such as a gate electrode and gate dielectric stack) is over the plurality of nanowires 314, around individual nanowires 314, and over the sub-fin. Epitaxial source or drain structures 324 are included at opposite first and second ends of the plurality of nanowires 314. In one such embodiment, the epitaxial source or drain structures are non-discrete epitaxial source or drain structures, structural examples of which are described below.
In another such embodiment, the epitaxial source or drain structures are discrete epitaxial source or drain structures, structural examples of which are also described below. External gate spacers 322A and internal gate spacers 322B are on either side of the gate stack 320 between the gate stack 320 and the epitaxial source or drain structures 324, where the external gate spacers 322A are above the internal gate spacers 322B. Spacer extensions (not shown) can be included between the epitaxial source or drain structures 324 and the substrate 302. The spacer extensions can be continuous with or discrete from the internal gate spacers 322B. Also, the internal gate spacers 322B can be continuous with or discrete from the external gate spacers 322A.

In accordance with the disclosed embodiments, Figure 3A further shows that a doped nucleation layer 325 is formed at a base of the epitaxial source or drain structures 324 and adjacent to the sub-fin to control the silicon substrate 302 etch during a backside reveal. The doped nucleation layer 325 provides etch selectivity with respect to removal of the intrinsic silicon at the sub-fin of the PMOS integrated circuit structure 301.

In embodiments, the doped nucleation layer 325 may comprise a boron-doped nucleation layer. In one embodiment, the boron-doped nucleation layer may have a heavy boron doping concentration of approximately 1E20/cm3 to 1E21/cm3. In one embodiment, the heavy boron-doped nucleation layer comprises boron-doped silicon and germanium.

Figure 3B shows the integrated circuit structure 301 after the substrate is etched away during the backside reveal process. For PMOS, the silicon removal can usually be selective to SiGe, which is the P-type epitaxy.
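The NMOS and PMOS doping windows stated in the text (approximately 1E19-1E20/cm3 carbon and 1E20-1E21/cm3 boron, respectively) can be captured in a small range check; the helper function and its name are illustrative only, not part of the disclosure.

```python
# Nucleation-layer doping windows from the text, in atoms/cm^3.
NUCLEATION_WINDOWS = {
    "NMOS": (1e19, 1e20),  # carbon-doped nucleation layer
    "PMOS": (1e20, 1e21),  # heavy boron-doped nucleation layer
}

def in_nucleation_window(concentration_cm3, device):
    """Return True if a dopant concentration falls inside the stated
    nucleation-layer window for the given device type."""
    lo, hi = NUCLEATION_WINDOWS[device]
    return lo <= concentration_cm3 <= hi
```

For instance, a 5E19/cm3 carbon concentration sits inside the NMOS window but below the heavy-boron PMOS window.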
In this step, the silicon is etched selective to SiGe, and the heavy B-doped nucleation layer makes the etch even more selective, because boron typically slows the etch rate of silicon during an isotropic etch.

Figure 3C shows the integrated circuit structure 301 after the removed substrate is replaced by formation of a dielectric layer 326. Due to the presence of the doped nucleation layer, the silicon etch is made more selective and is less likely to etch the epitaxial source and drain structures.

As described above, in order to enable access to both conductive contact structures of source and drain contact structures, integrated circuit structures described herein may be fabricated using a back-side reveal of front-side structures fabrication approach. In some exemplary embodiments, reveal of the back-side of a transistor or other device structure entails wafer-level back-side processing. In contrast to a conventional TSV-type technology, a reveal of the back-side of a transistor as described herein may be performed at the density of the device cells, and even within sub-regions of a device. Furthermore, such a reveal of the back-side of a transistor may be performed to remove substantially all of a donor substrate upon which a device layer was disposed during front-side device processing. As such, a microns-deep TSV becomes unnecessary, with the thickness of semiconductor in the device cells following a reveal of the back-side of a transistor potentially being only tens or hundreds of nanometers.

Reveal techniques described herein may enable a paradigm shift from "bottom-up" device fabrication to "center-out" fabrication, where the "center" is any layer that is employed in front-side fabrication, revealed from the back-side, and again employed in back-side fabrication.
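The benefit of the doped nucleation layer can be framed as an etch-selectivity budget: for a fixed amount of sub-fin silicon to remove, higher Si:epi selectivity means less worst-case epi loss. A minimal sketch of that budget; both selectivity values below are hypothetical illustrations, not figures from the text.

```python
def worst_case_epi_loss_nm(si_removed_nm, selectivity):
    """Worst-case epitaxy loss while removing sub-fin silicon, for a
    given Si:epi etch selectivity (e.g. 50.0 means 50:1)."""
    if selectivity <= 0:
        raise ValueError("selectivity must be positive")
    return si_removed_nm / selectivity

# Removing 100 nm of sub-fin silicon:
undoped = worst_case_epi_loss_nm(100.0, 20.0)            # hypothetical 20:1
with_nucleation = worst_case_epi_loss_nm(100.0, 100.0)   # hypothetical 100:1
```

The same framing explains why the boron-slowed silicon etch rate in the isotropic step directly reduces the risk of epi etch-out.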
Processing of both a front-side and revealed back-side of a device structure may address many of the challenges associated with fabricating 3D ICs when primarily relying on front-side processing.

A reveal of the back-side of a transistor approach may be employed, for example, to remove at least a portion of a carrier layer and intervening layer of a donor-host substrate assembly. The process flow begins with an input of a donor-host substrate assembly. A thickness of a carrier layer in the donor-host substrate is polished (e.g., CMP) and/or etched with a wet or dry (e.g., plasma) etch process. Any grind, polish, and/or wet/dry etch process known to be suitable for the composition of the carrier layer may be employed. For example, where the carrier layer is a group IV semiconductor (e.g., silicon), a CMP slurry known to be suitable for thinning the semiconductor may be employed. Likewise, any wet etchant or plasma etch process known to be suitable for thinning the group IV semiconductor may also be employed.

In some embodiments, the above is preceded by cleaving the carrier layer along a fracture plane substantially parallel to the intervening layer. The cleaving or fracture process may be utilized to remove a substantial portion of the carrier layer as a bulk mass, reducing the polish or etch time needed to remove the carrier layer. For example, where a carrier layer is 400-900 µm in thickness, 100-700 µm may be cleaved off by practicing any blanket implant known to promote a wafer-level fracture. In some exemplary embodiments, a light element (e.g., H, He, or Li) is implanted to a uniform target depth within the carrier layer where the fracture plane is desired. Following such a cleaving process, the thickness of the carrier layer remaining in the donor-host substrate assembly may then be polished or etched to complete removal.
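The cleave-then-polish sequence above is simple thickness bookkeeping: the implant-defined fracture removes a bulk slab, and only the remainder must be polished or etched. The numbers below come from the ranges stated in the text, while the function itself is only an illustrative sketch.

```python
def remaining_after_cleave_um(carrier_um, cleaved_um):
    """Carrier thickness left for polish/etch after a wafer-level cleave."""
    if not 0 <= cleaved_um <= carrier_um:
        raise ValueError("cleaved thickness must be within the carrier")
    return carrier_um - cleaved_um

# A 700 um carrier (within the 400-900 um range) with 600 um cleaved off
# along the implant-defined fracture plane leaves 100 um to remove by
# grind, polish, and/or etch.
left = remaining_after_cleave_um(700.0, 600.0)
```

The design point is that the cleave trades a slow, whole-thickness polish for a fast bulk removal plus a short finishing step.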
Alternatively, where the carrier layer is not fractured, the grind, polish and/or etch operation may be employed to remove a greater thickness of the carrier layer.

Next, exposure of an intervening layer is detected. Detection is used to identify a point when the back-side surface of the donor substrate has advanced to nearly the device layer. Any endpoint detection technique known to be suitable for detecting a transition between the materials employed for the carrier layer and the intervening layer may be practiced. In some embodiments, one or more endpoint criteria are based on detecting a change in optical absorbance or emission of the back-side surface of the donor substrate during the polishing or etching performed. In some other embodiments, the endpoint criteria are associated with a change in optical absorbance or emission of byproducts during the polishing or etching of the donor substrate back-side surface. For example, absorbance or emission wavelengths associated with the carrier layer etch byproducts may change as a function of the different compositions of the carrier layer and intervening layer. In other embodiments, the endpoint criteria are associated with a change in mass of species in byproducts of polishing or etching the back-side surface of the donor substrate. For example, the byproducts of processing may be sampled through a quadrupole mass analyzer, and a change in the species mass may be correlated to the different compositions of the carrier layer and intervening layer.
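Each of the endpoint criteria above reduces to watching a monitored signal (optical emission or absorbance, byproduct species mass, or friction) for a characteristic change. A minimal sketch of that logic; the trace values and threshold are invented for illustration.

```python
def find_endpoint(signal, threshold):
    """Index of the first sample where the monitored carrier-byproduct
    signal drops below threshold, indicating the intervening layer has
    been exposed; None if no endpoint is observed."""
    for i, sample in enumerate(signal):
        if sample < threshold:
            return i
    return None

# Hypothetical emission trace: strong while carrier-layer byproducts
# dominate, falling sharply once the intervening layer is reached.
trace = [0.90, 0.88, 0.85, 0.40, 0.10]
endpoint = find_endpoint(trace, 0.5)
```

A production tool would filter noise and require the change to persist for several samples, but the core decision is this threshold crossing.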
In another exemplary embodiment, the endpoint criterion is associated with a change in friction between a back-side surface of the donor substrate and a polishing surface in contact with the back-side surface of the donor substrate.

Detection of the intervening layer may be enhanced where the removal process is selective to the carrier layer relative to the intervening layer, as non-uniformity in the carrier removal process may be mitigated by an etch rate delta between the carrier layer and intervening layer. Detection may even be skipped if the grind, polish and/or etch operation removes the intervening layer at a rate sufficiently below the rate at which the carrier layer is removed. If an endpoint criterion is not employed, a grind, polish and/or etch operation of a predetermined fixed duration may stop on the intervening layer material if the thickness of the intervening layer is sufficient for the selectivity of the etch. In some examples, the ratio of the carrier etch rate to the intervening layer etch rate is 3:1 to 10:1, or more.

Upon exposing the intervening layer, at least a portion of the intervening layer may be removed. For example, one or more component layers of the intervening layer may be removed. A thickness of the intervening layer may be removed uniformly by a polish, for example. Alternatively, a thickness of the intervening layer may be removed with a masked or blanket etch process. The process may employ the same polish or etch process as that employed to thin the carrier, or may be a distinct process with distinct process parameters. For example, where the intervening layer provides an etch stop for the carrier removal process, the latter operation may employ a different polish or etch process that favors removal of the intervening layer over removal of the device layer.
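The endpoint and selectivity logic described above can be sketched as follows. This is a hedged illustration, not process-control software: the threshold value is a hypothetical placeholder, and the monitored signal stands in for whichever endpoint criterion is used (optical absorbance/emission, byproduct mass, or friction).

```python
def endpoint_reached(signal: float, baseline: float, threshold: float = 0.1) -> bool:
    """Flag the carrier/intervening-layer transition when a monitored
    quantity (e.g., optical emission of etch byproducts, byproduct species
    mass, or pad friction) drifts from its carrier-layer baseline by more
    than `threshold` (fractional change; value is illustrative only)."""
    return abs(signal - baseline) / baseline > threshold

def intervening_loss_um(carrier_overetch_um: float, selectivity: float) -> float:
    """Intervening-layer thickness consumed during a timed overetch, given
    the carrier:intervening etch-rate ratio (3:1 to 10:1, or more, per the
    text). High selectivity lets a fixed-duration etch stop on the layer."""
    return carrier_overetch_um / selectivity

# A 30% drop in the monitored byproduct signal trips the endpoint.
tripped = endpoint_reached(signal=0.7, baseline=1.0)

# 3 um of carrier-equivalent overetch at 10:1 selectivity consumes only
# 0.3 um of the intervening layer, so a modest layer thickness suffices.
loss = intervening_loss_um(3.0, 10.0)
```

The second helper shows why endpoint detection "may even be skipped" at high selectivity: the intervening layer only needs to be thicker than the carrier overetch divided by the selectivity ratio.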
Where less than a few hundred nanometers of intervening layer thickness is to be removed, the removal process may be relatively slow, optimized for across-wafer uniformity, and more precisely controlled than that employed for removal of the carrier layer. A CMP process employed may, for example, employ a slurry that offers very high selectivity (e.g., 100:1-300:1, or more) between semiconductor (e.g., silicon) and dielectric material (e.g., SiO) surrounding the device layer and embedded within the intervening layer, for example, as electrical isolation between adjacent device regions.

For embodiments where the device layer is revealed through complete removal of the intervening layer, back-side processing may commence on an exposed back-side of the device layer or specific device regions therein. In some embodiments, the back-side device layer processing includes a further polish or wet/dry etch through a thickness of the device layer disposed between the intervening layer and a device region previously fabricated in the device layer, such as a source or drain region.

In some embodiments where the carrier layer, intervening layer, or device layer back-side is recessed with a wet and/or plasma etch, such an etch may be a patterned etch or a materially selective etch that imparts significant non-planarity or topography into the device layer back-side surface. As described further below, the patterning may be within a device cell (i.e., "intra-cell" patterning) or may be across device cells (i.e., "inter-cell" patterning). In some patterned etch embodiments, at least a partial thickness of the intervening layer is employed as a hard mask for back-side device layer patterning.
Hence, a masked etch process may precede a correspondingly masked device layer etch.

The above described processing scheme may result in a donor-host substrate assembly that includes IC devices that have a back-side of an intervening layer, a back-side of the device layer, a back-side of one or more semiconductor regions within the device layer, and/or front-side metallization revealed. Additional back-side processing of any of these revealed regions may then be performed during downstream processing.

It is to be appreciated that, as used throughout the disclosure, a sub-fin, a nanowire, a nanoribbon, or a fin described herein may be a silicon sub-fin, a silicon nanowire, a silicon nanoribbon, or a silicon fin. As used throughout, a silicon layer or structure may be used to describe a silicon material composed of a very substantial amount of, if not all, silicon. However, it is to be appreciated that, practically, 100% pure Si may be difficult to form and, hence, could include a tiny percentage of carbon, germanium or tin. Such impurities may be included as an unavoidable impurity or component during deposition of Si or may "contaminate" the Si upon diffusion during post deposition processing. As such, embodiments described herein directed to a silicon layer or structure may include a silicon layer or structure that contains a relatively small amount, e.g., "impurity" level, of non-Si atoms or species, such as Ge, C or Sn. It is to be appreciated that a silicon layer or structure as described herein may be undoped or may be doped with dopant atoms such as boron, phosphorus or arsenic.

It is to be appreciated that, as used throughout the disclosure, a sub-fin, a nanowire, a nanoribbon, or a fin described herein may be a silicon germanium sub-fin, a silicon germanium nanowire, a silicon germanium nanoribbon, or a silicon germanium fin.
As used throughout, a silicon germanium layer or structure may be used to describe a silicon germanium material composed of substantial portions of both silicon and germanium, such as at least 5% of each. In some embodiments, the amount of germanium is greater than the amount of silicon. In particular embodiments, a silicon germanium layer or structure includes approximately 60% germanium and approximately 40% silicon (Si40Ge60). In other embodiments, the amount of silicon is greater than the amount of germanium. In particular embodiments, a silicon germanium layer or structure includes approximately 30% germanium and approximately 70% silicon (Si70Ge30). It is to be appreciated that, practically, 100% pure silicon germanium (referred to generally as SiGe) may be difficult to form and, hence, could include a tiny percentage of carbon or tin. Such impurities may be included as an unavoidable impurity or component during deposition of SiGe or may "contaminate" the SiGe upon diffusion during post deposition processing. As such, embodiments described herein directed to a silicon germanium layer or structure may include a silicon germanium layer or structure that contains a relatively small amount, e.g., "impurity" level, of non-Ge and non-Si atoms or species, such as carbon or tin. It is to be appreciated that a silicon germanium layer or structure as described herein may be undoped or may be doped with dopant atoms such as boron, phosphorus or arsenic.

It is to be appreciated that embodiments described herein may be implemented to fabricate nanowire and/or nanoribbon structures having a different number of active wire/ribbon channels. It is to be appreciated that embodiments described herein may involve backside removal approaches to achieve such structures.
Embodiments described herein may be implemented to enable the fabrication of nanowire/nanoribbon-based CMOS architectures.

In an embodiment, in order to engineer different devices having different drive-current strengths, a selective depopulation flow can be patterned with lithography so that ribbons and wires (RAW) are depopulated only from specific devices. In another embodiment, the entire wafer may be depopulated uniformly so that all devices have the same number of RAW.

As mentioned above, nanowire release processing may be performed through a replacement gate trench. Examples of such release processes are described below. Additionally, in another aspect, backend (BE) interconnect scaling can result in lower performance and higher manufacturing cost due to patterning complexity. Embodiments described herein may be implemented to enable front- and backside interconnect integration for nanowire transistors. Embodiments described herein may provide an approach to achieve a relatively wider interconnect pitch. The result may be improved product performance and lower patterning costs. Embodiments may be implemented to enable robust functionality of scaled nanowire or nanoribbon transistors with low power and high performance.

One or more embodiments described herein are directed to dual epitaxial (EPI) connections for nanowire or nanoribbon transistors using partial source or drain (SD) and asymmetric trench contact (TCN) depth. In an embodiment, an integrated circuit structure is fabricated by forming source-drain openings of nanowire/nanoribbon transistors which are partially filled with SD epitaxy. A remainder of the opening is filled with a conductive material.
Deep trench formation on one of the source or drain sides enables direct contact to a backside interconnect level.

In an exemplary process flow, Figures 4A-4J illustrate cross-sectional views of various operations in a method of fabricating a gate-all-around integrated circuit structure, in accordance with an embodiment of the present disclosure.

Referring to Figure 4A, a method of fabricating an integrated circuit structure includes forming a starting stack 400 which includes alternating silicon germanium layers 404 and silicon layers 406 above a fin 402, such as a silicon fin. The silicon layers 406 may be referred to as a vertical arrangement of silicon nanowires. A protective cap 408 may be formed above the alternating silicon germanium layers 404 and silicon layers 406, as is depicted.

Referring to Figure 4B, a gate stack 410 is formed over the vertical arrangement of nanowires 406. Portions of the vertical arrangement of nanowires 406 are then released by removing portions of the silicon germanium layers 404 to provide recessed silicon germanium layers 404' and cavities 412, as is depicted in Figure 4C.

It is to be appreciated that the structure of Figure 4C may be fabricated to completion without first performing the deep etch and asymmetric contact processing described below in association with Figure 4D. In either case (e.g., with or without asymmetric contact processing), in an embodiment, a fabrication process involves use of a process scheme that provides a gate-all-around integrated circuit structure having a depopulated channel structure, an example of which is described above in association with Figures 1, 2A-2C and 3A-3B.

Referring to Figure 4D, upper gate spacers 414 are formed at sidewalls of the gate structure 410. Cavity spacers 416 are formed in the cavities 412 beneath the upper gate spacers 414.
A deep trench contact etch is then performed to form trenches 418 and to form recessed nanowires 406'. A sacrificial material 420 is then formed in the trenches 418, as is depicted in Figure 4E. Although not shown, a doped nucleation layer 225/325 may be formed at this point in the process according to the disclosed embodiments.

Referring to Figure 4F, a first epitaxial source or drain structure (e.g., left-hand features 422) is formed at a first end of the vertical arrangement of nanowires 406'. A second epitaxial source or drain structure (e.g., right-hand features 422) is formed at a second end of the vertical arrangement of nanowires 406'. An inter-layer dielectric (ILD) material 424 is then formed at the sides of the gate electrode 410 and adjacent to the source or drain structures 422, as is depicted in Figure 4G.

Referring to Figure 4H, a replacement gate process is used to form a permanent gate dielectric 428 and a permanent gate electrode 426. In an embodiment, subsequent to removal of the gate structure 410 and formation of the permanent gate dielectric 428 and the permanent gate electrode 426, the recessed silicon germanium layers 404' are removed to leave upper active nanowires or nanoribbons 406'. In an embodiment, the recessed silicon germanium layers 404' are removed selectively with a wet etch that selectively removes the silicon germanium while not etching the silicon layers. Etch chemistries such as carboxylic acid/nitric acid/HF chemistry and citric acid/nitric acid/HF, for example, may be utilized to selectively etch the silicon germanium. Halide-based dry etches or plasma-enhanced vapor etches may also be used to achieve the embodiments herein.

Referring again to Figure 4H, one or more of the bottommost nanowires or nanoribbons 406' may ultimately be targeted for removal at location 499, e.g., by an approach described in association with Figures 1, 2A-2C and 3A-3B.
The permanent gate dielectric 428 and the permanent gate electrode 426 are formed to surround the nanowires or nanoribbons 406' and the targeted nanowire or nanoribbon at location 499.

Referring to Figure 4I, the ILD material 424 is then removed. The sacrificial material 420 is then removed from one of the source or drain locations (e.g., the right-hand side) to form trench 432, but is not removed from the other of the source or drain locations, forming trench 430.

Referring to Figure 4J, a first conductive contact structure 434 is formed coupled to the first epitaxial source or drain structure (e.g., left-hand features 422). A second conductive contact structure 436 is formed coupled to the second epitaxial source or drain structure (e.g., right-hand features 422). The second conductive contact structure 436 is formed deeper along the fin 402 than the first conductive contact structure 434. In an embodiment, although not depicted in Figure 4J, the method further includes forming an exposed surface of the second conductive contact structure 436 at a bottom of the fin 402.

In an embodiment, the second conductive contact structure 436 is deeper along the fin 402 than the first conductive contact structure 434, as is depicted. In one such embodiment, the first conductive contact structure 434 is not along the fin 402, as is depicted. In another such embodiment, not depicted, the first conductive contact structure 434 is partially along the fin 402. In an embodiment, the second conductive contact structure 436 is along an entirety of the fin 402.
In an embodiment, although not depicted, in the case that the bottom of the fin 402 is exposed by a backside substrate removal process, the second conductive contact structure 436 has an exposed surface at a bottom of the fin 402.

It is to be appreciated that the structures resulting from the above exemplary processing schemes may be used in a same or similar form for subsequent processing operations to complete device fabrication, such as PMOS and/or NMOS device fabrication.

In an embodiment, the fins (and, possibly, nanowires) are composed of a crystalline silicon germanium layer which may be doped with a charge carrier, such as but not limited to phosphorus, arsenic, boron, gallium or a combination thereof.

In an embodiment, trench isolation regions (trench isolation structures or trench isolation layers) described throughout may be composed of a material suitable to ultimately electrically isolate, or contribute to the isolation of, portions of a permanent gate structure from an underlying bulk substrate, or to isolate active regions formed within an underlying bulk substrate, such as isolating fin active regions. For example, in one embodiment, a trench isolation region is composed of a dielectric material such as, but not limited to, silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride.

A gate line may be composed of a gate electrode stack which includes a gate dielectric layer and a gate electrode layer. In an embodiment, the gate electrode of the gate electrode stack is composed of a metal gate and the gate dielectric layer is composed of a high-k material.
For example, in one embodiment, the gate dielectric layer is composed of a material such as, but not limited to, hafnium oxide, hafnium oxy-nitride, hafnium silicate, lanthanum oxide, zirconium oxide, zirconium silicate, tantalum oxide, barium strontium titanate, barium titanate, strontium titanate, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, lead zinc niobate, or a combination thereof. Furthermore, a portion of the gate dielectric layer may include a layer of native oxide formed from the top few layers of the substrate fin. In an embodiment, the gate dielectric layer is composed of a top high-k portion and a lower portion composed of an oxide of a semiconductor material. In one embodiment, the gate dielectric layer is composed of a top portion of hafnium oxide and a bottom portion of silicon dioxide or silicon oxy-nitride. In some implementations, a portion of the gate dielectric is a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate.

In one embodiment, the gate electrode layer is composed of a metal layer such as, but not limited to, metal nitrides, metal carbides, metal silicides, metal aluminides, hafnium, zirconium, titanium, tantalum, aluminum, ruthenium, palladium, platinum, cobalt, nickel or conductive metal oxides. In a specific embodiment, the gate electrode layer is composed of a non-workfunction-setting fill material formed above a metal workfunction-setting layer. The gate electrode layer may consist of a P-type workfunction metal or an N-type workfunction metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some implementations, the gate electrode layer may consist of a stack of two or more metal layers, where one or more metal layers are workfunction metal layers and at least one metal layer is a conductive fill layer.
For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a workfunction that is between about 4.9 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a workfunction that is between about 3.9 eV and about 4.2 eV. In some implementations, the gate electrode may consist of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In another implementation, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In further implementations of the disclosure, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode layer may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.

Spacers associated with the gate electrode stacks may be composed of a material suitable to ultimately electrically isolate, or contribute to the isolation of, a permanent gate structure from adjacent conductive contacts, such as self-aligned contacts.
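The workfunction windows stated above can be captured in a small lookup sketch. The eV ranges come from the text; the table and helper names are illustrative assumptions, not part of the disclosure.

```python
# Approximate workfunction windows from the description (eV).
WORKFUNCTION_WINDOW_EV = {
    "PMOS": (4.9, 5.2),  # P-type metals, e.g., Ru, Pd, Pt, Co, Ni
    "NMOS": (3.9, 4.2),  # N-type metals, e.g., Hf, Zr, Ti, Ta, Al
}

def metal_suits_device(device: str, metal_workfunction_ev: float) -> bool:
    """Check whether a candidate metal's workfunction falls within the
    approximate window for the given device type ("PMOS" or "NMOS")."""
    lo, hi = WORKFUNCTION_WINDOW_EV[device]
    return lo <= metal_workfunction_ev <= hi

# A ~5.1 eV metal lands in the PMOS window but not the NMOS window.
pmos_ok = metal_suits_device("PMOS", 5.1)
nmos_ok = metal_suits_device("NMOS", 5.1)
```

The check mirrors the selection rule in the text: the workfunction-setting layer is chosen per device polarity, with the conductive fill layer chosen separately.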
For example, in one embodiment, the spacers are composed of a dielectric material such as, but not limited to, silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride.

A gate contact and overlying gate contact via may be composed of a conductive material. In an embodiment, one or more of the contacts or vias are composed of a metal species. The metal species may be a pure metal, such as tungsten, nickel, or cobalt, or may be an alloy such as a metal-metal alloy or a metal-semiconductor alloy (e.g., a silicide material).

It is to be appreciated that not all aspects of the processes described above need be practiced to fall within the spirit and scope of embodiments of the present disclosure. Also, the processes described herein may be used to fabricate one or a plurality of semiconductor devices. The semiconductor devices may be transistors or like devices. For example, in an embodiment, the semiconductor devices are metal-oxide semiconductor (MOS) transistors for logic or memory, or are bipolar transistors. Also, in an embodiment, the semiconductor devices have a three-dimensional architecture, such as a tri-gate device, an independently accessed double gate device, or a FIN-FET. One or more embodiments may be particularly useful for fabricating semiconductor devices at a sub-10 nanometer (10 nm) technology node.

In an embodiment, as used throughout the present description, an interlayer dielectric (ILD) material is composed of or includes a layer of a dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, oxides of silicon (e.g., silicon dioxide (SiO2)), doped oxides of silicon, fluorinated oxides of silicon, carbon-doped oxides of silicon, various low-k dielectric materials known in the arts, and combinations thereof.
The interlayer dielectric material may be formed by conventional techniques, such as, for example, chemical vapor deposition (CVD), physical vapor deposition (PVD), or by other deposition methods.

In an embodiment, as is also used throughout the present description, hardmask materials, capping layers, or plugs are composed of dielectric materials different from the interlayer dielectric material. In one embodiment, different hardmask, capping or plug materials may be used in different regions so as to provide different growth or etch selectivity to each other and to the underlying dielectric and metal layers. In some embodiments, a hardmask, capping or plug layer includes a layer of a nitride of silicon (e.g., silicon nitride) or a layer of an oxide of silicon, or both, or a combination thereof. Other suitable materials may include carbon-based materials. Other hardmask, capping or plug layers known in the arts may be used depending upon the particular implementation. The hardmask, capping or plug layers may be formed by CVD, PVD, or by other deposition methods.

Referring to Figure 5, an IC device assembly 500 includes components having one or more integrated circuit structures described herein. The IC device assembly 500 includes a number of components disposed on a circuit board 502 (which may be, e.g., a motherboard). The IC device assembly 500 includes components disposed on a first face 540 of the circuit board 502 and an opposing second face 542 of the circuit board 502. Generally, components may be disposed on one or both faces 540 and 542.
In particular, any suitable ones of the components of the IC device assembly 500 may include a number of transistor architectures utilizing IC structures having a doped nucleation layer at the base of epitaxial source and drain structures, such as disclosed herein.

In some embodiments, the circuit board 502 may be a printed circuit board (PCB) including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to the circuit board 502. In other embodiments, the circuit board 502 may be a non-PCB substrate.

The IC device assembly 500 illustrated in Figure 5 includes a package-on-interposer structure 536 coupled to the first face 540 of the circuit board 502 by coupling components 516. The coupling components 516 may electrically and mechanically couple the package-on-interposer structure 536 to the circuit board 502, and may include solder balls, male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure.

The package-on-interposer structure 536 may include an IC package 520 coupled to an interposer 504 by coupling components 518. The coupling components 518 may take any suitable form for the application, such as the forms discussed above with reference to the coupling components 516. Although a single IC package 520 is shown, multiple IC packages may be coupled to the interposer 504. It is to be appreciated that additional interposers may be coupled to the interposer 504. The interposer 504 may provide an intervening substrate used to bridge the circuit board 502 and the IC package 520. The IC package 520 may be or include, for example, a die or any other suitable component.
Generally, the interposer 504 may spread a connection to a wider pitch or reroute a connection to a different connection. For example, the interposer 504 may couple the IC package 520 (e.g., a die) to a ball grid array (BGA) of the coupling components 516 for coupling to the circuit board 502. In the embodiment illustrated in Figure 5, the IC package 520 and the circuit board 502 are attached to opposing sides of the interposer 504. In other embodiments, the IC package 520 and the circuit board 502 may be attached to a same side of the interposer 504. In some embodiments, three or more components may be interconnected by way of the interposer 504.

The interposer 504 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In some implementations, the interposer 504 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The interposer 504 may include metal interconnects 510 and vias 508, including but not limited to through-silicon vias (TSVs) 506. The interposer 504 may further include embedded devices, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) devices, and memory devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on the interposer 504. The package-on-interposer structure 536 may take the form of any of the package-on-interposer structures known in the art.

The IC device assembly 500 may include an IC package 524 coupled to the first face 540 of the circuit board 502 by coupling components 522.
The coupling components 522 may take the form of any of the embodiments discussed above with reference to the coupling components 516, and the IC package 524 may take the form of any of the embodiments discussed above with reference to the IC package 520.

The IC device assembly 500 illustrated in Figure 5 includes a package-on-package structure 534 coupled to the second face 542 of the circuit board 502 by coupling components 528. The package-on-package structure 534 may include an IC package 526 and an IC package 532 coupled together by coupling components 530 such that the IC package 526 is disposed between the circuit board 502 and the IC package 532. The coupling components 528 and 530 may take the form of any of the embodiments of the coupling components 516 discussed above, and the IC packages 526 and 532 may take the form of any of the embodiments of the IC package 520 discussed above. The package-on-package structure 534 may be configured in accordance with any of the package-on-package structures known in the art.

Figure 6 illustrates a computing device 600 in accordance with one implementation of the disclosure. The computing device 600 houses a board 602. The board 602 may include a number of components, including but not limited to a processor 604 and at least one communication chip 606. The processor 604 is physically and electrically coupled to the board 602. In some implementations, the at least one communication chip 606 is also physically and electrically coupled to the board 602. In further implementations, the communication chip 606 is part of the processor 604.

Depending on its applications, computing device 600 may include other components that may or may not be physically and electrically coupled to the board 602.
These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 606 enables wireless communications for the transfer of data to and from the computing device 600. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 606 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 600 may include a plurality of communication chips 606. For instance, a first communication chip 606 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 606 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 604 of the computing device 600 includes an integrated circuit die packaged within the processor 604.
In some implementations of the disclosure, the integrated circuit die of the processor includes one or more transistor architectures utilizing IC structures having a doped nucleation layer at the base of epitaxial source and drain structures, in accordance with implementations of embodiments of the disclosure. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 606 also includes an integrated circuit die packaged within the communication chip 606. In accordance with another implementation of embodiments of the disclosure, the integrated circuit die of the communication chip includes one or more transistor architectures utilizing IC structures having a doped nucleation layer at the base of epitaxial source and drain structures, in accordance with implementations of embodiments of the disclosure.

In further implementations, another component housed within the computing device 600 may contain an integrated circuit die that includes one or more transistor architectures utilizing IC structures having a doped nucleation layer at the base of epitaxial source and drain structures, in accordance with implementations of embodiments of the disclosure.

In various implementations, the computing device 600 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder.
In further implementations, the computing device 600 may be any other electronic device that processes data.

Thus, embodiments described herein include transistor architectures utilizing IC structures having a doped nucleation layer at the base of epitaxial source and drain structures.

The above description of illustrated implementations of embodiments of the disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. While specific implementations of, and examples for, the disclosure are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize.

These modifications may be made to the disclosure in light of the above detailed description. The terms used in the following claims should not be construed to limit the disclosure to the specific implementations disclosed in the specification and the claims. Rather, the scope of the disclosure is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Example embodiment 1: An integrated circuit structure comprises a plurality of horizontal nanowires above a sub-fin. A gate stack is over the plurality of nanowires and the sub-fin.
Epitaxial source or drain structures are on opposite ends of the plurality of horizontal nanowires, and a doped nucleation layer is at a base of the epitaxial source or drain structures adjacent to the sub-fin.

Example embodiment 2: The integrated circuit structure of embodiment 1, wherein the integrated circuit structure comprises an NMOS transistor and wherein the doped nucleation layer comprises a carbon-doped nucleation layer.

Example embodiment 3: The integrated circuit structure of embodiment 2, wherein the carbon-doped nucleation layer comprises carbon-doped silicon and phosphorous.

Example embodiment 4: The integrated circuit structure of embodiment 1, wherein the integrated circuit structure comprises a PMOS transistor and wherein the doped nucleation layer comprises a boron-doped nucleation layer.

Example embodiment 5: The integrated circuit structure of embodiment 4, wherein the boron-doped nucleation layer comprises heavy boron-doped silicon and germanium.

Example embodiment 6: The integrated circuit structure of embodiment 1, 2, 3, 4, or 5, wherein internal gate spacers are on either side of the gate stack between the gate stack and the epitaxial source or drain structures.

Example embodiment 7: The integrated circuit structure of embodiment 3, wherein the epitaxial source or drain structures are non-discrete epitaxial source or drain structures.

Example embodiment 8: The integrated circuit structure of embodiment 3, wherein the epitaxial source or drain structures are discrete epitaxial source or drain structures.

Example embodiment 9: A computing device, comprising a board, and a component coupled to the board. The component includes an integrated circuit structure comprising a plurality of horizontal nanowires above a sub-fin. A gate stack is over the plurality of horizontal nanowires and the sub-fin. Epitaxial source or drain structures are on opposite ends of the plurality of horizontal nanowires.
A doped nucleation layer is at a base of the epitaxial source or drain structures adjacent to the sub-fin.

Example embodiment 10: The computing device of embodiment 9, further comprising: a memory coupled to the board.

Example embodiment 11: The computing device of embodiment 9 or 10, further comprising: a communication chip coupled to the board.

Example embodiment 12: The computing device of embodiment 9, 10 or 11, further comprising: a battery coupled to the board.

Example embodiment 13: The computing device of embodiment 9, 10, 11 or 12, wherein the component is a packaged integrated circuit die.

Example embodiment 14: An integrated circuit structure comprises an NMOS region. The NMOS region comprises a first gate structure above a first sub-fin. First epitaxial source or drain structures are on opposite sides of the first gate structure. A carbon-doped nucleation layer is at a first base of the first epitaxial source or drain structures adjacent to the first sub-fin. A PMOS region comprises a second gate structure above a second sub-fin. Second epitaxial source or drain structures are on opposite sides of the second gate structure. A boron-doped nucleation layer is at a second base of the second epitaxial source or drain structures adjacent to the second sub-fin.

Example embodiment 15: The integrated circuit structure of embodiment 14, wherein the carbon-doped nucleation layer comprises carbon-doped silicon and phosphorous.

Example embodiment 16: The integrated circuit structure of embodiment 14 or 15, wherein the boron-doped nucleation layer comprises a heavy boron-doped nucleation layer.

Example embodiment 17: The integrated circuit structure of embodiment 14, 15 or 16, wherein the boron-doped nucleation layer comprises boron-doped silicon and germanium.

Example embodiment 18: A computing device comprises a board and a component coupled to the board. The component includes an integrated circuit structure comprising an NMOS region and a PMOS region.
The NMOS region comprises a first gate structure above a first sub-fin. First epitaxial source or drain structures are on opposite sides of the first gate structure. A carbon-doped nucleation layer is at a first base of the first epitaxial source or drain structures adjacent to the first sub-fin. The PMOS region comprises a second gate structure above a second sub-fin. Second epitaxial source or drain structures are on opposite sides of the second gate structure. A boron-doped nucleation layer is at a second base of the second epitaxial source or drain structures adjacent to the second sub-fin.

Example embodiment 19: The computing device of embodiment 18, further comprising: a memory coupled to the board.

Example embodiment 20: The computing device of embodiment 18 or 19, further comprising: a communication chip coupled to the board.
Methods are disclosed, such as those involving increasing the density of isolated features in an integrated circuit (200). In one or more embodiments, a method is provided for forming an integrated circuit (200) with a pattern of isolated features having a final density of isolated features that is greater than a starting density of isolated features in the integrated circuit (200) by a multiple of two or more. The method can include forming a pattern of pillars (122) having a density X, and forming a pattern of holes (140) amongst the pillars (122), the holes (140) having a density at least X. The pillars (122) can be selectively removed to form a pattern of holes (141) having a density at least 2X. In some embodiments, plugs (150) can be formed in the pattern of holes (141), such as by epitaxial deposition on the substrate (100), in order to provide a pattern of pillars having a density 2X. In other embodiments, the pattern of holes (141) can be transferred to the substrate (100) by etching.
1. A method comprising: providing a substrate; forming a first set of pillars on the substrate; and depositing spacer material on the first set of pillars to form a first pattern of holes, wherein at least one of the holes is located between pillars of the first set, and wherein, after depositing, spacer material fills a space between a first pillar of the first set and a nearest neighboring pillar of the first set.

2. The method of Claim 1, wherein the first set of pillars comprises at least one column and at least one row, the at least one column being oriented transverse to the at least one row, each of the at least one column and the at least one row comprising a plurality of pillars.

3. The method of Claim 2, wherein the first pattern of holes comprises at least three columns and at least three rows.

4. The method of Claim 1, wherein the first set of pillars comprises pillars having a generally circular cross section.

5. The method of Claim 1, wherein the first pattern of holes comprises holes having a generally circular cross section.

6. The method of Claim 1, wherein the spacer material is an insulating material.

7. The method of Claim 1, wherein the spacer material is a semiconducting material or a conducting material.

8. The method of Claim 1, wherein forming a first set of pillars comprises: forming a first hard mask layer over the substrate; forming a selectively definable layer over the first hard mask layer, the selectively definable layer comprising a pattern of pillars; trimming the pillars of the selectively definable layer; and etching the first hard mask layer through the selectively definable layer to transfer the pattern of trimmed pillars to the first hard mask layer.

9.
The method of Claim 8, wherein trimming the pillars of the selectively definable layer comprises wet etching the selectively definable layer.

10. The method of Claim 8, further comprising: forming a second hard mask layer over the first hard mask layer before forming the selectively definable layer, wherein the selectively definable layer is formed over the second hard mask layer; and etching the second hard mask layer through the selectively definable layer before etching the first hard mask layer.

11. The method of Claim 1, further comprising, after depositing the spacer material, isotropically etching the spacer material to increase a width of the holes.

12. The method of Claim 11, wherein, after isotropically etching, the width of the holes is between about 50% and about 150% of a width of the pillars.

13. The method of Claim 1, further comprising, after depositing the spacer material, anisotropically etching the spacer material to expose the pillars of the first set.

14. The method of Claim 13, further comprising, after exposing the pillars of the first set, selectively etching the first set of pillars to form a second pattern of holes, the second pattern of holes comprising the holes of the first pattern of holes and the holes created by selectively etching the first set of pillars.

15. The method of Claim 14, further comprising forming a second set of pillars by depositing pillars into the second pattern of holes.

16. A method comprising: providing a substrate; forming a plurality of pillars on the substrate, the pillars having a density X; and blanket depositing material on the pillars to form a pattern of holes on a level of the pillars, the holes having a density at least X.

17. The method of Claim 16, wherein forming the plurality of pillars comprises forming pillars having a generally circular cross section.

18. The method of Claim 16, wherein the plurality of pillars comprise transparent carbon.

19.
The method of Claim 16, wherein forming the plurality of pillars comprises etching the pillars using a mask.

21. The method of Claim 16, wherein the holes of the pattern have a generally circular cross section.

22. The method of Claim 16, further comprising removing the plurality of pillars to form a pattern of holes of density at least 2X.

23. The method of Claim 22, further comprising forming plugs in the pattern of holes of density at least 2X.

24. The method of Claim 23, wherein forming plugs comprises epitaxially depositing plugs on the substrate inside the holes.

25. A method comprising: providing a substrate; forming a set of pillars on the substrate, wherein the pillars have a width of about , wherein a first pillar is separated from a second pillar by a distance of about , and wherein the first pillar is separated from a third pillar by a distance of about ; and depositing material on the set of pillars to form a pattern of holes, wherein the pattern comprises a hole between the first pillar and the third pillar, wherein Y is a real number greater than zero.

26. The method of Claim 25, wherein forming a set of pillars comprises forming pillars having a generally circular cross section.

27. The method of Claim 25, wherein depositing comprises filling a space between the first pillar and the second pillar.

28. The method of Claim 25, wherein the pattern comprises holes having a generally circular cross section.

30. A method comprising: providing a set of pillars on a substrate, the pillars arranged in two or more rows and two or more columns; blanket depositing spacer material on the set of pillars to form a pattern of holes adjacent the pillars; isotropically etching the spacer material to enlarge the width of the holes; and anisotropically etching the spacer material to expose the pillars.

31.
The method of Claim 30, wherein the set of pillars has a density X and depositing spacer material forms a pattern of holes defined by the spacer material, wherein the holes have a density at least X.

32. The method of Claim 31, further comprising selectively removing the pillars to form a pattern of holes having a density at least 2X.

33. The method of Claim 30, wherein the pillars have a generally circular cross section.

34. The method of Claim 30, wherein, after isotropically etching, the holes have a generally circular cross section.

35. The method of Claim 30, wherein isotropically etching the spacer material is performed before anisotropically etching the spacer material.
METHOD FOR FORMING HIGH DENSITY PATTERNS

BACKGROUND OF THE INVENTION

Field of the Invention

[0001] Embodiments of the invention relate to semiconductor processing, and more particularly to masking techniques.

Description of the Related Art

[0002] There is a constant demand for faster and smaller integrated circuits. Faster and smaller integrated circuits may be made by reducing the sizes of, and separation distances between, the individual elements or electronic devices forming an integrated circuit. This process of increasing the density of circuit elements across a substrate is typically referred to as "scaling." As a result of the demand for faster and smaller integrated circuits, there is a constant need for methods of scaling to form isolated features with a high density.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The appended drawings are schematic, not necessarily drawn to scale, and are meant to illustrate and not to limit embodiments of the invention.

[0004] Figure 1A is a flow chart illustrating a process in accordance with one or more embodiments of the invention.

[0005] Figure 1B is another flow chart illustrating a process in accordance with one or more embodiments of the invention.

[0006] Figure 2 illustrates a cross-sectional side view of a partially formed integrated circuit in accordance with one or more embodiments of the invention.

[0007] Figure 2A illustrates a top view of a partially formed integrated circuit in accordance with one or more embodiments of the invention.

[0008] Figure 2B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 2A along the sectional line 2B shown in Figure 2A.

[0009] Figure 3A illustrates a top view of the partially formed integrated circuit of Figure 2A after the pattern of pillars has been trimmed in accordance with one or more embodiments of the invention.

[0010] Figure 3B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 3A along the sectional line 3B shown in Figure 3A.
[0011] Figure 4A illustrates a top view of the partially formed integrated circuit of Figure 3A after transferring the pattern of pillars to underlying masking layers in accordance with one or more embodiments of the invention.

[0012] Figure 4B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 4A along the sectional line 4B shown in Figure 4A.

[0013] Figure 5A illustrates a top view of the partially formed integrated circuit of Figure 4A after one of the masking layers has been removed in accordance with one or more embodiments of the invention.

[0014] Figure 5B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 5A along the sectional line 5B shown in Figure 5A.

[0015] Figure 6A illustrates a top view of the partially formed integrated circuit of Figure 5A during deposition of a spacer material on pillars in accordance with one or more embodiments of the invention.

[0016] Figure 6B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 6A along the sectional line 6B shown in Figure 6A.

[0017] Figure 7A illustrates a top view of the partially formed integrated circuit of Figure 6A after deposition of the spacer material in accordance with one or more embodiments of the invention.

[0018] Figure 7B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 7A along the sectional line 7B shown in Figure 7A.

[0019] Figure 8A illustrates a top view of the partially formed integrated circuit of Figure 7A after etching the spacer material in accordance with one or more embodiments of the invention.

[0020] Figure 8B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 8A along the sectional line 8B shown in Figure 8A.

[0021] Figure 9A illustrates a top view of the partially formed integrated circuit of Figure 8A after further etching the spacer material in accordance with one or more embodiments of the invention.
[0022] Figure 9B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 9A along the sectional line 9B shown in Figure 9A.

[0023] Figure 10A illustrates a top view of the partially formed integrated circuit of Figure 9A after etching the pillars in accordance with one or more embodiments of the invention.

[0024] Figure 10B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 10A along the sectional line 10B shown in Figure 10A.

[0025] Figure 11A illustrates a top view of the partially formed integrated circuit of Figure 10A after forming plugs in accordance with one or more embodiments of the invention.

[0026] Figure 11B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 11A along the sectional line 11B shown in Figure 11A.

[0027] Figure 12A illustrates a top view of the partially formed integrated circuit of Figure 11A after removing the spacer material in accordance with one or more embodiments of the invention.

[0028] Figure 12B illustrates a cross-sectional side view of the partially formed integrated circuit of Figure 12A along the sectional line 12B shown in Figure 12A.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0029] Embodiments described herein provide methods of forming patterns of isolated features having a high density. In one or more embodiments, a method is provided for forming an integrated circuit with a pattern of features having a final density of features that is greater than a starting density of features in the integrated circuit by a multiple of two or more. The method can include forming a pattern of isolated pillars having a density X. The method can further include forming spacers around the pillars, such as by blanket depositing spacer material on and around the pillars and then isotropically etching the spacer material to form a pattern of holes having a density at least about X.
The pillars can be selectively removed to form a pattern of holes having a density at least 2X. In some embodiments, plugs can be formed in the pattern of holes in the mask, such as by epitaxial deposition on the substrate, in order to provide a pattern of pillars having a density at least 2X. In other embodiments, the pattern of holes in the mask can be etched into the substrate to provide a pattern of holes in the substrate.

[0030] Reference will now be made to the figures, in which like numerals refer to like parts throughout.

[0031] Figure 1A illustrates a general sequence of process steps according to some embodiments of the invention. In step 1 of Figure 1A, a plurality of pillars are formed on a substrate, such as by etching into a layer or stack of layers formed over the substrate or by forming material over a substrate in a pattern that defines a plurality of pillars. For example, the pillars can be formed by photolithography, by selectively exposing photoresist to light and then developing the photoresist to leave a pattern of pillars formed by the photoresist. As used herein, "forming" a structure includes performing steps to make the structure or providing the structure already premade. In step 3, spacer material is formed on and around the pillars to fill spaces between the pillars while leaving a pattern of openings between the pillars. In step 5, the spacer material is etched to form a pattern of holes completely open to an underlying material, the holes having a density at least as great as the density of the pattern of pillars. In step 7, the pillars are removed to form further holes, thus providing a pattern of holes with a density at least twice as great as the pattern of pillars that were previously formed on the substrate.

[0032] Figures 1B-12B illustrate schematically a detailed sequence of process steps according to some embodiments of the invention. In step 10, a substrate 100 is provided and a first hard mask layer 110 is formed thereover.
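The density bookkeeping of steps 1, 3, 5, and 7 can be illustrated with a short Python sketch. The grid, its size, and the site coordinates below are illustrative assumptions and do not appear in the specification; the sketch only demonstrates the "density X to at least 2X" accounting:

```python
# Illustrative model only: pillar and hole sites on a 6 x 6 checkerboard.
# The specification prescribes no grid; this mirrors the bookkeeping of
# steps 1, 3, 5, and 7 of Figure 1A.

def checkerboard(n, parity):
    """Return feature centers on an n x n site grid with the given parity."""
    return {(i, j) for i in range(n) for j in range(n) if (i + j) % 2 == parity}

n = 6
pillars = checkerboard(n, 0)          # step 1: pattern of pillars, density X
spacer_holes = checkerboard(n, 1)     # steps 3 and 5: spacer-defined holes
final_holes = spacer_holes | pillars  # step 7: removing the pillars adds holes

print(len(pillars), len(spacer_holes), len(final_holes))  # 18 18 36
```

In this toy model the spacer-defined holes match the pillar density X, and removing the pillars doubles the hole count, mirroring the "at least twice as great" result of step 7.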
(Figure 2 illustrates a partially formed integrated circuit 200 after step 12 has been carried out.) The substrate 100 may include one or more of a variety of suitable workpieces for semiconductor processing. For example, the substrate can include a silicon wafer. In one or more embodiments, the first hard mask layer 110 includes amorphous carbon, e.g., transparent carbon, which has been found to have excellent etch selectivity with other materials of the illustrated imaging or masking stack. Methods for forming amorphous carbon are disclosed in A. Helmbold and D. Meissner, Thin Solid Films, 283 (1996), and in U.S. Patent Publication No. 2006/0211260, published September 21, 2006, entitled "PITCH REDUCED PATTERNS RELATIVE TO PHOTOLITHOGRAPHY FEATURES," the entire disclosures of which are hereby incorporated herein by reference. In the illustrated embodiment, a second hard mask layer 112 is also formed over the first hard mask layer 110 to protect the first hard mask layer 110 during etching in later steps and/or to enhance the accuracy of forming patterns by photolithography. In one or more embodiments, the second hard mask layer 112 includes an anti-reflective coating (ARC), such as DARC or BARC/DARC, which can facilitate photolithography by preventing undesired light reflections.

[0033] In step 12, a selectively definable layer 120 is formed on the second hard mask layer 112. The selectively definable layer 120 can be formed using a photoresist in accordance with well-known processes for providing masks in semiconductor fabrication. For example, the photoresist can be any photoresist compatible with 157 nm, 193 nm, 248 nm or 365 nm wavelength systems, 193 nm wavelength immersion systems, extreme ultraviolet systems (including 13.7 nm wavelength systems) or electron beam lithographic systems. In addition, maskless lithography, or maskless photolithography, can be used to define the selectively definable layer 120.
Examples of preferred photoresist materials include argon fluoride (ArF) sensitive photoresist, i.e., photoresist suitable for use with an ArF light source, and krypton fluoride (KrF) sensitive photoresist, i.e., photoresist suitable for use with a KrF light source. ArF photoresists are preferably used with photolithography systems utilizing relatively short wavelength light, e.g., 193 nm. KrF photoresists are preferably used with longer wavelength photolithography systems, such as 248 nm systems. In other embodiments, the selectively definable layer 120 and any subsequent resist layers can be formed of a resist that can be patterned by nano-imprint lithography, e.g., by using a mold or mechanical force to pattern the resist.

Figures 2A and 2B illustrate a partially formed integrated circuit 200 after step 12 has been carried out. As shown in Figures 2A and 2B, the selectively definable layer 120 can include a mask pattern, the pattern including a plurality of pillars 121 having a substantially circular cross-section. The width of the pillars 121 in the selectively definable layer 120 is A. The pillars 121 can be patterned using a photolithographic technique. In one or more embodiments, A can be substantially equal to the minimum feature size formable using the photolithographic technique. In other embodiments, the pillars 121 can be formed with width A larger than the minimum formable feature size formed by photolithography and subsequently trimmed, in order to enhance the accuracy of the patterns formed by photolithography. It will be appreciated that photolithographic techniques typically can more easily and accurately form features having sizes above the size limit of the technique.

[0034] As shown in Figure 2A, the distance between centers of nearest neighboring pillars 121, such as between pillars 121a and 121b, is B. In the illustrated embodiment, B is substantially equal to twice the width A, which has advantages for forming a pattern of holes arranged in rows and columns as described herein.
In embodiments where the width A is greater than one half of the distance B, the pillars 121 of the selectively definable layer 120 are trimmed during the trimming step 14 in order to achieve the dimensions C, D, and E as described hereinbelow. Although the mask pattern shown in Figures 2A and 2B includes pillars 121 with their centers located at the corner points of a square, other patterns are also possible, as will be described more fully hereinbelow.

[0035] Figures 3A and 3B illustrate the partially formed integrated circuit 200 after step 14 of Figure 1B has been carried out. In step 14, the selectively definable layer 120 is trimmed, such as by subjecting the selectively definable layer 120 to an O2/Cl2 or O2/HBr plasma. Figure 3B shows that after the trimming step 14, the pillars 121 of the selectively definable layer 120 have a width C, which is less than the width A. Thus, the trimming step 14 can advantageously provide a feature size that is less than the minimum feature size formable using the lithographic technique used to pattern the selectively definable layer 120. In one or more embodiments, the width C is substantially equal to .

[0036] Figure 3B also shows that after the trimming step 14, the distance between two distant pillars 121 of the selectively definable layer 120, such as between pillars 121a and 121c, is E. In one or more embodiments, the distance E is substantially equal to . The distance between two nearest neighboring pillars 121 of the selectively definable layer 120, such as between pillars 121a and 121b, is D. In one or more embodiments, the distance D is substantially equal to .

[0037] Y is used herein as a multiplier having a dimension of distance to clarify the relationship between various dimensions in the pattern of one or more embodiments.
Although C is substantially equal to in Figures 3A and 3B, Y can be any real number greater than zero, including the minimum feature size formable using known lithographic techniques, and does not necessarily bear any relationship to the width A of the pillars 121 after step 12.

[0038] Selectively definable layers 120 having a pattern of these dimensions can produce a pattern of spacer-defined holes in later steps that is advantageously aligned with the pattern of pillars 121 in the selectively definable layer 120. In particular, the pattern of the selectively definable layer 120 shown in Figure 3A can be described as a set of pillars 121 formed in columns and rows, in which the leftmost pillar 121a is positioned in a first column and a second row, the uppermost pillar 121b is positioned in the second column and the first row, the lowermost pillar 121d is positioned in the second column and a third row, and the rightmost pillar 121c is positioned in the third column and the second row. When the mask pattern is formed using the dimensions described above, the holes formed in later steps can advantageously be positioned in open positions in the same columns and rows, such that the pattern of holes is aligned with the pattern of pillars. Figure 8A, described more fully below, shows a pattern of holes 140 with a hole 140a positioned in the first column and the first row, another hole 140d positioned in the first column and the third row, another hole 140c positioned in the second column and the second row, another hole 140b positioned in the third column and the first row, and another hole 140e positioned in the third column and the third row.

[0039] The pattern of the selectively definable layer 120 is then transferred to the second hard mask layer 112, such as by anisotropically etching the second hard mask layer 112 through the selectively definable layer 120.

[0040] Figures 4A and 4B illustrate the partially formed integrated circuit 200 after step 20 of Figure 1B has been carried out.
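The row-and-column alignment described above for Figures 3A and 8A can be checked with a small sketch. The (column, row) positions are read directly from the text; Python is used here purely for illustration:

```python
# (column, row) positions read from the text for Figure 3A: pillar 121a in the
# first column / second row, 121b second column / first row, 121d second
# column / third row, and 121c third column / second row.
pillars = {(1, 2), (2, 1), (2, 3), (3, 2)}

grid = {(c, r) for c in (1, 2, 3) for r in (1, 2, 3)}
holes = grid - pillars  # holes fall in the open positions of the same rows and columns

# Matches holes 140a-140e of Figure 8A: (1,1), (1,3), (2,2), (3,1), (3,3).
assert holes == {(1, 1), (1, 3), (2, 2), (3, 1), (3, 3)}
```

The hole pattern is thus the set complement of the pillar pattern within the same three columns and three rows, which is what "aligned with the pattern of pillars" amounts to.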
In step 20, pillars 122 are formed in the first hard mask layer 110 by anisotropically etching the first hard mask layer 110 through the selectively definable layer 120 and the second hard mask layer 112. As shown in Figures 4A and 4B, the pillars 122 formed in step 20 can have substantially the same pattern as the pattern in the selectively definable layer 120. The selectively definable layer 120 can be removed during or after the etching step 20. In embodiments including the second hard mask layer 112, the second hard mask layer 112 may be removed in step 22, such as by carrying out a wet strip etch. In other embodiments, the selectively definable layer 120 is removed by the same etch used to define pillars 122 in the first hard mask layer 110. Figures 5A and 5B illustrate the partially formed integrated circuit 200 after removing the selectively definable layer 120.

[0041] In step 30 of Figure 1B, spacer material 130 (Figures 6A, 6B) is deposited on the pillars 122. Figures 6A and 6B illustrate the partially formed integrated circuit 200 while step 30 of Figure 1B is being carried out. The spacer material can include an insulating material, such as an oxide, e.g., silicon oxide, particularly a material that is selectively etchable with respect to the material of the pillars 122 and other exposed surfaces. Examples of other spacer materials include silicon nitride, Al2O3, TiN, etc. In one or more embodiments, depositing step 30 includes uniformly depositing spacer material 130 on the pillars 122 and the substrate 100, such as by blanket depositing the spacer material 130 by chemical vapor deposition.

[0042] Figures 6A and 6B show that as spacer material 130 is deposited on the pillars 122, the spacer material 130 fills a space between neighboring pillars 122 when the spacer material 130 forms a layer having a thickness F.
In one or more embodiments, the thickness F is substantially equal to .

[0043] Preferably, spacer material 130 continues to be deposited beyond filling the space between the nearest neighboring pillars 122, such that the spacer material 130 surrounding the nearest neighboring pillars 122 converges and forms voids with substantially circular cross-sections. Advantageously, due to corners having a relatively higher surface area for interacting with precursors, it has been found that the rate of deposition at the corners formed by the convergence is greater than at other parts between the pillars 122, causing the corners of the open space between the pillars 122 to become rounded.

[0044] Figures 7A and 7B illustrate the partially formed integrated circuit 200 after depositing step 30 has been carried out. As shown in Figures 7A and 7B, sufficient spacer material 130 has been deposited to form holes 140 with a substantially circular cross-section. The holes 140 occur in a pattern that is aligned with the pattern of the pillars 122, as described above, and the density of the holes is greater than the density of the pillars 122 in the illustrated portion of the partially formed integrated circuit.

[0045] In order to achieve a rounded cross-section for the holes 140, it may be necessary to deposit so much spacer material 130 that the width of the holes 140 is smaller than the width C of the pillars. In step 32 of Figure 1B, the spacer material 130 can be trimmed, such as by isotropically etching to uniformly expand the width of the holes 140. Figures 8A and 8B illustrate the partially formed integrated circuit 200 after step 32 of Figure 1B has been carried out. As shown in Figure 8B, after any etching to expand the holes 140, the layer of the spacer material 130 has a thickness G and the holes 140 have been expanded to form holes 141 having a width H.
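The fill behavior of Figures 6A-7B can be read geometrically. The specification's own expression for the thickness F is not legible in this copy, so the sketch below is an inference rather than the patent's formula: for conformal deposition on two facing sidewalls, a layer of thickness F = (D - C)/2 just fills the nearest-neighbor gap, while the larger diagonal gap (E = sqrt(2) * D for pillars at the corner points of a square) remains open and so leaves a hole. The numeric dimensions are arbitrary example values:

```python
import math

# Hedged geometric reading, NOT the specification's own (elided) expression:
# C = trimmed pillar width, D = nearest-neighbor center distance,
# E = sqrt(2) * D = diagonal (distant-pillar) center distance.
C, D = 1.0, 2.0 * math.sqrt(2.0)   # example dimensions in arbitrary units
E = math.sqrt(2.0) * D

F = (D - C) / 2                    # thickness that just fills the nearest gap
hole_width = (E - C) - 2 * F       # opening left between the distant pillars

assert E - C > 2 * F               # diagonal gap is not yet filled: a hole remains
print(round(hole_width, 3))        # 1.172
```

This matches the text's observation that the spacer closes the spaces between nearest neighbors first while voids persist between the more widely separated pillars.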
In one or more embodiments, the width H and the thickness G are both substantially equal to the width C of the pillars 122, advantageously providing a pattern of holes 141 and pillars 122 of substantially the same size. Steps 30 and 32 of Figure 1B can be repeated as desired in order to achieve holes 141 of the desired shapes and sizes. [0046] In step 34 of Figure 1B, spacer material 130 (Figures 9A, 9B) is anisotropically etched to expose the upper surfaces of the pillars 122 and the substrate 100. Figures 9A and 9B illustrate the partially formed integrated circuit 200 after step 34 of Figure 1B has been carried out. The widths of the spacer material 130 between the holes 141 and the pillars 122 remain substantially the same as before step 34. In some embodiments, the order of steps 32 and 34 can be reversed, such that the spacer material 130 is anisotropically etched before being trimmed by, e.g., an isotropic etch. In such embodiments, holes having different widths may be formed. [0047] In step 40 of Figure 1B, the pillars 122 (Figures 9A, 9B) are etched, such as by selectively etching the first hard mask layer 110 relative to the spacer material 130 to remove the pillars 122. Figures 10A and 10B illustrate the partially formed integrated circuit 200 after step 40 of Figure 1B has been carried out. At this stage, a pattern of holes 141 has been achieved that has a density greater than or equal to about twice the density of the features that were formed in the selectively definable layer 120. Moreover, the holes 141 have a smaller feature size than the pillars 121 first formed by photolithography in the selectively definable layer 120, and the holes 141 occur in a pattern that is aligned with the pattern of pillars 121 in the selectively definable layer 120. [0048] In step 50 of Figure 1B, plugs 150 (Figures 11A, 11B) are formed in the holes 141. Figures 11A and 11B illustrate the partially formed integrated circuit 200 after step 50 of Figure 1B has been carried out. 
Plugs 150 can be formed of the same material as the substrate 100. The spacer material 130 is chosen to be selectively etchable relative to the material forming the plugs 150. In one or more embodiments, the plugs 150 are formed of polysilicon and the spacer material 130 is formed of silicon oxide. Depositing step 50 can be carried out in accordance with well-known deposition processes, including but not limited to chemical vapor deposition (CVD), plasma enhanced chemical vapor deposition (PECVD), or spin coating. In some embodiments, plugs 150 (Figures 11A and 11B) can be formed by epitaxial growth. [0049] In step 60, the spacer material 130 (Figures 11A, 11B) is removed, such as by selectively etching the spacer material 130. In processes using spin coating, CVD or PECVD in step 50 to deposit the plugs 150, it may be necessary to first planarize the surface, such as by a chemical mechanical polishing process, or perform a plug material etch back process in order to expose the spacer material 130. [0050] After step 60 has been carried out, a pattern of plugs 150 has been formed on the substrate 100 with a density greater than or equal to about twice the density of the pillars that were formed on the selectively definable layer 120. Moreover, the plugs 150 have a smaller feature size than the pillars 121 first formed on the selectively definable layer 120, and the plugs 150 occur in a pattern that is aligned with the pattern of pillars 121 in the selectively definable layer 120. [0051] While the method described above can provide a pattern of plugs with a density greater than or equal to about twice the density of the features that were formed on the selectively definable layer 120, the method may also be repeated to produce a pattern with a density of features that is greater than or equal to about four times the density of the original pattern. 
The method may then be repeated to achieve a pattern with a density of features that is greater than or equal to about eight times the density of the original pattern, and so on until the desired density is reached. For example, it will be appreciated that the plugs 150 or pillars patterned in the substrate 100, using the layer 130 (Figs. 10A and 10B) as a mask, can be used as the pillars 122 in subsequent repeats of the method. For example, after forming these pillars, steps 30-60 may be repeated. Thus, isolated features having a density 2^n times the original density can be formed, where n is the number of times the method of Figs. 1A and 1B is repeated. [0052] Many variations of the embodiments described herein are possible. For example, while the holes 141 and pillars 122 have the same size in the method described above, it may be desirable in some applications to form holes that are larger or smaller than the pillars. Accordingly, the thickness of the spacer material can be adjusted to achieve the desired result. [0053] Additionally, while the method described above provides pillars and holes with a generally circular cross section, other shapes are also possible. For example, the pillars and holes can have a cross section that is generally in the shape of a square, a rectangle, or an ellipse. [0054] Further, while the method described above provides holes 140 in a pattern that is aligned with the pattern of the pillars 122, it is also possible to place the holes in other locations relative to the pillars by beginning with a pattern of pillars other than the one described above, in which pillars are arranged at the corners of a square. One example of another pattern that can be used is a pattern of three pillars, which can be used to form a hole between the three pillars. [0055] Moreover, the embodiments described above can be used to selectively create patterns with higher density in some regions of the integrated circuit but not in other regions. 
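The density multiplication from repeating the method can be illustrated with a short calculation; the starting density and repeat count below are hypothetical values chosen only for illustration:

```python
def multiplied_density(initial_density: float, repeats: int) -> float:
    """Feature density after repeating the pitch-multiplication
    method `repeats` times, where each repetition at least doubles
    the density of the pattern (2^n scaling)."""
    return initial_density * (2 ** repeats)


# Example: a photolithographically defined pattern of 100 features
# per unit area, with the method repeated 3 times.
print(multiplied_density(100.0, 3))
```

Note that this reflects the lower bound stated in the text ("greater than or equal to about twice" per repetition); actual densities depend on the pattern geometry.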
In regions where a new, higher density pattern is to be formed, features can be spaced apart at a distance sufficiently small that it can be filled by the thickness of the spacer material. In regions where a higher density pattern is not desired, features can be spaced apart at too great a distance to be filled by the spacer material, and/or a protective mask can be used to prevent transfer of a pattern formed by the spacer material to the substrate 100, or to prevent deposition in the openings formed by the spacer material 130. In this way, a high density pattern can be selectively provided in some regions of the integrated circuit but not in others. [0056] In addition, it will be appreciated that the use of an imaging stack including photoresist, an ARC, and amorphous carbon can advantageously be applied to facilitate deposition of the spacer material. Temperatures typically used for chemical vapor deposition of the spacer material may undesirably deform photoresist; thus, amorphous carbon is used to form the pillars upon which the spacer material is deposited. In other embodiments where low temperature deposition processes (e.g., atomic layer deposition) are used to deposit the spacer material, the ARC and amorphous carbon layers may be omitted and the spacer material can be deposited on pillars formed of photoresist. [0057] In accordance with the embodiments described above, a method is provided. Such a method might include, for example, providing a substrate and forming a first set of pillars on the substrate. The method can further include depositing spacer material on the first set of pillars to form a first pattern of holes, wherein at least one of the holes is located between pillars of the first set and wherein, after depositing, spacer material fills a space between a first pillar of the first set and a nearest neighboring pillar of the first set. [0058] In other embodiments, a method is provided. 
The method can include providing a substrate and forming a plurality of pillars on the substrate, the pillars having a density X, and forming a pattern of holes on a level of the pillars, the holes having a density at least X. [0059] In other embodiments, a method is provided. The method can include providing a substrate and forming a set of pillars on the substrate, wherein the pillars have a width of about, and wherein a first pillar is separated from a second pillar by a distance of about, and wherein the first pillar is separated from a third pillar by a distance of about. [0060] The method can further include depositing material on the set of pillars. The method can further include etching the material to form a pattern of holes, wherein the pattern comprises a hole between the first pillar and the third pillar. [0061] In other embodiments, a method is provided. The method can include providing a set of pillars on a substrate, the pillars arranged in two or more rows and two or more columns. The method can further include blanket depositing spacer material on the set of pillars to form a pattern of holes adjacent the pillars. The method can further include isotropically etching the spacer material to enlarge the width of the holes. The method can further include anisotropically etching the spacer material to expose the pillars. [0062] It will be appreciated by those skilled in the art that various other omissions, additions, and modifications may be made to the methods and structures described above without departing from the scope of the invention. All such changes are intended to fall within the scope of the invention, as defined by the appended claims.
Dynamic power supply voltage adjustment in a computing device may involve two stages. In a first stage, a first method for adjusting a power supply voltage may be disabled. While the first method remains disabled, a request to adjust the power supply voltage from an initial value to a target value using a second method may be received. The second method may be initiated in response to the request if a time interval has elapsed since a previous request to adjust the power supply voltage. In a second stage, the first method may be enabled when it has been determined that the power supply voltage has reached the target value.
CLAIMS

What is claimed is:

1. A method for dynamic power supply voltage adjustment in a computing device, comprising: disabling a first method for adjusting a power supply voltage; receiving, while the first method is disabled, a request to adjust the power supply voltage from an initial value to a target value using a second method; initiating, while the first method is disabled, the second method in response to the request and an indication a time interval has elapsed since a previous request to adjust the power supply voltage; determining, while the first method is disabled, whether the power supply voltage has reached the target value; and enabling the first method in response to determining the power supply voltage has reached the target value.

2. The method of claim 1, further comprising starting a timer contemporaneously with initiating the second method, wherein the indication the time interval has elapsed comprises an output of the timer.

3. The method of claim 1, further comprising receiving the indication the time interval has elapsed from a voltage regulator system.

4. The method of claim 1, wherein the second method comprises a voltage regulator stepping the power supply voltage through a plurality of intermediate voltage levels during the time interval.

5. The method of claim 1, further comprising initiating the first method after enabling the first method.

6. The method of claim 1, wherein the first method comprises a fine voltage adjustment method, and the second method comprises a coarse voltage adjustment method.

7. The method of claim 1, wherein the first method comprises Core Power Reduction (CPR), and the second method comprises Dynamic Clock and Voltage Scaling (DCVS).

8. The method of claim 1, wherein initiating the second method comprises a system-on-a-chip (SoC) signaling a power management integrated circuit (PMIC).

9. 
The method of claim 8, wherein determining whether the power supply voltage has reached the target value comprises determining whether the PMIC provides a signal to the SoC indicating the power supply voltage has reached the target value.

10. A system for dynamic power supply voltage adjustment in a computing device, comprising: first voltage adjustment logic configured to be enabled to provide voltage adjustment requests to a voltage regulator system and to be disabled from providing voltage adjustment requests to the voltage regulator system; second voltage adjustment logic configured to receive, while the first voltage adjustment logic is disabled, a request to adjust a power supply voltage from an initial value to a target value, the second voltage adjustment logic further configured to determine whether a time interval has elapsed since a previous request to adjust the power supply voltage, and if the time interval has elapsed, to provide the request to the voltage regulator system; and core logic configured to determine, while the first voltage adjustment logic is disabled, whether the power supply voltage has reached the target value, and to enable the first voltage adjustment logic if the power supply voltage has reached the target value, the core logic further configured to disable the first voltage adjustment logic before providing the request to the second voltage adjustment logic.

11. The system of claim 10, further comprising a timer configured to time the time interval, the time interval starting contemporaneously with the request to the voltage regulator system.

12. The system of claim 10, wherein the second voltage adjustment logic is configured to determine whether the time interval has elapsed based on a signal from the voltage regulator system.

13. 
The system of claim 10, wherein the request comprises an indication to the voltage regulator system to step the power supply voltage through a plurality of intermediate voltage levels during the time interval.

14. The system of claim 10, wherein the first voltage adjustment logic is further configured to provide a voltage adjustment request to the voltage regulator system while the first voltage adjustment logic is enabled.

15. The system of claim 10, wherein the first voltage adjustment logic is configured to initiate fine voltage adjustments, and the second voltage adjustment logic is configured to initiate coarse voltage adjustments.

16. The system of claim 10, wherein the first voltage adjustment logic comprises Core Power Reduction (CPR) logic, and the second voltage adjustment logic comprises Dynamic Clock and Voltage Scaling (DCVS) logic.

17. The system of claim 10, wherein the core logic, the first voltage adjustment logic, and the second voltage adjustment logic are included in a system-on-a-chip (SoC), and the voltage regulator system is included in a power management integrated circuit (PMIC).

18. The system of claim 17, wherein the core logic is configured to determine whether the power supply voltage has reached the target value in response to a signal provided by the PMIC.

19. 
A system for dynamic power supply voltage adjustment in a computing device, comprising: means for activating first voltage adjustment logic when the first voltage adjustment logic is enabled and for refraining from activating the first voltage adjustment logic when the first voltage adjustment logic is disabled; means for receiving, while the first voltage adjustment logic is disabled, a request to adjust a power supply voltage from an initial value to a target value, for determining whether a time interval has elapsed since a previous request to adjust the power supply voltage, and if the time interval has elapsed, for activating second voltage adjustment logic to provide the request to a voltage regulator system; and means for determining whether the power supply voltage has reached the target value and for enabling the first voltage adjustment logic if the power supply voltage has reached the target value.

20. The system of claim 19, further comprising means for starting timing the time interval contemporaneously with activating the second voltage adjustment logic.

21. The system of claim 19, wherein the means for receiving the request and determining whether the time interval has elapsed comprises means for receiving a signal from the voltage regulator system responsive to the previous request.

22. The system of claim 19, wherein activating the second voltage adjustment logic comprises providing an indication to a voltage regulator to step the power supply voltage through a plurality of intermediate voltage levels during the time interval.

23. The system of claim 19, wherein the means for determining and enabling is further for activating the first voltage adjustment logic after enabling the first voltage adjustment logic.

24. The system of claim 19, wherein the first voltage adjustment logic comprises Core Power Reduction (CPR) logic, and the second voltage adjustment logic comprises Dynamic Clock and Voltage Scaling (DCVS) logic.

25. 
A computer-readable medium for dynamic power supply voltage adjustment in a computing device, the computer-readable medium comprising a non-transitory computer-readable medium having stored thereon, in computer-executable form, instructions that, when executed by a processing system of the computing device, configure the processing system to: disable a first method for adjusting a power supply voltage; receive, while the first method is disabled, a request to adjust the power supply voltage from an initial value to a target value using a second method; initiate, while the first method is disabled, the second method in response to the request and an indication a time interval has elapsed since a previous request to adjust the power supply voltage; determine, while the first method is disabled, whether the power supply voltage has reached the target value; and enable the first method in response to determining the power supply voltage has reached the target value.

26. The computer-readable medium of claim 25, further comprising instructions for starting a timer contemporaneously with initiating the second method, wherein the indication the time interval has elapsed comprises an output of the timer.

27. The computer-readable medium of claim 25, further comprising instructions for receiving the indication the time interval has elapsed from a voltage regulator system.

28. The computer-readable medium of claim 25, wherein the second method comprises a voltage regulator stepping the power supply voltage through a plurality of intermediate voltage levels during the time interval.

29. The computer-readable medium of claim 25, further comprising instructions for initiating the first method after enabling the first method.

30. The computer-readable medium of claim 25, wherein the first method comprises Core Power Reduction (CPR), and the second method comprises Dynamic Clock and Voltage Scaling (DCVS).
TWO-STAGE DYNAMIC POWER SUPPLY VOLTAGE ADJUSTMENT

DESCRIPTION OF THE RELATED ART

[0001] Portable computing devices (“PCD”s) are becoming necessities for people on personal and professional levels. These devices may include mobile phones, tablet computers, palmtop computers, portable digital assistants (“PDA”s), portable game consoles, and other portable electronic devices. PCDs commonly contain integrated circuits or systems-on-a-chip (“SoC”s) that include numerous components or subsystems designed to work together to deliver functionality to a user. For example, an SoC may contain any number of processing engines, such as central processing units (“CPU”s), graphical processing units (“GPU”s), digital signal processors (“DSP”s), neural processing units (“NPU”s), wireless transceiver units (also referred to as modems), etc.

[0002] As a PCD is powered by a battery, power management is a significant consideration. Effective PCD power management helps provide long battery time, among other benefits. A number of techniques are known to dynamically adjust a power supply voltage to attempt to maximize battery time.

[0003] Dynamic clock and voltage scaling (“DCVS”) is a technique or method by which the frequency and voltage at which a processor is operated are adjusted dynamically, i.e., in real time in response to changes in operating conditions, to deliver a desired balance or tradeoff between power consumption and performance level. When lower power consumption is of higher priority than higher performance, a power controller may decrease the clock frequency and voltage, and when higher performance is of higher priority than lower power consumption, the power controller may increase the clock frequency and voltage.

[0004] Core power reduction or “CPR” (also known as adaptive voltage scaling) is another technique or method for dynamically adjusting a power supply voltage. 
CPR relates to exploiting variations in semiconductor fabrication parameters that may enable a particular chip to operate properly at a lower voltage than a value specified by the manufacturer. DCVS and CPR may be used in conjunction with each other to provide relatively coarse and relatively fine voltage adjustments, respectively.

[0005] A voltage regulator may respond to a command from a power controller to change a power supply voltage by changing the voltage at its output to a new value. The output of the voltage regulator is coupled to a power supply rail that supplies electronic components of the chip. However, the supply rail voltage does not reach the new value instantaneously. Rather, the amount of current being drawn by the load causes the supply rail voltage to change exponentially to the new value. Some power supply voltage adjustment methods, including CPR, cannot provide accurate results unless the supply rail voltage is stable or settled at the time the method is performed. A common solution to this potential problem is for the power controller to only issue a command to the voltage regulator to adjust the supply voltage if a time interval, sufficient to ensure the supply rail voltage has settled since a previous adjustment, has elapsed since the power controller last issued such a command. This time interval or delay is based on worst-case load, process corners, or other conditions. 
Basing a control method on a worst-case estimate may be inefficient or otherwise disadvantageous.

SUMMARY OF THE DISCLOSURE

[0006] Systems, methods, computer-readable media, and other examples are disclosed for dynamic power supply voltage adjustment in a computing device.

[0007] An exemplary method for dynamic power supply voltage adjustment in a computing device may include disabling a first method for adjusting a power supply voltage and, while the first method is disabled, receiving a request to adjust the power supply voltage from an initial value to a target value using a second method. The exemplary method may further include initiating the second method in response to the request if a time interval has elapsed since a previous request to adjust the power supply voltage using the second method. The exemplary method may still further include determining whether the power supply voltage has reached the target value, and enabling the first method if the power supply voltage has reached the target value.

[0008] An exemplary system for dynamic power supply voltage adjustment in a computing device may include core logic, first voltage adjustment logic, and second voltage adjustment logic. The first voltage adjustment logic may be configured to be enabled and disabled. The second voltage adjustment logic may be configured to receive, while the first voltage adjustment logic is disabled, a request to adjust a power supply voltage from an initial value to a target value. The second voltage adjustment logic may be further configured to determine whether a time interval has elapsed since a previous request to adjust the power supply voltage, and if the time interval has elapsed, to provide the request to the voltage regulator system. 
The core logic may be configured to determine, while the first voltage adjustment logic is disabled, whether the power supply voltage has reached the target value, and if the power supply voltage has reached the target value, to enable the first voltage adjustment logic. The core logic may be further configured to disable the first voltage adjustment logic before providing the request to the voltage regulator system.[0009] Another exemplary system for dynamic power supply voltage adjustment in a computing device may include means for activating first voltage adjustment logic when the first voltage adjustment logic is enabled and for refraining from activating the first voltage adjustment logic when the first voltage adjustment logic is disabled. The exemplary system may further include means for receiving, while the first voltage adjustment logic is disabled, a request to adjust a power supply voltage from an initial value to a target value, for determining whether a time interval has elapsed since a previous request to adjust the power supply voltage, and if the time interval has elapsed, for activating second voltage adjustment logic to provide the request to a voltage regulator system. The exemplary system may still further include means for determining whether the power supply voltage has reached the target value and for enabling the first voltage adjustment logic if the power supply voltage has reached the target value.[0010] An exemplary computer-readable medium for dynamic power supply voltage adjustment in a computing device may comprise a non-transitory computer-readable medium having instructions stored thereon in computer-executable form. The instructions, when executed by a processing system of the computing device, may configure the processing system to disable a first method for adjusting a power supply voltage. 
The instructions may further configure the processing system to receive, while the first method is disabled, a request to adjust the power supply voltage from an initial value to a target value using a second method. The instructions may still further configure the processing system to initiate the second method for adjusting the power supply voltage in response to the request if a time interval has elapsed since a previous request to adjust the power supply voltage. The instructions may yet further configure the processing system to determine, while the first method is disabled, whether the power supply voltage has reached the target value, and if the power supply voltage has reached the target value, to enable the first method for adjusting the power supply voltage.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] In the Figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as “102A” or “102B,” the letter character designations may differentiate two like parts or elements present in the same Figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all Figures.

[0012] FIG. 1 is a block diagram illustrating a dynamic power supply voltage adjustment system in a computing device, in accordance with exemplary embodiments.

[0013] FIG. 2 is a block diagram illustrating another dynamic power supply voltage adjustment system in a computing device, in accordance with exemplary embodiments.

[0014] FIG. 3 is a plot illustrating an example of a power supply voltage exponentially decreasing from a starting or initial value to an ending or target value, in accordance with exemplary embodiments.

[0015] FIG. 
4 is an activity diagram illustrating a method for dynamic power supply voltage adjustment in a computing device, in accordance with exemplary embodiments.

[0016] FIG. 5 is a flow diagram illustrating a method for dynamic power supply voltage adjustment in a computing device, in accordance with exemplary embodiments.

[0017] FIG. 6 is a block diagram illustrating a computing device, in accordance with exemplary embodiments.

DETAILED DESCRIPTION

[0018] The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” The word “illustrative” may be used herein synonymously with “exemplary.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. The term “coupled” may be used herein to mean connected via zero or more intervening elements, in contrast with the term “directly connected,” which may be used herein to mean connected via no intervening elements.

[0019] A reference to a task, thread, etc., executing on a processor means that the software (instructions, data, etc.) represented by the task is executed by a processor retrieving and executing portions of the software from a memory, storing results in a memory, etc., in a manner in accordance with conventional computing principles well understood by one of ordinary skill in the art. In some of the exemplary embodiments described herein, an explicit description of such a memory may be omitted for purposes of clarity.

[0020] As illustrated in FIG. 1, in an illustrative or exemplary embodiment, a computing device 100 may include a processor system 102 and a voltage regulator system 104. The computing device 100 may be of any kind in which power management is a consideration, such as, for example, a portable computing device. The processor system 102 may include core logic 106, first voltage adjustment logic 108, and second voltage adjustment logic 110. 
The voltage regulator system 104 may supply power (i.e., a voltage at a current determined by a load) to one or more components of the device 100. The components supplied with power by the voltage regulator system 104 may include the core logic 106. Although not shown for purposes of clarity, the computing device 100 may include other such core logic, which may be supplied by the voltage regulator system 104 or a different voltage regulator system.[0021] The first voltage adjustment logic 108 and second voltage adjustment logic 110 each may comprise a different type of voltage adjustment logic, i.e., based on a different algorithm or method than each other. The first voltage adjustment logic 108 may be configured to provide voltage adjustments of a first step size, and the second voltage adjustment logic 110 may be configured to provide voltage adjustments of a second step size (i.e., different than the first step size). For example, the first voltage adjustment logic 108 may be configured to provide coarser voltage adjustments than the second voltage adjustment logic 110, and accordingly, the second voltage adjustment logic 110 may be configured to provide finer voltage adjustments than the first voltage adjustment logic 108. As used in this disclosure, the terms “coarse” and “fine” are defined only in relation to each other: coarse voltage adjustments comprise larger steps than fine voltage adjustments, and fine voltage adjustments comprise smaller steps than coarse voltage adjustments.[0022] The first voltage adjustment logic 108 may be configured to be enabled and disabled by the core logic 106. When enabled, the first voltage adjustment logic 108 may provide voltage adjustment requests to the voltage regulator system 104. 
When disabled, the first voltage adjustment logic 108 may refrain from (i.e., be constrained against) providing voltage adjustment requests to the voltage regulator system 104.[0023] The second voltage adjustment logic 110 may be configured to receive, while the first voltage adjustment logic 108 is disabled, a request from the core logic 106 to adjust a power supply voltage from a starting or initial value to an ending or target value. The second voltage adjustment logic 110 may further be configured to determine whether a time interval has elapsed since a previous request from the core logic 106 to adjust the power supply voltage. If the time interval has elapsed, the second voltage adjustment logic 110 may provide the request to the voltage regulator system 104. The second voltage adjustment logic 110 may provide the request to the voltage regulator system 104 only if the second voltage adjustment logic 110 determines that the time interval has elapsed. Alternatively, the core logic 106 may determine whether the time interval has elapsed based on an indication from another element, such as the second voltage adjustment logic 110 or the voltage regulator system 104. In such an embodiment, the core logic 106 may provide the request to the second voltage adjustment logic 110 only if the core logic 106 receives the indication that the time interval has elapsed, and the second voltage adjustment logic 110 may, in turn, provide a similar request to the voltage regulator system 104 without further regard to the time interval.[0024] The core logic 106 may be configured to be signaled or otherwise to determine, while the first voltage adjustment logic remains disabled, whether the power supply voltage has reached the target value. As described below, such a power supply output voltage may not change instantaneously from the initial value to the target value but rather may reach the target value in an exponential manner. 
If the power supply voltage has reached the target value, the core logic 106 may enable the first voltage adjustment logic 108. However, before providing the above-referenced request to the second voltage adjustment logic 110, the core logic 106 may disable the first voltage adjustment logic 108.[0025] As illustrated in FIG. 2, in another exemplary embodiment, a computing device 200 may include a system-on-a-chip (“SoC”) 202 and a power management integrated circuit (“PMIC”) 204. The SoC 202 may include a processor 206, power control logic 208, and CPR control logic 210. The processor 206 may provide or embody the core logic 212 by being configured by software in execution. The core logic 212, power control logic 208, CPR control logic 210, and the PMIC 204 may be examples of the core logic 106, first voltage adjustment logic 108, second voltage adjustment logic 110, and voltage regulator system 104, respectively, described above with regard to FIG. 1. Although not shown for purposes of clarity, the PMIC 204 may power one or more voltage rails that supply one or more SoC subsystems, such as a subsystem that includes the core logic 212. Such a subsystem may include one or more processors or processor cores. Although not shown for purposes of clarity, the processor 206 or other processor or processor core may include other such core logic.[0026] The power control logic 208 may include DCVS logic 214. The core logic 212 may in a conventional manner determine a target supply voltage at which it may operate in order to achieve a power-versus-performance balance, as understood by one of ordinary skill in the art. The core logic 212 may provide a request to adjust its supply voltage to the power control logic 208. The power control logic 208 may, in response, provide a request to adjust the supply voltage to the PMIC 204. 
In providing the request, the power control logic 208 may take into account not only the request received from the core logic 212 but also similar requests received from other core logic (not shown) that may be supplied by the same supply voltage rail from the PMIC 204. In providing the request, the power control logic 208 may use DCVS logic 214, which may operate based on DCVS principles or algorithms. Although not directly relevant to the present disclosure, in accordance with DCVS principles the power control logic 208 may provide a request to adjust a frequency of a clock signal provided to the core logic 212 in conjunction with the request to adjust the supply voltage. As such principles, algorithms and other aspects of DCVS logic 214 are well known, they are not described herein.[0027] The CPR control logic 210 may similarly be conventional or well-known and may comprise, for example, CPR logic 216 and CPR voltage adjustment logic 218. As understood by one of ordinary skill in the art, the CPR logic 216 may use information received from sensors (not shown) distributed on the SoC 202 as closed-loop feedback to determine whether a supply voltage can be reduced (and thereby save power) without adversely affecting chip-level operation of the SoC 202. For example, the sensors may include a delay chain (not shown) having the same operating voltage as the surrounding chip logic. Using a closed-loop (i.e., feedback-based) method, the CPR logic 216 may determine a lowest voltage at which the delay chain operates properly at a desired clock frequency (i.e., the clock frequency at which the surrounding chip logic is then operating). As understood by one of ordinary skill in the art, in some examples CPR produces fine voltage adjustments, in contrast with coarse voltage adjustments produced by DCVS. Nevertheless, in other examples CPR may produce coarser voltage adjustments than DCVS.
If the CPR logic 216 determines that a supply voltage can be reduced without adversely affecting operation of the SoC 202, the CPR voltage adjustment logic 218 may provide a request to the PMIC 204 to reduce the supply voltage. For the CPR logic 216 to produce accurate results, i.e., to perform the closed-loop CPR algorithm or method properly, the supply voltage on which the sensor-instrumented SoC circuitry operates must be stable. Although not separately shown in FIG. 2 for purposes of clarity, the CPR control logic 210 includes enablement and disablement circuitry or logic gating that enables and disables the CPR logic 216. As described below, the CPR logic 216 is disabled until it is determined that the supply voltage is stable.[0028] The power control logic 208 may also include a timer 220. As described below, the power control logic 208 may start the timer 220 when it provides a request to adjust the power supply voltage to the PMIC 204 based on a determination by the DCVS logic 214. The power control logic 208 refrains from initiating a subsequent such DCVS voltage adjustment request until the timer expires or there is otherwise an indication that a time interval has elapsed since the previous such DCVS voltage adjustment request. Although in the embodiment illustrated in FIG. 2 the timer 220 provides the indication that the time interval has elapsed, in other embodiments an indication that the time interval has elapsed may be provided in another manner, such as, for example, in the form of a signal provided by the PMIC 204. As described in further detail below, a subsequent DCVS voltage adjustment request may be provided to the PMIC 204 even though the corresponding voltage rail supplied by the PMIC 204 has not reached (i.e., become stable at) a target value indicated by a previous request.
In contrast, the CPR control logic 210 refrains from, i.e., is disabled from, issuing a request to adjust the power supply voltage to the PMIC 204 based on a determination by the CPR logic 216 until the voltage rail has become stable and the CPR logic 216 accordingly has been reenabled. As described below with regard to FIG. 3, the voltage rail generally does not reach a stable value until substantially after the time interval. Stated conversely, the time interval is generally substantially shorter than the time required for the voltage rail to become stable.[0029] In FIG. 3, a plot 300 shows a power supply (rail) voltage 302 changing or transitioning from a starting or initial value to an ending or target value in response to voltage adjustments 304. The supply voltage 302 changes in an exponential manner due to the load (impedance). The slew rate (i.e., voltage change per unit of time) of the supply voltage 302 varies depending on the amount of current being drawn by the load. The smaller the load, the lower the slew rate, and thus the greater the amount of time until the supply voltage 302 reaches the target value. Although the plot 300 illustrates an example of an exponentially decaying or negatively slewing supply voltage 302 responding to adjustments 304 that decrease the voltage regulator output, in other examples (not shown) a supply voltage could similarly slew in an increasing or positive direction in response to adjustments that increase the voltage regulator output. Indeed, in some examples a supply voltage could increase during some time intervals and decrease during others, in response to various requests to increase and decrease the supply voltage. 
The present disclosure uses a decreasing supply voltage as an example because the slew rate is generally much lower for voltage decreases than for voltage increases due to the high impedance of the supply rail (discharge path), and the longer time for the supply voltage to reach the target value is potentially more problematic.[0030] The voltage adjustments 304 may comprise a series of steps or successive values at which the voltage regulator sets its output and thus attempts to set the supply rail. Although it may be possible in some embodiments to slew a supply rail voltage from an initial value to a target value by only one voltage regulator adjustment (i.e., a single step) directly to the target value, in the exemplary embodiment described herein the PMIC 204 (FIG. 2) breaks the transition into multiple steps to help minimize ringing on the supply rail. It may be noted in FIG. 3 that the PMIC steps or voltage adjustments 304 define, in effect, a PMIC slew rate, which is generally faster than the supply rail slew rate in the case of a voltage decrease; the supply rail slew rate may match or nearly match the PMIC slew rate in the case of a voltage increase (not shown). Note in the illustrated example that at a time 306, after the last step or voltage adjustment 304 (i.e., when the PMIC 204 adjusts its output voltage to the target value), the supply voltage 302 as measured on the supply rail still has not reached the target value. Rather, in the illustrated example the supply voltage 302 does not reach the target value until a time 308. The PMIC 204 monitors the voltage rail and may issue a signal, such as an interrupt, to indicate it has determined that the supply voltage 302 is no longer changing, i.e., has settled. 
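The exponential approach to the target value described above behaves like a first-order decay, V(t) = V_target + (V_init - V_target) * exp(-t/tau). The sketch below solves that model for the time to come within a tolerance of the target; the function name and the numbers are illustrative assumptions, and the time constant tau in a real system depends on the rail impedance and the load current being drawn.

```python
import math

def settle_time(v_init, v_target, tau, tol):
    """Time for a first-order exponential transition to come within
    `tol` volts of the target, i.e. solve |V(t) - V_target| = tol.
    Illustrative model only; tau varies with the load."""
    return tau * math.log(abs(v_init - v_target) / tol)

# Example: a 100 mV decrease with tau = 50 us, settling to within 1 mV:
t_heavy = settle_time(v_init=0.9, v_target=0.8, tau=50e-6, tol=1e-3)
# A lighter load (larger tau) settles proportionally later:
t_light = settle_time(v_init=0.9, v_target=0.8, tau=200e-6, tol=1e-3)
```

Consistent with the text, the smaller the load (the larger tau), the longer the rail takes to reach the target value.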
The supply voltage 302 may remain at the target value after time 308 until it may again be adjusted in the manner described above.[0031] A conventional approach may be to refrain from initiating any voltage adjustments until the supply voltage has settled. In such a conventional approach, the time at which the supply voltage has settled may be estimated based on a worst-case load. This approach may be problematic, because overestimating the time may waste power if CPR could have begun sooner, and underestimating the time may result in inaccurate CPR (e.g., adjusting the voltage too low, possibly causing functional failure). [0032] In the exemplary embodiment (FIG. 2), the power control logic 208 may refrain from issuing a DCVS voltage adjustment request to the PMIC 204 until after a time interval has elapsed since it previously issued such a voltage adjustment request. (If the power control logic 208 has not previously issued any voltage adjustment request since the SoC 202 was reset (i.e., when a system reset, boot, etc., occurred), the time interval may be considered elapsed.) The time interval may begin to be timed when the power control logic 208 issues a DCVS voltage adjustment request to the PMIC 204. The power control logic 208 may, for example, reset the above-described timer 220 contemporaneously with issuing a DCVS voltage adjustment request, and the timer 220 may signal when the time interval has elapsed. The timer 220 may, for example, run continuously, be resettable by the power control logic 208 or a system reset, and continue counting after being reset. The time interval may be based on how much time the PMIC 204 may take to step its output from an initial value to a target value. (The PMIC slew rate, i.e., voltage step per unit time, may be fixed or predetermined.) Alternatively, instead of using the timer 220, when the PMIC has stepped its output to the target value the PMIC 204 may provide the indication that the time interval has elapsed.
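Because the PMIC slew rate may be fixed or predetermined, the time interval can be computed up front from the size of the transition rather than measured. A small sketch under assumed numbers (the step size and per-step period here are hypothetical, part-specific values):

```python
import math

def stepper_time(v_init, v_target, step_v, step_period):
    """Time for a regulator to walk its output from v_init to v_target
    in fixed steps of step_v volts, one step every step_period seconds.
    Illustrative only; actual step sizes and periods are part-specific."""
    steps = math.ceil(abs(v_target - v_init) / step_v)
    return steps * step_period

# Example: a 100 mV transition taken in 25 mV steps, 2 us per step:
t = stepper_time(v_init=0.9, v_target=0.8, step_v=0.025, step_period=2e-6)
```

This interval covers only the regulator's own stepping; as FIG. 3 shows, the rail's Final Settling Time extends well beyond it.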
Referring again to FIG. 3, note that this time interval (which may also be referred to as a “Stepper Time”) may be substantially less than the amount of time it would take for the supply voltage 302 to become stable or settled at the target value at time 308. The difference between the time 308 at which the supply voltage 302 has settled and the time 306 at which the time interval has elapsed may be referred to as a “Final Settling Time.”[0033] Although not shown in the plot 300, in other examples the power control logic 208 may issue a subsequent DCVS voltage adjustment request to the PMIC 204 at any time after the time interval (Stepper Time) has elapsed, such as after the time 306. In contrast, the CPR control logic 210 may remain disabled (and the CPR logic 216 inactive) until after time 308 when the supply voltage 302 has settled. Accordingly, the CPR control logic 210 may refrain from providing CPR voltage adjustment requests to the PMIC 204 until after time 308 when the supply voltage 302 has settled.[0034] In FIG. 4, a sequence diagram or activity diagram 400 illustrates an exemplary sequence of communications or indications that may occur among the elements described above with regard to FIG. 2. Although not shown in FIG. 2 for purposes of clarity, such communications or indications may be conveyed by the use of buses or other interconnections through which messages or other signals may be communicated among the core logic 212, power control logic 208, CPR control logic 210, and PMIC 204. A PMIC arbitrator 402 (FIG. 4) may arbitrate messages between the PMIC 204 and various other elements.[0035] The core logic 212 may provide a CPR disable indication 404 to the CPR control logic 210, indicating that the CPR control logic 210 is to disable its CPR logic 216 (FIG. 2). 
In response to the indication 404, the CPR logic 216 refrains from (i.e., is constrained against) being active or in a state in which it is performing the above-described closed-loop CPR voltage reduction method. When the CPR logic 216 is enabled, it may become active or enter a state in which it performs the CPR voltage reduction method. The CPR logic 216 may be activated in response to one or more conditions in addition to being enabled (e.g., it may perform CPR at certain times) or, alternatively, may be activated in response to being enabled, without regard to any other conditions.[0036] After providing the disable indication 404 to the CPR control logic 210, the core logic 212 may provide a voltage change request indication 406 to the power control logic 208. The voltage change request indication 406 includes an indication of a target (voltage) value. For purposes of clarity in describing an exemplary sequence, this voltage change request indication 406 may be referred to as a “first” voltage change request indication, and this target voltage may be referred to as a “first” target value. In response, the power control logic 208 may provide a similar voltage change request indication 408 to the PMIC 204 (via the PMIC arbitrator 402). In response to the voltage change request indication 408, the PMIC 204 may step its output voltage toward the first target value. As described above with regard to FIG. 3, the power supply rail may begin to slew toward the first target value in response to the PMIC output.[0037] Contemporaneously with providing the voltage change request indication 408 to the PMIC 204, the power control logic 208 may start the above-described timer 220 (FIG. 2). The timer 220 may output an indication 410 when the above-described time interval or Stepper Time has elapsed. In response to the indication 410, the power control logic 208 may provide an indication 412 to the core logic 212, indicating that the time interval has elapsed.
As described above, the time interval is the amount of time it takes for the PMIC 204 to step its output to the target value.[0038] Between providing the voltage change request indication 406 and receiving the indication 412 that the time interval has elapsed, the core logic 212 refrains from providing a subsequent (e.g., “second”) voltage change request indication to the power control logic 208. However, at any time after receiving the indication 412 the core logic 212 may provide a subsequent or second voltage change request indication 414 to the power control logic 208. Such a second voltage change request indication 414 may include an indication of a new or second target value. In response, the power control logic 208 may provide a similar voltage change request indication 416 to the PMIC 204 (via the PMIC arbitrator 402). In response to the voltage change request indication 416, the PMIC 204 may step its output voltage toward the new or second target value. As described above with regard to FIG. 3, the power supply rail may begin to slew toward the new or second target value in response to the PMIC output. Note that the power supply rail voltage may not yet have reached the previously requested or first target value at the time of this second voltage change request indication 416.[0039] After providing the second voltage change request indication 416 to the PMIC 204, the power control logic 208 may again start the above-described timer 220 (FIG. 2). The timer 220 may again output an indication 418 when the above-described time interval or Stepper Time has elapsed. 
In response to the indication 418, the power control logic 208 may provide an indication 420 to the core logic 212, indicating that the time interval has elapsed.[0040] Between providing the second voltage change request indication 414 and receiving the indication 420 that the time interval has elapsed, the core logic 212 refrains from providing a subsequent (e.g., “third”) voltage change request indication to the power control logic 208. Although the core logic 212 may provide such a third voltage change request after receiving the indication 420, this does not occur in the example illustrated in FIG. 4. Rather, before any such subsequent voltage change request, the PMIC 204 provides an indication 422 that the supply voltage has settled, i.e., has become stable at the target value. After stepping its output to the target value, the PMIC 204 may monitor the supply rail by continuing to measure the supply rail voltage. When the PMIC 204 determines that the supply rail voltage has reached the target value, the PMIC 204 may issue the indication 422, which may be in the form of an interrupt, for example. The interrupt may be communicated from PMIC 204 to the PMIC arbitrator 402 on the SoC 202 (FIG. 2). The PMIC arbitrator 402 may provide to the power control logic 208 a similar indication 424 that the supply voltage has settled at the target value. In response to the indication 424, the power control logic 208 may, in turn, provide to the core logic 212 a similar indication 426 that the supply voltage has settled at the target value. [0041] Although in the example illustrated in FIG. 4 the core logic 212 provides a second voltage change request indication 414 before the core logic 212 receives the indication 424 that the supply voltage has settled, in other exemplary sequences (not shown) the core logic 212 may receive such an indication that the supply voltage has settled before providing a second voltage change request indication.
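The end-to-end coordination in the sequence described so far (disable CPR, issue the DCVS request, wait out the interval, wait for the settled indication, then re-enable CPR) can be sketched as a single control flow. This is an illustrative outline only; `ctrl` and its methods are hypothetical stand-ins for the signaling shown in FIG. 4.

```python
def adjust_voltage(ctrl, target):
    """Outline of the coarse/fine coordination: disable the fine
    (CPR-like) method, issue the coarse (DCVS-like) request once the
    inter-request interval allows it, then re-enable the fine method
    only after the rail has settled. All names are hypothetical."""
    ctrl.disable_fine_method()             # e.g., disable the CPR logic
    if ctrl.interval_elapsed():            # Stepper Time since last request
        ctrl.issue_coarse_request(target)  # e.g., DCVS request to the PMIC
    while not ctrl.voltage_settled():      # wait for the settled indication
        ctrl.wait()
    ctrl.enable_fine_method()              # CPR may now run accurately
```

Note the asymmetry the disclosure emphasizes: a further coarse request needs only the (short) interval to elapse, while the fine method stays disabled until the rail has actually settled.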
The core logic 212 may provide any number of successive voltage change requests, so long as an amount of time greater than or equal to the time interval has elapsed between the previous voltage change request and the subsequent (next) voltage change request.[0042] In response to the indication 426 that the supply voltage has settled at the target value, the core logic 212 may provide a CPR enable indication 428 to the CPR control logic 210, indicating that the CPR control logic 210 is to enable its CPR logic 216 (FIG. 2). The indication 428 enables the CPR logic 216 to become active or in a state in which it is performing the above-described closed-loop CPR voltage reduction method. As a result of performing the CPR method, the CPR control logic 210 may provide a voltage change request indication 430 to the PMIC 204 (via the PMIC arbitrator 402).[0043] Although in the exemplary embodiments described herein the two voltage adjustment methods are DCVS and CPR, in other embodiments the voltage adjustment methods may be of any other types. In some embodiments, for example, the first voltage adjustment method may provide coarse voltage adjustments, while the second voltage adjustment method may provide fine voltage adjustments.[0044] As illustrated in FIG. 5, a method 500 for dynamic power supply voltage adjustment in a computing device may include the following. It should be understood that the method 500 represents an example or embodiment, and in other embodiments some of the steps or actions described below, or similar steps or actions, may occur in a different order than in the exemplary method 500, or may be omitted. For purposes of example, the method 500 may be described in relation to one or both of the computing devices 100 (FIG. 1) or 200 (FIG. 2). Nevertheless, the method 500 or a related method may be applied to other computing devices, systems, etc.[0045] As indicated by block 502, a first method for adjusting a power supply voltage may be disabled.
The first method may be, for example, CPR. As indicated by block 504, a request to adjust the power supply voltage from an initial value to a target value using a second method may be received. The second method may be, for example, DCVS. As indicated by block 506, the second method may be initiated in response to the request if a predetermined time interval has elapsed since a previous request to adjust the power supply voltage using the second method. As indicated by block 508, it may be determined whether the power supply voltage has reached the target value. As indicated by block 510, the first method may then be enabled if it is determined that the power supply voltage has reached the target value. As indicated by block 512, the first method, once enabled, may be initiated (e.g., logic embodying the first method may be activated so as to perform the first method).[0046] As illustrated in FIG. 6, exemplary embodiments of systems and methods for dynamic power supply voltage adjustment in a computing device may be provided in a portable computing device (“PCD”) 600. The PCD 600 may be an example of the computing device 100 (FIG. 1) or 200 (FIG. 2).[0047] The PCD 600 may include an SoC 602, which may be an example of the above-described SoC 202 (FIG. 2). The SoC 602 may include a CPU 604, a GPU 606, a DSP 607, an analog signal processor 608, or other processors. The CPU 604 may include multiple cores, such as a first core 604A, a second core 604B, etc., through an Nth core 604N. In some examples of the SoC 602, the CPU 604 may be referred to as an application processor. The CPU 604, GPU 606, DSP 607, or other processor may be an example of the above-described processor 206 (FIG. 2) and may control, among other things, various aspects of the methods described above with regard to FIGs. 4-5. For example, the above-described core logic 106 (FIG. 1) or 212 (FIG. 2) may comprise a process or thread executing on the CPU 604.
Also, for example, a DCVS method may be embodied in a portion of an operating system kernel executing on the CPU 604.[0048] A display controller 610 and a touch-screen controller 612 may be coupled to the CPU 604. A touchscreen display 614 external to the SoC 602 may be coupled to the display controller 610 and the touch-screen controller 612. The PCD 600 may further include a video decoder 616 coupled to the CPU 604. A video amplifier 618 may be coupled to the video decoder 616 and the touchscreen display 614. A video port 620 may be coupled to the video amplifier 618. A universal serial bus (“USB”) controller 622 may also be coupled to CPU 604, and a USB port 624 may be coupled to the USB controller 622. A subscriber identity module (“SIM”) card 626 may also be coupled to the CPU 604.[0049] One or more memories may be coupled to the CPU 604. The one or more memories may include both volatile and non-volatile memories. Examples of volatile memories include static random access memory (“SRAM”) 628 and dynamic RAMs (“DRAM”s) 630 and 631. Such memories may be external to the SoC 602, such as the DRAM 630, or internal to the SoC 602, such as the DRAM 631. A DRAM controller 632 coupled to the CPU 604 may control the writing of data to, and reading of data from, the DRAMs 630 and 631. In other embodiments, such a DRAM controller may be included within a processor, such as the CPU 604.[0050] A stereo audio CODEC 634 may be coupled to the analog signal processor 608. Further, an audio amplifier 636 may be coupled to the stereo audio CODEC 634. First and second stereo speakers 638 and 640, respectively, may be coupled to the audio amplifier 636. In addition, a microphone amplifier 642 may be coupled to the stereo audio CODEC 634, and a microphone 644 may be coupled to the microphone amplifier 642. A frequency modulation (“FM”) radio tuner 646 may be coupled to the stereo audio CODEC 634. An FM antenna 648 may be coupled to the FM radio tuner 646. 
Further, stereo headphones 650 may be coupled to the stereo audio CODEC 634. Other devices that may be coupled to the CPU 604 include one or more digital (e.g., CCD or CMOS) cameras 652. In addition, a keypad 660, a mono headset with a microphone 662, and a vibrator device 664 may be coupled to the analog signal processor 608.[0051] A radio frequency (RF) transceiver or modem 654 may be coupled to the analog signal processor 608 and CPU 604. An RF switch 656 may be coupled to the modem 654 and an RF antenna 658.[0052] The SoC 602 may have one or more internal or on-chip thermal sensors 670A and may be coupled to one or more external or off-chip thermal sensors 670B. An analog-to-digital converter (“ADC”) controller 672 may convert voltage drops produced by the thermal sensors 670A and 670B to digital signals.[0053] A power supply 674 and a power management integrated circuit (“PMIC”) 676 may supply power to the SoC 602 via one or more voltage rails (not shown). The PMIC 676 may be an example of the above-described PMIC 204 (FIG. 2). The SoC 602 may include CPR control logic 678, which may be an example of the above-described CPR control logic 210 (FIG. 2).[0054] Firmware or software may be stored in any of the above-described memories, such as DRAM 630 or 631, SRAM 628, etc., or may be stored in a local memory directly accessible by the processor hardware on which the software or firmware executes. Execution of such firmware or software may control aspects of any of the methods described above with regard to FIGs. 4-5, or configure aspects of any of the systems described above with regard to FIGs. 1-2.
Any such memory or other non-transitory storage medium having firmware or software stored therein in computer-readable form for execution by processor hardware may be an example of a “computer-readable medium,” as the term is understood in the patent lexicon.[0055] Alternative embodiments will become apparent to one of ordinary skill in the art to which the invention pertains. Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein.
A method of etching a glass substrate using an etchant that is reversibly activated to etch only in precise locations in which such etching is desired and is deactivated when outside of these locations. The method involves exposing a first side of the glass substrate to a mixture of chemical substances that includes a neutralized etchant that is photosensitive. The neutralized etchant is formed by reacting a neutralizer with an etchant. The method also includes transmitting light from a direction of a second side of the glass into the mixture of chemical substances. In response to exposure to this light, the etchant is reversibly released from a bond to the neutralizer to form the etchant on predetermined areas of the first side of the glass, wherein the predetermined areas are defined by the dimension of the light.
CLAIMS
1. A method of etching glass with an etchant, comprising:
exposing a first side of the glass to a mixture of chemical substances that includes a neutralized etchant that is photosensitive, the neutralized etchant including a neutralizer bonded with an etchant; and
transmitting light from a direction of a second side of the glass into the mixture of chemical substances, wherein, in response to exposure to the light, the etchant is reversibly released from the neutralized etchant to form the etchant on predetermined areas of the first side of the glass.
2. The method of claim 1, further comprising:
forming a pattern on the second side of the glass, wherein the light is transmitted through the pattern to define the predetermined areas.
3. The method of claim 2, wherein the light transmitting through the pattern creates a dimension of the light, wherein the dimension is associated with the predetermined areas.
4. The method of claim 1, further comprising:
maintaining the light for an etching period to allow the etchant to etch the glass in the predetermined areas.
5. The method of claim 3, wherein the glass is etched to form any of the list consisting of: a through-via, a blind via and a depression.
6. The method of claim 1, wherein the mixture of chemical substances also includes additional neutralizer, wherein the amount of neutralizer in the mixture of chemical substances always exceeds the amount of etchant in the mixture of chemical substances.
7. The method of claim 1, wherein the first side of the glass is opposite the second side of the glass.
8. The method of claim 1, further including:
ceasing the transmission of the light, wherein, in response to the ceasing of the transmission of the light, the etchant reacts with the neutralizer to reform the neutralized etchant.
9.
An apparatus configured for etching glass with an etchant, comprising:
means for exposing a first side of the glass to a mixture of chemical substances that includes a neutralized etchant that is photosensitive, the neutralized etchant including a neutralizer bonded with an etchant; and
means for transmitting light from a direction of a second side of the glass into the mixture of chemical substances, wherein, in response to exposure to the light, the etchant is reversibly released from a bond to the neutralizer to form the etchant on predetermined areas of the first side of the glass.
10. The apparatus of claim 9, further comprising:
means for forming a pattern on the second side of the glass, wherein the light is transmitted through the pattern to define the predetermined areas.
11. The apparatus of claim 10, wherein the light transmitting through the pattern creates a dimension of the light, wherein the dimension is associated with the predetermined areas.
12. The apparatus of claim 9, further comprising:
means for maintaining the light for an etching period to allow the etchant to etch the glass in the predetermined areas.
13. The apparatus of claim 12, wherein the glass is etched to form any of the list consisting of: a through-via, a blind via and a depression.
14. The apparatus of claim 9, wherein the mixture of chemical substances also includes additional neutralizer, wherein the amount of neutralizer in the mixture of chemical substances always exceeds the amount of etchant in the mixture of chemical substances.
15. The apparatus of claim 9, wherein the first side of the glass is opposite the second side of the glass.
16. The apparatus of claim 9, further including:
means for ceasing the transmission of the light, wherein, in response to the ceasing of the transmission of the light, the etchant reacts with the neutralizer to reform the neutralized etchant that stops the etching.
17.
An apparatus configured for etching glass with an etchant, comprising:
an etch chamber for exposing a first side of the glass to a mixture of chemical substances that includes a neutralized etchant that is photosensitive, the neutralized etchant including a neutralizer bonded with an etchant; and
a light source for transmitting light from a direction of a second side of the glass into the mixture of chemical substances, wherein, in response to exposure to the light, the etchant is reversibly released from a bond to the neutralizer to form the etchant on predetermined areas of the first side of the glass.
18. The apparatus of claim 17, further comprising:
a patterned photoresist for forming a pattern on the second side of the glass, wherein the light is transmitted through the pattern to define the predetermined areas.
19. The apparatus of claim 18, wherein the light transmitting through the pattern creates a dimension of the light, wherein the dimension is associated with the predetermined areas.
20. The apparatus of claim 17, further comprising:
a computer for maintaining the light for an etching period to allow the etchant to etch the glass in the predetermined areas.
21. The apparatus of claim 20, wherein the glass is etched to form any of the list consisting of: a through-via, a blind via and a depression.
22. The apparatus of claim 17, wherein the mixture of chemical substances also includes additional neutralizer, wherein the amount of neutralizer in the mixture of chemical substances always exceeds the amount of etchant in the mixture of chemical substances.
23. The apparatus of claim 17, wherein the first side of the glass is opposite the second side of the glass.
24. The apparatus of claim 17, further including:
a computer for ceasing the transmission of the light, wherein, in response to the ceasing of the transmission of the light, the etchant reacts with the neutralizer to reform the neutralized etchant that stops the etching.
METHOD AND APPARATUS FOR LIGHT INDUCED ETCHING OF GLASS SUBSTRATES IN THE FABRICATION OF ELECTRONIC CIRCUITS

CROSS-REFERENCE TO RELATED APPLICATION(S)

[0001] This application claims the priority of U.S. Non-Provisional Application Serial No. 13/792,094 entitled "METHOD AND APPARATUS FOR LIGHT INDUCED ETCHING OF GLASS SUBSTRATES IN THE FABRICATION OF ELECTRONIC CIRCUITS" and filed on March 10, 2013, which is expressly incorporated by reference herein in its entirety.

BACKGROUND

Field

[0002] Aspects of the present disclosure relate generally to electronic circuits, and more particularly, to the fabrication of integrated circuits.

Background

[0003] An integrated circuit is an electronic circuit on a small plate (substrate) and may be found in a wide variety of everyday electronic devices. The substrates of integrated circuits may be composed of various different types of material, such as silicon, gallium arsenide, and the like. Glass, which has many cost and performance benefits, has also been used as a substrate for certain types of integrated circuits. For example, glass may be used as the substrate on which miniature electrical and optoelectronic devices such as microelectromechanical systems (MEMS) displays and radio frequency microelectromechanical systems (RF MEMS) are fabricated. MEMS and RF MEMS devices generally include electrical connections through the glass substrate because devices that need to be electrically connected may be located on different sides of the glass substrate. These through-glass electrical connections are desirable because they are shorter than connections that go around the substrate. Shorter electrical connections also provide less resistance and use up less space than longer electrical connections. The through-glass connections are implemented by creating vias, or holes, running from one side of the glass substrate to the opposite side and coating the sides of the vias with conductive material (through via).
Further, some vias may facilitate connections between layers in the substrate without going completely through the substrate (blind via).

[0004] Methods of creating vias in a glass substrate include sandblasting through the glass, applying laser beams to ablate the glass, and constructing the glass with special chemical properties. With respect to the method involving glass with special chemical properties, when areas of the glass are exposed to light, those areas turn into etchable material. Subsequently, etchants are used to etch away the etchable material to create vias. These methods, however, have disadvantages. For example, sandblasting may damage the glass and cause localized cracking, and the minimum via size achievable by this method is larger than the via size typically desired. With respect to lasers, their use is sequential in nature; that is, one via or a small set of vias is created at a time, which results in a slow process. Finally, formulating a glass substrate with the special chemical properties that turn it into etchable material when exposed to light usually involves doping the glass with other materials such as metals. This doping is costly and often negatively affects certain other properties of the glass substrate, one example being excessive RF loss.

[0005] The importance of integrated circuits to modern life is reflected by their widespread use, as mentioned above. Coupled with this widespread use is the fact that the fabrication of integrated circuits requires sophisticated machinery, which translates to large capital investments and overall expensive processes.
In view of these factors, improvements in techniques used in the fabrication of integrated circuits are particularly desirable.

SUMMARY

[0006] Methods and apparatus according to aspects of the disclosure involve etching a glass substrate to create features such as vias by a mechanism that includes an etchant that is reversibly activated to etch only in the precise locations in which etching is desired and is deactivated outside of those locations.

[0007] In one aspect of the disclosure, a method of etching glass with an etchant includes exposing a first side of the glass to a mixture of chemical substances that includes a neutralized etchant that is photosensitive, the neutralized etchant including a neutralizer bonded with an etchant. The method also includes transmitting light from a direction of a second side of the glass into the mixture of chemical substances, wherein, in response to exposure to the light, the etchant is reversibly released from a bond to the neutralizer to form the etchant on predetermined areas of the first side of the glass.

[0008] In an additional aspect of the disclosure, an apparatus configured for etching glass with an etchant includes means for exposing a first side of the glass to a mixture of chemical substances that includes a neutralized etchant that is photosensitive, the neutralized etchant including a neutralizer bonded with an etchant.
The apparatus also includes means for transmitting light from a direction of a second side of the glass into the mixture of chemical substances, wherein, in response to exposure to the light, the etchant is reversibly released from a bond to the neutralizer to form the etchant on predetermined areas of the first side of the glass.

[0009] In an additional aspect of the disclosure, an apparatus configured for etching glass with an etchant includes an etch chamber for exposing a first side of the glass to a mixture of chemical substances that includes a neutralized etchant that is photosensitive, the neutralized etchant including a neutralizer bonded with an etchant. The apparatus also includes a light source for transmitting light from a direction of a second side of the glass into the mixture of chemical substances, wherein, in response to exposure to the light, the etchant is reversibly released from a bond to the neutralizer to form the etchant on predetermined areas of the first side of the glass.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] FIGS. 1A - 1C are block diagrams illustrating an example of liquid chemical etching of a photosensitive glass substrate.

[0011] FIG. 2 is a block diagram illustrating an integrated circuit showing a glass substrate with a through via and a blind via.

[0012] FIGS. 3A - 3C illustrate an example of an etchant.

[0013] FIG. 4 is a diagram illustrating neutralized etchants according to aspects of the disclosure.

[0014] FIG. 5 is a block diagram conceptually illustrating an example of an etching system according to one aspect of the disclosure.

[0015] FIG. 6 is a functional block diagram conceptually illustrating example blocks executed to implement one aspect of the disclosure.

[0016] FIG.
7 is a block diagram conceptually illustrating an example of an etching system according to one aspect of the disclosure.

DETAILED DESCRIPTION

[0017] As noted above, the etching of vias in a substrate for the fabrication of integrated circuits may involve physical and chemical processes that remove substrate material. Some of these processes involve placing a mask on the substrate such that openings in the mask expose the areas that are to be etched and cover the areas that are not to be etched. The uncovered areas of the substrate are then exposed to the particular physical or chemical process being used. FIGS. 1A - 1C are block diagrams illustrating an example of existing liquid chemical etching of a photosensitive glass substrate. To begin this process, in FIG. 1A, photosensitive glass substrate 101 is covered with mask 102. Light from light source 103 is then projected towards substrate 101 and mask 102. Because glass substrate 101 is photosensitive, the areas of glass substrate 101 that are exposed to light (sections 104 and 105) are transformed into etchable material. In FIG. 1B, mask 102 is removed and liquid chemical 106 is applied to glass substrate 101. Liquid chemical 106 etches sections 105 and 104 but does not have any effect on the other areas of glass substrate 101 that were covered by mask 102. FIG. 1B shows that the etching of sections 105 and 104 is partially complete. FIG. 1C shows glass substrate 101 after the etching process is complete so as to form through via 107 and blind via 108, and after liquid chemical 106 has been removed.

[0018] FIG. 2 is a block diagram illustrating an integrated circuit with a glass substrate that has a through via and a blind via. Vias 207 and 208 of FIG. 2 may be made by the process of FIGS. 1A - 1C or any of the via fabrication processes according to the concepts disclosed herein. FIG. 2 shows a through-glass via 207, including a conductive thin film 209, in glass substrate 201.
MEMS device 210 is formed on or otherwise attached to glass substrate 201. Conductive thin film 209 provides a conductive electrical connection through glass substrate 201. In this way, through-glass via 207 provides an electrical connection between MEMS device 210 on one side of glass substrate 201 and MEMS sensor 211 on the other side of glass substrate 201. Blind via 208 has conductive film 212 that electrically connects MEMS device 210 with a layer within glass substrate 201.

[0019] In aspects of the current disclosure, a mixture of chemical substances is used as the etchant. According to aspects of the current disclosure, the etchant is activated by exposure to light. This process is known as photolysis. FIGS. 3A - 3C are block diagrams conceptually illustrating an example of an etchant. FIG. 3A shows etchant E in its active state. In its active state, etchant E exhibits etching properties with respect to a substrate. FIG. 3B shows a neutralizer N. Neutralizer N reacts reversibly with etchant E and thereby neutralizes etchant E. FIG. 3C shows etchant E reacting with neutralizer N to form neutralized etchant N-E. Neutralized etchant N-E does not perform an etching function. Notably, FIG. 3C shows that the reaction is reversible. Consistent with this, bond 301 shown between etchant E and neutralizer N may be broken by exposing neutralized etchant N-E to light. In other words, on exposure to light, neutralized etchant N-E disintegrates to form etchant E and neutralizer N.

[0020] It is important to note that some reactions of substances are activated by light but are not reversed when the light is removed. Such mechanisms should be contrasted with the above described aspects of the disclosure, whereby the reaction that takes place as a result of exposure to light is reversed when the light is removed.
The type of light, such as UV light, used to break the bond of the neutralized etchant depends on the nature of the neutralized etchant. Particular wavelengths of light may be effective with respect to some neutralized etchants and not others.

[0021] Etchant E may be in the form of molecules and compounds that interact with neutralizer N, which also may be in the form of molecules and compounds. Examples of gaseous or vaporous etchants E include fluorine and chlorine. Depending on the material to be etched, other halogens may be used. Examples of neutralizing gaseous or vaporous substances include aryl compounds such as benzyl, xylyl and tolyl compounds. These aryl compounds may be reacted with fluorine and chlorine to produce neutralized etchants N-E such as aryl fluoride, benzyl fluoride and toluene trifluoride, as illustrated in FIG. 4. Other examples of neutralized etchants include fluorine oxides (e.g., F2O, F2O2) and interhalogens (e.g., ClF3). It should be noted that the compounds mentioned above are only examples of neutralized etchants that may be used to implement aspects of the disclosure, and other compounds may be used.

[0022] In determining which compounds would serve as neutralized etchants according to embodiments of the disclosure, the mechanism of photolysis can be considered. Photolysis (photodissociation) of a molecule occurs when the molecule is exposed to irradiating light having photon energy larger than the molecule's bond-dissociation energy ("D", in kcal/mol). Absorption cross section (cm2/mol) is the ability of a molecule to absorb photons of a particular wavelength and polarization. The absorption cross section at the photon energy that corresponds to the bond-dissociation energy should generally be sufficiently high to initiate and sustain the photolysis reaction.

[0023] According to aspects of the disclosure, the photons used to cause photolysis pass through the glass substrate and enter the etching chamber.
In certain aspects of the disclosure, the glass substrates will be 0.5 mm thick or thinner. In such cases, photons in the blue end of the visible spectrum can pass through the glass substrate without being excessively absorbed. Photons having a wavelength of 380 nm (3.26 eV) are high-energy (short-wavelength) photons and could work in these cases. In summary, suitable neutralizer-etchants may be identified by considering the glass substrate (thickness, etc.), the wavelength of photons needed in view of the glass substrate, and the bond-dissociation energy of the molecules in question.

[0024] FIG. 5 is a block diagram conceptually illustrating an example of an etching system according to aspects of the current disclosure. Etching system 50 includes etch chamber 500 configured to contain gases such as etchant E, neutralizer N, neutralized etchant N-E and etching byproduct B. Because etchant E, neutralizer N and neutralized etchant N-E are gases or vapors, they diffuse randomly within etch chamber 500. Etch chamber 500 includes opening 501. Covering this opening is glass substrate 502. Glass substrate 502 is to be etched so as to have certain features, such as vias, in particular locations. In aspects of the disclosure, the glass substrate may be less than or about ½ mm thick, and the holes to be etched in it may have a diameter of less than or about 5 microns. Such dimensions are only an example, and the various aspects of the disclosure are not limited to any particular dimension of glass substrate. Patterned photoresist 503 is a mask placed adjacent to the side of glass substrate 502 that is opposite the side of glass substrate 502 exposed to the gas mixture in etch chamber 500. Light source 504 produces light rays traveling towards patterned photoresist 503. Light passes through the holes in patterned photoresist 503, through glass substrate 502 and into etch chamber 500 via paths 505 and 506 (light that is spatially modulated).
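The photolysis condition discussed in paragraphs [0022] and [0023] amounts to a simple comparison: a photon can break the neutralizer-etchant bond when its energy exceeds the bond-dissociation energy D. The sketch below illustrates that comparison numerically. The conversion constants are standard physics; the example bond energies are illustrative assumptions, not values taken from this disclosure.

```python
# Sketch of the photolysis feasibility check: a photon can dissociate the
# N-E bond when its energy exceeds the bond-dissociation energy D.
# The example bond energies below are hypothetical, for illustration only.

HC_EV_NM = 1239.84           # photon energy (eV) = 1239.84 / wavelength (nm)
KCAL_PER_MOL_PER_EV = 23.06  # 1 eV per molecule is about 23.06 kcal/mol

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given wavelength in nm."""
    return HC_EV_NM / wavelength_nm

def can_photolyze(wavelength_nm: float, bond_energy_kcal_mol: float) -> bool:
    """True if the photon energy exceeds the bond-dissociation energy D."""
    return photon_energy_ev(wavelength_nm) > bond_energy_kcal_mol / KCAL_PER_MOL_PER_EV

# The 380 nm photons mentioned above carry about 3.26 eV:
print(round(photon_energy_ev(380.0), 2))   # 3.26
# A hypothetical bond with D = 60 kcal/mol (~2.6 eV) could be broken:
print(can_photolyze(380.0, 60.0))          # True
# A hypothetical bond with D = 90 kcal/mol (~3.9 eV) could not:
print(can_photolyze(380.0, 90.0))          # False
```

This mirrors the selection procedure summarized above: given the substrate thickness, choose a wavelength the glass transmits, then check that the resulting photon energy clears the candidate compound's bond-dissociation energy.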
In addition to the pre-patterned photoresist, spatially patterned light may be achieved by maskless methods. Paths 505 and 506 are volumes of space defined by the shape of the holes in patterned photoresist 503. For example, if the holes in patterned photoresist 503 are circular, then paths 505 and 506 would be cylindrical volumes within etch chamber 500.

[0025] As illustrated, within etch chamber 500, along paths 505 and 506, there is a concentration of etchant E. This is so because, as the neutralized etchant N-E moves into paths 505 and 506, the light impinges on neutralized etchant N-E, which disintegrates into neutralizer N and etchant E. As etchant E disperses away from paths 505 and 506, etchant E is no longer exposed to light and reacts with neutralizer N, thereby reforming neutralized etchant N-E. In this regard, according to aspects of the disclosure, neutralizer N is available in abundant amounts in etch chamber 500 to react with etchant E when etchant E is not exposed to light. In other words, the amount of neutralizer N in etch chamber 500 is kept at a level in excess of what is required to react with all of the etchant E in etch chamber 500. In this way, etchant E (the active state) is not found outside of paths 505 and 506; outside of those paths, the etchant exists only as neutralized etchant N-E (the inactive state).

[0026] Because etchant E is present in paths 505 and 506 only within etch chamber 500, etchant E contacts glass substrate 502 at areas 507 and 508 only. For example, if paths 505 and 506 are cylindrical volumes, then etchant E will etch glass substrate 502 to create a cylindrical opening. As can be appreciated, the shape of the opening etched depends on the shape of paths 505 and 506, which, in turn, is controlled by the shape of the openings in patterned photoresist 503 that allow light to pass through it into glass substrate 502 and etch chamber 500.
In other words, spatial light patterns formed by photoresist 503 on one side of glass substrate 502, or formed without a photoresist using directly patterned light or scanned laser beams, create an etch pattern on the opposite side of glass substrate 502. Accordingly, etching occurs in areas 507 and 508 and continues down into glass substrate 502. The etching process also produces etch byproduct B, which is a product of the chemical reaction between etchant E and glass substrate 502. When the etching process is complete, for example when the openings (e.g., vias) are fully formed, the light source may be turned off so as to cause all etchant E in etch chamber 500 to react with neutralizer N and thereby terminate the etching process. It should be noted that though FIG. 5 has been described with respect to a mixture of chemical substances that is gas or vapor, aspects of the disclosure may be implemented with the mixture of chemical substances being a liquid.

[0027] FIG. 6 is a functional block diagram conceptually illustrating example blocks executed to implement etching of a glass substrate according to aspects of the disclosure. Block diagram 60 begins at block 601, which involves preparing a mixture of chemical substances for use in etching the glass substrate. The mixture of chemical substances is prepared by reacting an amount of etchant E with an amount of neutralizer N. The amount of neutralizer N is in excess of what is required to completely react with the amount of etchant E. As described above, the reaction of etchant E with neutralizer N forms neutralized etchant N-E. Neutralized etchant N-E is photosensitive and disintegrates into neutralizer N and etchant E on exposure of neutralized etchant N-E to light. Based on this reversible reaction, when the mixture of chemical substances is not exposed to light, it includes neutralized etchant N-E and an excess amount of neutralizer N.
On the other hand, when the mixture of chemical substances is exposed to light, it includes etchant E (in areas where the light impinges), neutralized etchant N-E (in areas not exposed to light) and neutralizer N in both areas.

[0028] At block 602, the glass substrate to be etched (e.g., glass substrate 502, FIG. 5) is exposed to the mixture of chemical substances. This may be done by placing glass substrate 502 at opening 501 of etch chamber 500 and then injecting a prepared mixture of neutralizer N and neutralized etchant N-E into etch chamber 500. Of course, the exposure to the mixture of chemical substances could be done in a different manner, such as adding the mixture of chemical substances first to etch chamber 500 and then using glass substrate 502 to replace other material used to seal opening 501. In aspects of the disclosure, the exposure of glass substrate 502 to the mixture of chemical substances is done in the dark so as to prevent premature and uncontrolled etching of glass substrate 502.

[0029] Once glass substrate 502 is exposed to the mixture of chemical substances, the etching of the glass may begin. To do so, at block 603, light is directed from a direction of a different side of glass substrate 502 than the side to be etched. In this way, etchant E is reversibly released from its bond with neutralizer N in predetermined areas defined by the dimension of the light, such as paths 505 and 506. As such, etchant E begins to etch areas 507 and 508 of glass substrate 502, where paths 505 and 506 interface with glass substrate 502. As noted, the dimension of the light may be determined by forming a pattern, such as by placing patterned photoresist 503 on the side of the glass different from the side being etched. As such, patterned photoresist 503 allows light to enter etch chamber 500 only in certain predetermined areas.
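Blocks 601 through 603 above, together with the etch-period control of blocks 604 and 605 described next, can be sketched as a minimal control sequence. The disclosure specifies no software interface, so every name, rate, and duration below is an illustrative assumption.

```python
# Illustrative sketch of the FIG. 6 flow: prepare the mixture with excess
# neutralizer (block 601), then choose an etch period such that a through
# via etches completely through while a blind via stops short (blocks
# 604-605). All quantities and rates are hypothetical.

def prepare_mixture(etchant_mol: float, neutralizer_mol: float) -> dict:
    """Block 601: react E with excess N so all E is bound as N-E in the dark."""
    if neutralizer_mol <= etchant_mol:
        raise ValueError("neutralizer must exceed etchant to keep E inactive")
    return {
        "N-E": etchant_mol,                  # all etchant bound before exposure
        "N": neutralizer_mol - etchant_mol,  # excess free neutralizer
        "E": 0.0,                            # no active etchant in the dark
    }

def etch_period_s(thickness_um: float, rate_um_per_s: float,
                  through_via: bool) -> float:
    """Blocks 604-605: a through via needs at least thickness/rate seconds;
    a blind via uses a shorter period (here, half, purely as an example)."""
    min_through = thickness_um / rate_um_per_s
    return min_through if through_via else 0.5 * min_through

mix = prepare_mixture(etchant_mol=1.0, neutralizer_mol=3.0)  # block 601
print(mix)                                 # {'N-E': 1.0, 'N': 2.0, 'E': 0.0}
# A 500 um substrate at a hypothetical etch rate of 0.5 um/s:
print(etch_period_s(500.0, 0.5, True))     # 1000.0 (through via)
print(etch_period_s(500.0, 0.5, False))    # 500.0 (blind via)
```

When the period elapses, ceasing the light lets E recombine with the excess N, which is what terminates the etch in block 605.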
It should be noted that means of directing the light to certain predetermined areas other than patterned photoresist 503 may be used in aspects of the disclosure. For example, as presented above, directly patterned light or scanned laser beams may be used to create an etch pattern on the opposite side of glass substrate 502.

[0030] At block 604, the light is maintained for an etching period to allow the etchant to begin etching the glass substrate in areas 507 and 508. When glass substrate 502 has been sufficiently etched, block 605 provides for ceasing the direction of light into etch chamber 500 so as to allow etchant E to react with neutralizer N and thereby reform the neutralized etchant. This stops the etching process.

[0031] In aspects of the disclosure, block 604 is carried out until etchant E completely etches through glass substrate 502. This would be the case, for example, when a through via is being created. After etching the through via, it may be metallized, as is known in the art. An example of a metallized through via is via 207 of FIG. 2.

[0032] In aspects of the disclosure, a blind via may be made by carrying out block 604 for an etch period that is less than the minimum time required for etching a through via. In this way, the via does not go completely through glass substrate 502. Like the through via, the blind via may be metallized. An example of a metallized blind via is via 208 of FIG. 2.

[0033] Although aspects of the present disclosure have been described with reference to the blocks of FIG. 6, it should be appreciated that operation of the present disclosure is not limited to the particular blocks and/or the particular order of the blocks illustrated in FIG. 6. Accordingly, aspects of the disclosure may provide functionality as described herein using various blocks in a sequence different than those of FIG. 6.
For example, controlled directing of light as described with respect to block 603 may be performed before exposing the glass substrate to the mixture of chemical substances as described at block 602.

[0034] It should also be noted that, in aspects of the disclosure, non-vertical vias may be created in a glass substrate. FIG. 7 shows an etching system that may be used in creating non-vertical vias according to aspects of the disclosure. Etching system 70 is similar to system 50; thus, etch chamber 700 and opening 701 are similar to etch chamber 500 and opening 501 of etching system 50. However, non-vertical vias are desired in glass substrate 702. To achieve this, light source 704 or patterned photoresist 703, or both, are configured so that the light rays pass through glass substrate 702 and etch chamber 700 non-vertically. Because of this, paths 705 and 706 are non-vertical. That is, etchant E is disposed in a non-vertical manner consistent with the path of the light rays. Glass substrate 702 is therefore etched in the direction DE shown. In effect, in aspects of the disclosure, the light rays not only control the activation of the etchant material but also control the direction of etching into the glass substrate.

[0035] As can be appreciated, by changing the shape or direction, or both, of the light path through the glass substrates and etch chambers of etch systems 50 and 70, the shape and configuration of the hole being etched in the glass substrate can be controlled. As such, the present disclosure may include an etch system that is configured to provide different types of etching (e.g., etching as shown in etching systems 50 and 70) and thereby provide a way to easily control various designs, shapes or dimensions of vias or other features produced by etching of a glass substrate.
For example, with such a configurable etch system, a via may be etched to have a vertical portion, a non-vertical portion, a concave portion, a convex portion, other shapes, and combinations thereof.

[0036] In aspects of the disclosure, the operation of light sources 504 and 704 may be controlled by signals from computers 509 and 709. In this way, the etch period may be controlled by computers 509 and 709. Further, computers 509 and 709 may precisely control the direction of etching and the shape of the etched feature in various portions of glass substrates 502 and 702 by controlling the direction of the light. Further, computers 509 and 709 may be used to control movement of patterned photoresists 503 and 703 to control the direction of etching and the shape of the etched feature in various sections of glass substrates 502 and 702, respectively. In aspects of the disclosure, the combined effect of the control by computers 509 and 709 of light sources 504 and 704, respectively, and of patterned photoresists 503 and 703, respectively, implements the desired direction of etching and shape of the etched feature.

[0037] While aspects of the disclosure have been described with respect to etching vias, it should be noted that aspects of the disclosure may also include etching other features, such as depressions on a glass substrate, for other purposes in making integrated circuits. For example, the depressions may be etched and material (e.g., conductive material) later deposited in these depressions.

[0038] In view of the above described disclosure being used to etch glass substrates for fabrication of integrated circuits, aspects of the disclosure include electronic devices that include integrated circuits fabricated in part by the use of the techniques described herein. Such electronic devices include computers, video camcorders, televisions, radios, cameras, telephones and the like.

[0039] As can be appreciated, the disclosure herein provides various improvements over the existing art.
For example, the activation and deactivation of the etchant by light exposure enables better control over which areas of the glass substrate get etched. In this way, various designs, shapes or configurations of the features being etched may be achieved. Further, as compared to etching based on doping the glass substrate to be photosensitive, the current disclosure avoids the doping of glass and the problems associated with such a procedure (e.g., RF loss). Moreover, doping the glass involves doping all of the glass and then using light to act on only certain areas of the glass. In effect, the material used to dope the areas not affected by the light is wasted and merely remains in the glass to cause the previously mentioned deleterious effects. In contrast, according to aspects of the disclosure, the etchant is activated and used only where it is needed, and unused etchant that remains in the chamber can be used for another glass substrate. Further, regular (cheaper) glass may be used in aspects of the disclosure. Furthermore, because aspects of the disclosure involve light activating a neutralized etchant to become an active etchant, the size of the hole being etched can be very small and uniform, as established by the size and consistency of the path of the light ray used. In sum, the present disclosure provides a more efficient and precisely controlled method of etching glass substrates to form features such as vias.

[0040] The detailed description set forth above, in connection with the appended drawings, is intended as a description of various configurations and is not intended to limit the scope of the disclosure. Rather, the detailed description includes specific details for the purpose of providing a thorough understanding of the inventive subject matter.
It will be apparent to those skilled in the art that these specific details are not required in every case and that, in some instances, well-known structures and components are shown in block diagram form for clarity of presentation.

[0041] FIG. 5 is a block diagram conceptually illustrating a system for etching glass with an etchant. Etch chamber 500 provides means for exposing a first side of the glass to a mixture of chemical substances that includes a neutralized etchant that is photosensitive, the neutralized etchant including a neutralizer bonded with an etchant. Light source 504 and patterned photoresist 503 provide means for transmitting light from a direction of a second side of the glass into the mixture of chemical substances, wherein, in response to exposure to the light, the etchant is reversibly released from a bond to the neutralizer to form the etchant on predetermined areas of the first side of the glass. Patterned photoresist 503 provides means for forming a pattern on the second side of the glass, wherein the light is transmitted through the pattern to define the predetermined areas. Computer 509 and light source 504 provide means for maintaining the light for an etching period to allow the etchant to etch the glass in the predetermined areas. Computer 509 and light source 504 provide means for ceasing the transmission of the light, wherein, in response to the ceasing of the transmission of the light, the etchant reacts with the neutralizer to reform the neutralized etchant that stops the etching.

[0042] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0043] The functional blocks and modules in FIG. 6 may comprise processors, electronic devices, hardware devices, electronic components, logical circuits, memories, software codes, firmware codes, etc., or any combination thereof.

[0044] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0045] The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0046] The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[0047] In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.[0048] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. [0049] WHAT IS CLAIMED IS:
A strained silicon semiconductor arrangement with a shallow trench isolation (STI) structure has a strained silicon (Si) layer (52) formed on a silicon germanium (SiGe) layer (50). A trench (58) extends through the Si layer (52) into the SiGe layer (50), and sidewall spacers (62) are employed that cover the entirety of the sidewalls within the trench in the SiGe layer (50). Following STI fill, polish and nitride stripping process steps, further processing can be performed without concern of the SiGe layer (50) being exposed to a silicide formation process.
WHAT IS CLAIMED IS: 1. A method of forming an isolation trench, comprising: forming a silicon (Si) layer 52 on a silicon-germanium (SiGe) layer 50; forming a trench 58 extending through the Si layer 52 and into the SiGe layer 50; forming sidewall spacers 62 in the trench 58; and filling the trench 58 with isolating material 64. 2. The method of claim 1, wherein the step of forming sidewall spacers 62 includes depositing a spacer layer 60 in the trench 58. 3. The method of claim 2, wherein the step of forming sidewall spacers 62 includes anisotropically etching the spacer layer 60. 4. The method of claim 3, wherein the step of anisotropically etching includes overetching the sidewall spacers 62 until at least a portion of the Si layer 52 within the trench 58 is exposed. 5. The method of claim 4, wherein the spacer layer 60 is a nitride. 6. The method of claim 5, further comprising forming a nitride layer 54 on the silicon layer 52 and a capping layer 56 on the nitride layer 54, prior to forming the trench 58, wherein the step of forming the trench 58 also includes forming the trench 58 through the capping layer 56 and the nitride layer 54 extending into the Si layer 52 and the SiGe layer 50. 7. The method of claim 6, further comprising removing the capping layer 56 and the nitride layer 54 after the trench 58 is filled with the isolating material 64. 8. The method of claim 7, further comprising forming silicide with the Si layer 52 after the step of removing the capping layer 56 and the nitride layer 54. 9. A strained silicon semiconductor arrangement within a shallow trench isolation structure comprising: a strained silicon (Si) layer 52 on a silicon germanium (SiGe) layer 50; a trench 58 extending through the Si layer 52 into the SiGe layer 50, the trench 58 having sidewalls; sidewall spacers 62 covering the entirety of the sidewalls within the trench 58 in the SiGe layer 50; and field oxide 64 filling the trench 58. 10. 
The arrangement of claim 9, wherein the sidewall spacers 62 cover only a portion of the sidewalls within the trench 58 in the Si layer 52.
METHOD OF FORMING ISOLATION TRENCH WITH SPACER FORMATIONFIELD OF THE INVENTION[01] The present invention relates to the fabrication of integrated circuit semiconductor devices, and more particularly, to fabricating highly integrated circuit semiconductor devices having high-quality shallow trench isolation (STI) without exposing the portions of the sidewalls of the trench.BACKGROUND OF THE INVENTION[02] As miniaturization of elements of integrated circuit semiconductor devices drives the industry, the width and the pitch of an active region have become smaller, thus rendering the use of traditional LOCOS (local oxidation of silicon) isolation techniques problematic. STI is considered a more viable isolation technique than LOCOS because, by its nature, STI creates hardly any bird's beak characteristic of LOCOS, thereby achieving better control of active width at sub-micron feature sizes.[03] Conventional STI fabrication techniques include forming a pad oxide on an upper surface of a semiconductor substrate, forming a nitride, e.g., silicon nitride, polish stop layer thereon, typically having a thickness of greater than 1,000 Å, forming an opening in the nitride polish stop layer, anisotropically etching to form a trench in the semiconductor substrate, forming a thermal oxide liner in the trench, and filling the trench with insulating material, such as silicon oxide, forming an overburden on the nitride polish stop layer. Planarization is then implemented, as by conducting chemical mechanical polishing (CMP). During subsequent processing, the nitride layer is removed along with the pad oxide followed by formation of active areas, which typically involve masking, ion implantation, and cleaning steps. During such cleaning steps, the top corners of the field oxide are isotropically removed leaving a void or "divot" in the oxide fill.[04] For example, a conventional STI fabrication technique is illustrated in Figs. 
1 through 4, wherein similar features are denoted by similar reference characters. Adverting to Fig. 1, a pad oxide 11 is formed over an upper surface of a semiconductor substrate 10, and a silicon nitride polish stop layer 12 is formed thereon, typically at a thickness in excess of 1,000 Å. A photomask (not shown) is then used to form an opening through the nitride polish stop layer 12 and pad oxide 11, and a trench is formed in the semiconductor substrate 10. [05] Subsequently, a thermal oxide liner (not shown) is formed in the trench, an insulating material is deposited and planarization implemented, as by CMP, resulting in the intermediate structure illustrated in Fig. 2, the reference character 20 denoting the oxide fill. Subsequently, the nitride polish stop layer 12 and pad oxide layer 11 are removed and cleaning steps, which include oxide-consuming HF-based wet steps, are performed on the active regions during the process of doping and gate/sacrificial oxide formation. In current ULSI integration schemes, two and even three different gate oxides are integrated onto a single chip to facilitate different types of transistors. This requires an enhanced oxide etch budget and can exacerbate the divot formation. Such cleaning steps result in the formation of divots 30 as illustrated in Fig. 3. [06] The STI divots are problematic in various respects. For example, STI divots are responsible for high field edge leakage, particularly with shallow source/drain junctions. As shown in Fig. 4, silicide regions 41 formed on shallow source/drain regions 40 grow steeply downwards, as illustrated by reference character 42, below the junction depth formed at a later stage, resulting in high leakage and shorting. Segregation of dopants, notably boron, at STI field edges reduces the junction depth. 
Accordingly, after the junctions are silicided, the silicide 42 penetrating into the substrate causes shorting routes and, hence, large leakage occurrence from the source/drain junctions to a well or substrate.[07] In strained silicon applications, in which a thin silicon (Si) layer is provided on a silicon germanium (SiGe) layer, the potential for formation of STI divots during the STI process exposes the underlying SiGe layer during the process flow. This is highly undesirable as it leads to poor silicide formation, among other issues.SUMMARY OF THE INVENTION[08] There is a need for a method of protecting an underlying SiGe layer of a strained silicon arrangement in the manufacturing process, such that exposure of the SiGe layer caused by divots in a field oxide region does not allow silicide to form at the SiGe layer. Exposure of the SiGe layer also leads to GeO2 formation at the surface. Unlike SiO2, GeO2 is unstable and can dissolve even in hot water, exposing more of the SiGe to attack. Redeposition and incorporation of Ge-species from solution into electrically conductive areas may also result in undesirable electrical effects.[09] This and other needs are met by embodiments of the present invention which provide a method of forming an isolation trench comprising the steps of forming a silicon-germanium (SiGe) layer and a silicon (Si) layer on the SiGe layer. A trench is formed extending through the Si layer and into the SiGe layer. Sidewall spacers are formed in the trench, and the trench is filled with isolating material. In certain embodiments of the invention, the formation of the sidewall spacers includes anisotropically etching the spacer layer until at least a portion of the silicon layer within the trench is exposed, but the sidewalls of the trench in the SiGe layer remain completely covered by the sidewall spacers. 
[10] With the methodology of the present invention, even if divots are formed at the field oxide by the wet cleans, the SiGe layer is not exposed. This preserves the integrity of the silicide formation, among other advantages.[11] The earlier stated needs are met by other aspects of the present invention which provide a method of forming shallow trench isolation structures in a strained silicon arrangement, comprising the steps of forming a strained silicon layer on a silicon-germanium layer. A trench is formed in the silicon layer and the silicon germanium layer. The trench is filled with field oxide while preventing exposure of sidewalls of the trench in the silicon germanium layer to the field oxide.[12] The earlier stated needs are met by still further aspects of the present invention which provide a strained silicon semiconductor arrangement with a shallow trench isolation structure. The arrangement comprises a strained silicon (Si) layer on a silicon germanium (SiGe) layer. A trench extends through the Si layer into the SiGe layer, this trench having sidewalls. Sidewall spacers are provided that cover the entirety of the sidewalls within the trench in the SiGe layer. Field oxide is provided that fills the trench.[13] The foregoing and other features, aspects and advantages of the present invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.BRIEF DESCRIPTION OF THE DRAWINGS[14] Figs. 1 through 4 schematically illustrate sequential phases of a conventional method for forming STI regions. In Figs. 1 through 4, similar features are denoted by similar reference characters. [15] Figs. 5 through 11 schematically illustrate sequential phases of a method in accordance with an embodiment of the present invention. In Figs. 
5 through 11, similar features are denoted by similar reference characters.DETAILED DESCRIPTION OF THE INVENTION[16] The present invention addresses and solves problems related to the implementation of STI methodology and the SiGe layer in a silicon germanium-on-insulator (SGOI) arrangement. The formation of STI divots during the creation of STI structures may expose the SiGe layer in SGOI arrangements. This exposure leads to poor silicide formation, for example, and other deleterious effects. The present invention addresses these problems, in part, by forming sidewall spacers in the trench that extends through the Si layer and into the SiGe layer. The sidewall spacers are recessed and completely cover the sidewalls of the SiGe layer within the trench. Hence, even if an oxide divot is formed by wet cleans, the SiGe layer will not be exposed. This prevents the poor silicide formation and other deleterious effects caused by the oxide divot and exposure of the SiGe layer. [17] A method in accordance with an embodiment of the present invention is schematically illustrated in Figs. 5 through 11, wherein similar features are denoted by similar reference characters. [18] Adverting to Fig. 5, a layer of silicon germanium 50 is provided on which a layer of silicon 52 is provided. In an SGOI arrangement, for example, the SiGe layer 50 and the Si layer 52 are provided on an insulator layer (not shown), such as a buried oxide layer. Conventional methodologies for forming the SiGe layer 50 and the Si layer 52 may be employed.[19] A nitride masking layer 54 and an oxide cap layer 56 are formed on the silicon layer 52. Such layers may be formed by conventional deposition techniques or other methodologies. [20] A conventional etch is performed, the results of which are depicted in Fig. 6. The conventional STI etch creates a recess 58 through the oxide cap layer 56, the nitride layer 54, the silicon layer 52 and the silicon germanium layer 50. 
Recess 58 may extend into the silicon layer 52 and the silicon germanium layer 50 to a conventional depth. A conventional STI etch recipe may be employed to perform the etching. [21] Following the etching of the STI trench 58, a spacer layer 60 is deposited in the trench by conventional deposition techniques. For example, a suitable material to be deposited is silicon nitride. An etch is now performed, the results of which are depicted in Fig. 8. The etch is one that is selective to oxide, for example, so that the oxide cap layer 56 is preserved. The etching may be anisotropic etching, employing, for example, CH3F + O2 or CH3F + O2 + Ar. [22] In certain embodiments of the invention, the anisotropic etching, such as reactive ion etching, is performed to an extent that forms the sidewall spacer 62 but with an overetch that is enough to recess the spacers 62. In other words, the overetching causes exposure of at least a portion of the sidewalls of the silicon layer 52 within the trench 58. The recess of the spacers is necessary in order to ensure that no part of the spacers 62 is contiguous with the nitride layer 54. Since the nitride layer 54 is typically etched away in a phosphoric acid wet etch bath, any spacer contacting it would be attacked as well, unless it were recessed and thus protected by the oxide filling the trench region. However, the overetching is stopped in good time to assure the coverage of the entire sidewalls of the SiGe layer 50 within the trench 58. The bottom of the trench 58 is exposed by the reactive ion etching.[23] With the sidewall spacer 62 thus formed, the STI process continues in a conventional manner, as depicted in Figs. 9-11. Hence, Fig. 9 depicts the filling of trench 58 with isolation material, such as field oxide 64. [24] As depicted in Fig. 10, following the STI fill, a polishing operation is performed that removes the excess STI fill 64 and the oxide layer 56. A conventional polishing technique may be employed. 
[25] Following the polishing, a nitride strip is then performed, the results of which are depicted in Fig. 11. The nitride layer 54 is removed during the nitride strip, leaving behind the field oxide 64 of the STI arrangement. In subsequent wet cleans, an oxide divot could potentially be formed, as described earlier with respect to Figs. 1-4. These potential oxide divots are depicted in phantom in Fig. 11, and provided with reference numeral 66. Hence, even if the oxide divots 66 are formed by the wet cleans, the sidewalls of the SiGe layer 50 will not be exposed as they are securely protected by the sidewall spacers 62. Subsequent processing can now be performed without concern for silicide formation and other deleterious effects caused by SiGe exposure. [26] The present invention enjoys industrial applicability in fabricating highly integrated semiconductor devices containing STI regions on SGOI arrangements or other arrangements, with improved silicide formation. The present invention enjoys particular applicability in manufacturing semiconductor devices with sub-micron dimensions. [27] Although the present invention has been described and illustrated in detail, it is to be clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the scope of the present invention being limited only by the terms of the appended claims.
Embodiments of the present disclosure include a power management unit for controlling power in a microcontroller. The unit causes a first voltage to be provided in an active mode. In a sleep mode, the unit determines whether a supply voltage is less than an upper reference voltage and, if so, causes a second voltage greater than the first voltage to be provided. If not, the unit inhibits operation of voltage regulation of power supplied to the microcontroller. After inhibition of operation of voltage regulation, the unit determines whether the supply voltage has fallen to a lower reference voltage and, if so, applies the second voltage to the microcontroller.
CLAIMSWhat is claimed is:1. A microcontroller operable in an active mode and a sleep mode and comprising:a voltage supply line connectable to ground via an external decoupling capacitor; a voltage regulator circuit coupled to the voltage supply line and configured to apply to the voltage supply line one of a first voltage and a second voltage greater than the first voltage;a voltage monitor circuit coupled to the voltage supply line and configured to:compare a supply voltage on the voltage supply line to a third voltage that is greater than the first voltage, to produce a first signal; andcompare the supply voltage to a fourth voltage that is less than the first voltage, to produce a second signal;a power management unit circuit electrically connected to the voltage regulator circuit and to the voltage monitor circuit and configured to:determine whether the microcontroller is in the active mode or the sleep mode;responsive to a determination that the microcontroller is in the active mode, cause the voltage regulator circuit to apply the first voltage to the voltage supply line;responsive to a determination that the microcontroller is in the sleep mode:determine from the first signal whether the supply voltage is less than the third voltage;responsive to a determination that the supply voltage is less than the third voltage, cause the voltage regulator circuit to apply the second voltage to the voltage supply line; andresponsive to a determination that the supply voltage is not less than the third voltage, inhibit operation of the voltage regulator circuit at a first time;subsequent to the first time, determine from the second signal whether the supply voltage has fallen to the fourth voltage; and responsive to a determination that the supply voltage has fallen to the fourth voltage, cause the voltage regulator circuit to apply the second voltage to the voltage supply line at a second time; andcircuits configured to output control signals from the microcontroller to 
one or more devices electrically connected to the microcontroller, the circuits configured to draw power from the voltage supply line and the decoupling capacitor for a time period between the first time and the second time.2. The microcontroller of claim 1, wherein the second voltage and the third voltage are approximately equal.3. The microcontroller of any of Claims 1-2, wherein the circuits require a minimum voltage during the sleep mode, and the fourth voltage is greater than the minimum voltage.4. The microcontroller of any of Claims 1-3, wherein the power management unit circuit is further configured to:inhibit operation of the voltage monitor circuit at the first time; andcause the voltage monitor circuit to operate at regular intervals between the first time and the second time, wherein the intervals between operations of the voltage monitor circuit correspond to a predetermined period of time.5. The microcontroller of Claim 4, further comprising a counter that is configured to store a value corresponding to total time elapsed between the first time and the second time, wherein the power management unit circuit is further configured to:calculate a second predetermined period of time based on the value, wherein the second predetermined period of time is different than the predetermined period of time.6. 
A method for managing a microcontroller operable in an active mode and a sleep mode, the method comprising:determining whether the microcontroller is in the active mode or the sleep mode; responsive to a determination that the microcontroller is in the active mode, causing a first voltage to be supplied by a voltage regulator circuit to a voltage supply line connected to ground via an external decoupling capacitor;responsive to a determination that the microcontroller is in the sleep mode, causing a second voltage greater than the first voltage to be supplied by the voltage regulator circuit to the voltage supply line;comparing, by a voltage monitor circuit coupled to the voltage supply line, a supply voltage on the voltage supply line to a third voltage that is greater than the first voltage; determining, based on the comparison of the supply voltage to the third voltage, whether the supply voltage is less than the third voltage;responsive to a determination that the supply voltage is less than the third voltage, causing the second voltage to be supplied to the voltage supply line; andresponsive to a determination that the supply voltage is not less than the third voltage, inhibiting operation of the voltage regulator circuit at a first time;subsequent to the first time, comparing, by the voltage monitor circuit, the supply voltage on the voltage supply line to a fourth voltage that is less than the first voltage;determining, based on the comparison of the supply voltage to the fourth voltage, whether the supply voltage has fallen to the fourth voltage; andresponsive to a determination that the supply voltage has fallen to the fourth voltage, causing the second voltage to be supplied by the voltage regulator circuit to the voltage supply line at a second time; andproviding power, during a time period between the first time and the second time, from the voltage supply line and the decoupling capacitor to circuits that output control signals from the microcontroller to one 
or more devices electrically connected to the microcontroller. 7. The method of Claim 6, wherein the second voltage and the third voltage are approximately equal.8. The method of any of Claims 6-7, wherein the circuits require a minimum voltage during the sleep mode, and the fourth voltage is greater than the minimum voltage.9. The method of any of Claims 6-8, further comprising:inhibiting operation of the voltage monitor circuit at the first time; andcausing the voltage monitor circuit to operate at regular intervals between the first time and the second time, wherein the intervals between operations of the voltage monitor circuit correspond to a predetermined period of time.10. The method of claim 9, further comprising:generating, using a counter, a value corresponding to total time elapsed between the first time and the second time; andcalculating a second predetermined period of time based on the value, wherein the second predetermined period of time is different than the predetermined period of time.11. 
A power management unit (PMU) for controlling power in a microcontroller, comprising circuitry configured to:determine whether the microcontroller is in an active mode or a sleep mode;based on a determination that the microcontroller is in the active mode, cause a first voltage to be provided to the microcontroller;based on a determination that the microcontroller is in the sleep mode:determine whether a supply voltage of the microcontroller is less than an upper reference voltage;based on a determination that the supply voltage is less than the upper reference voltage, cause a second voltage to be provided to the microcontroller, the second voltage greater than the first voltage;based on a determination that the supply voltage is greater than or equal to the upper reference voltage, inhibit operation of voltage regulation of power supplied to the microcontroller;after inhibition of operation of voltage regulation, determine whether the supply voltage has fallen to a lower reference voltage; and based on a determination that the supply voltage has fallen to the lower reference voltage, apply the second voltage to the microcontroller.12. The PMU of Claim 11, further comprising a connection to an external decoupling capacitor.13. The PMU of any of Claims 11-12, further comprising circuitry configured to connect to a voltage regulator, the circuitry to provide the power to the microcontroller. 14. The PMU of any of Claims 11-13, further comprising circuitry configured to connect to a voltage monitor, the circuitry to receive indications of statuses of the supply voltage.15. The PMU of any of Claims 11-14, further comprising circuitry to provide power to output circuits of the microcontroller from power from an external decoupling capacitor after inhibition of voltage regulation.16. The PMU of any of Claims 11-15, wherein the second voltage and the upper reference voltage are approximately equal.17. 
The PMU of any of Claims 11-16, wherein output circuits of themicrocontroller require a minimum voltage during the sleep mode, and the lower reference voltage is greater than the minimum voltage. 18. The PMU of any of Claims 11-17, further comprising circuitry to alternately and periodically inhibit and enable voltage monitoring after voltage regulation is inhibited.19. The PMU of claim 18, further comprising:a counter configured to store a value corresponding to total time elapsed after voltage regulation is inhibited until the determination that the supply voltage has fallen to the lower reference voltage and the second voltage is provided;circuitry configured to adjust a period of alternately inhibiting and enabling voltage monitoring based upon the value.
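Claims 5, 10, and 19 above describe deriving a new monitoring period from a counter value that records how long the supply took to decay from regulator inhibition to the lower reference. A minimal sketch of one way such an adaptation could work is shown below; the function name, the margin factor, and the policy of dividing the measured decay time into a fixed number of monitor wake-ups are illustrative assumptions, not details taken from the claims.

```python
# Illustrative sketch only: the function name, the margin factor, and the
# fixed-checks policy are assumptions, not details from the claims.

def adapt_monitor_period(counter_value, checks_per_decay=8, margin=0.5):
    """Derive a new voltage-monitor wake-up period from the counter value,
    i.e. the total time elapsed between inhibiting the regulator (the first
    time) and the supply falling to the lower reference (the second time).
    A margin below 1.0 biases the period short, so the monitor samples well
    before the supply could cross the lower threshold."""
    return margin * counter_value / checks_per_decay
```

For example, if the counter measured an 80 ms decay, the monitor would subsequently wake every 5 ms rather than at a fixed, worst-case rate, trading a little sampling latency for fewer monitor activations.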
SYSTEMS AND METHODS FOR MANAGING POWER CONSUMED BY A MICROCONTROLLER IN AN INACTIVE MODETECHNICAL FIELDEmbodiments of the present disclosure are directed to power management and, more particularly, to systems and methods for managing power consumed by a microcontroller in an inactive mode.BACKGROUNDMicrocontrollers are often used in low power applications to provide control signals to devices in which they are installed (e.g., devices that operate from battery power). Many microcontrollers for low power applications can enter an inactive mode in which the state of the microcontroller can be maintained, but in which the microcontroller does not operate to provide control signals as it does in the active mode. However, such low power microcontrollers use circuits for providing an internal supply voltage that are typically designed to optimize efficiency in either the active mode or the inactive mode, as it is difficult to design such circuits for high efficiency in both modes. 
This can lead to wasting power in either the active or sleep modes, as the circuits are not optimized for efficiency in both.SUMMARYIn accordance with some embodiments of the disclosed subject matter, systems and methods for managing power consumed by a microcontroller in an inactive mode are provided.In accordance with some embodiments of the disclosed subject matter, a microcontroller operable in an active mode and a sleep mode is provided, the microcontroller comprising: a voltage supply line connectable to ground via an external decoupling capacitor; a voltage regulator coupled to the voltage supply line and configured to apply to the voltage supply line one of a first voltage and a second voltage greater than the first voltage; a voltage monitor coupled to the voltage supply line and configured to: compare a supply voltage on the voltage supply line to a third voltage that is greater than the first voltage, to produce a first signal; and compare the supply voltage to a fourth voltage that is less than the first voltage, to produce a second signal; a power management unit electrically connected to the voltage regulator and to the voltage monitor and configured to: determine whether the microcontroller is in the active mode or the sleep mode; responsive to a determination that the microcontroller is in the active mode, cause the voltage regulator to apply the first voltage to the voltage supply line; responsive to a determination that the microcontroller is in the sleep mode: determine from the first signal whether the supply voltage is less than the third voltage; responsive to a determination that the supply voltage is less than the third voltage, cause the voltage regulator to apply the second voltage to the voltage supply line; and responsive to a determination that the supply voltage is not less than the third voltage, inhibit operation of the voltage regulator at a first time; subsequent to the first time, determine from the second signal whether the 
supply voltage has fallen to the fourth voltage; and responsive to a determination that the supply voltage has fallen to the fourth voltage, cause the voltage regulator to apply the second voltage to the voltage supply line at a second time; and circuits that output control signals from the microcontroller to one or more devices electrically connected to the microcontroller, the circuits drawing power from the voltage supply line and the decoupling capacitor for a time period between the first time and the second time.In some embodiments, the second voltage and the third voltage are approximately equal.In some embodiments, the circuits require a minimum voltage during the sleep mode, and the fourth voltage is greater than the minimum voltage.In some embodiments, the power management unit is further configured to: inhibit operation of the voltage monitor at the first time; and cause the voltage monitor to operate at regular intervals between the first time and the second time, wherein the intervals between operations of the voltage monitor correspond to a predetermined period of time.In some embodiments, the microcontroller further comprises a counter that is configured to store a value corresponding to total time elapsed between the first time and the second time, wherein the power management unit is further configured to: calculate a second predetermined period of time based on the value, wherein the second predetermined period of time is different than the predetermined period of time.In some embodiments, a method for managing a microcontroller operable in an active mode and a sleep mode is provided, the method comprising: determining whether the microcontroller is in the active mode or the sleep mode; responsive to a determination that the microcontroller is in the active mode, causing a first voltage to be supplied by a voltage regulator to a voltage supply line connected to ground via an external decoupling capacitor; responsive to a determination that the 
microcontroller is in the sleep mode, causing a second voltage greater than the first voltage to be supplied by the voltage regulator to the voltage supply line; comparing, by a voltage monitor coupled to the voltage supply line, a supply voltage on the voltage supply line to a third voltage that is greater than the first voltage; determining, based on the comparison of the supply voltage to the third voltage, whether the supply voltage is less than the third voltage; responsive to a determination that the supply voltage is less than the third voltage, causing the second voltage to be supplied to the voltage supply line; and responsive to a determination that the supply voltage is not less than the third voltage, inhibiting operation of the voltage regulator at a first time; subsequent to the first time, comparing, by the voltage monitor, the supply voltage on the voltage supply line to a fourth voltage that is less than the first voltage; determining, based on the comparison of the supply voltage to the fourth voltage, whether the supply voltage has fallen to the fourth voltage; and responsive to a determination that the supply voltage has fallen to the fourth voltage, causing the second voltage to be supplied by the voltage regulator to the voltage supply line at a second time; and providing power, during a time period between the first time and the second time, from the voltage supply line and the decoupling capacitor to circuits that output control signals from the microcontroller to one or more devices electrically connected to the microcontroller.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows an example of a microcontroller configured to manage power consumed in an inactive mode in accordance with some embodiments of the disclosed subject matter.

FIG. 2 shows an example of voltage on an internal supply line during different operational modes and states of a microcontroller in accordance with some embodiments of the disclosed subject matter.

FIG.
3 shows an example of a process for managing power consumed by a microcontroller in an inactive mode in accordance with some embodiments of the disclosed subject matter.

FIG. 4 shows another example of voltage on an internal supply line during different operational modes and states of a microcontroller in accordance with some embodiments of the disclosed subject matter.

FIG. 5 shows an example of a process for managing power consumed by a microcontroller in an inactive mode using an adaptively calculated time period in accordance with some embodiments of the disclosed subject matter.

While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein.

DETAILED DESCRIPTION

In accordance with various embodiments, mechanisms (which can, for example, include systems and methods) for managing power consumed by a microcontroller in an inactive mode are provided.

In some embodiments of the subject matter disclosed herein, a microcontroller can include a voltage regulator that is configured to maintain a constant internal supply voltage during an active mode of the microcontroller. For example, the voltage regulator can convert an external supply voltage (e.g., Vcc) to an internal supply voltage to supply power to circuits of the microcontroller that are used to provide control signals to a device in which the microcontroller is installed.

In some embodiments, the microcontroller can enter an inactive mode (sometimes referred to as a "standby mode" or "sleep mode") when not being used to provide control signals.
For example, if the device in which the microcontroller is installed is not used for a predetermined period of time, or if the device is put into a sleep mode, the microcontroller can attempt to conserve power by entering the inactive mode.

In some embodiments, the microcontroller can include a power management unit that can operate during the inactive mode to alternately boost the internal supply voltage to a boost voltage level and turn off the voltage regulator until the voltage falls to a threshold voltage. For example, the power management unit can cause the voltage regulator to boost the internal supply voltage to an elevated level (e.g., a boost level) when the microcontroller enters the sleep mode. In such an example, when the internal supply voltage reaches a predetermined boost voltage, the power management unit can stop operations of the voltage regulator to conserve power. While the voltage regulator is turned off, the internal supply voltage can decrease over time due to leakage in the microcontroller and operation of certain circuits (e.g., the power management unit) while in the inactive mode. In some embodiments, power can be provided during the inactive mode from an external decoupling capacitor coupled between an internal voltage supply line and ground that can store energy (i.e., in an electric field) supplied by the voltage regulator.

In some embodiments, when the internal supply voltage falls below a threshold voltage (e.g., due to leakage, usage by components that are active in the inactive period, and any other losses), the power management unit can re-enable the voltage regulator to boost the voltage back to the boost level.

FIG. 1 shows an example 100 of a microcontroller configured to manage power consumed in an inactive mode in accordance with some embodiments of the disclosed subject matter. As shown in FIG.
1, microcontroller 100 can include logic and/or memory 102 that can be used to, for example, receive input (e.g., from a system being controlled by microcontroller 100), calculate values, and provide output (e.g., to the system being controlled by microcontroller 100). In some embodiments, input and output can be received and provided using any suitable input and/or output pins coupled to logic and/or memory 102.

In some embodiments, microcontroller 100 can include a voltage regulator 104 that can be configured to maintain a voltage (Vs) of an internal voltage supply line 106 at a particular level. In some embodiments, voltage regulator 104 can be implemented with any suitable circuits using any suitable technique or combination of techniques. For example, in some embodiments, voltage regulator 104 can be implemented using a DC-to-DC converter. One implementation consideration is that voltage regulator 104 may utilize a voltage reference in order to function correctly. In some cases, such a voltage reference may be internal to voltage regulator 104. In other cases, such a voltage reference may be external to voltage regulator 104, in which case the voltage reference might be shut down in the same manner as voltage regulator 104. In some embodiments, voltage regulator 104 can be controlled to maintain Vs at different levels based on the mode of microcontroller 100 (e.g., as described below). As shown in FIG. 1, in some embodiments, voltage regulator 104 can receive an external supply voltage Vcc, and use Vcc to maintain Vs on internal voltage supply line 106 at a particular internal supply voltage, which can be used to supply an operating voltage to other components of microcontroller 100 (e.g., logic and/or memory 102). Additionally, in some embodiments, an external decoupling capacitor 108 can be connected between internal voltage supply line 106 and ground.
In some embodiments, decoupling capacitor 108 can be implemented using any suitable technique or combination of techniques. For example, decoupling capacitor 108 can be implemented using one or more capacitor components connected in series and/or parallel. The capacitance used to implement decoupling capacitor 108 may be, for example, in the range of 100 nF to 10 μF. The higher the capacitance value, the more efficient the power consumption may be.

In some embodiments, a voltage monitor 110 can compare Vs on internal voltage supply line 106 to a value based on the current state of microcontroller 100, and can indicate a status of Vs to a power management unit 112. In some embodiments, voltage monitor 110 can be implemented with any suitable circuits using any suitable technique or combination of techniques. Voltage monitor 110 may be implemented, for example, with a voltage comparator or a brown-out detector. In some embodiments, operation of voltage monitor 110 can be powered using the voltage on internal voltage supply line 106. In other embodiments, voltage monitor 110 may be powered by VCC or another external voltage through a supply voltage input to microcontroller 100. In some embodiments, voltage monitor 110 can provide feedback to voltage regulator 104 (e.g., during an active mode) to facilitate maintenance of a particular voltage on internal voltage supply line 106. Accordingly, a safe voltage level is applied to processing logic and memories. When a safe voltage is unavailable, a global reset may be performed. The feedback may be passed from voltage monitor 110 to voltage regulator 104 through control logic inside the power management unit 112.

In some embodiments, power management unit 112 can control operation of voltage regulator 104 based on a mode (e.g., active or inactive) of microcontroller 100 and/or a state of microcontroller 100 during a particular mode.
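As a rough, illustrative calculation (not taken from the specification), the time that decoupling capacitor 108 can hold the supply between two levels while the regulator is off follows from t = C·ΔV/I for a constant-current discharge; the capacitance range comes from the text above, while the 1 μA leakage current and the voltage levels are assumed figures.

```python
# Rough estimate of how long decoupling capacitor 108 can hold the
# internal supply between a boost level and a trigger level while the
# regulator is off.  The 1 uA sleep-mode leakage figure and the voltage
# levels are assumptions for illustration; the capacitance range
# (100 nF to 10 uF) follows the text.

def holdup_time_s(capacitance_f, v_boost, v_trig, leakage_a):
    """t = C * dV / I for a constant-current discharge."""
    return capacitance_f * (v_boost - v_trig) / leakage_a

for c in (100e-9, 1e-6, 10e-6):  # 100 nF, 1 uF, 10 uF
    t = holdup_time_s(c, v_boost=1.1, v_trig=1.0, leakage_a=1e-6)
    print(f"C = {c * 1e6:g} uF -> hold-up ~ {t * 1e3:g} ms")
```

This is consistent with the observation in the text that a larger capacitance makes power consumption more efficient: the regulator and monitor can stay off longer per cycle.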
In some embodiments, power management unit 112 can be implemented with any suitable circuits using any suitable technique or combination of techniques. For example, power management unit 112 may be implemented with logic circuits that receive commands from the CPU of microcontroller 100. Power management unit 112 may translate these commands to logic signals that control, or receive inputs from, voltage regulator 104 and voltage monitor 110. Power management unit 112 may be developed with a hardware description language and physically translated to logic gates by a synthesis tool. In some embodiments, power management unit 112 can receive an indication from voltage monitor 110 that is indicative of the voltage on internal voltage supply line 106. For example, as described below in connection with FIGS. 2 and 3, voltage monitor 110 can output a value to power management unit 112 to indicate whether Vs has reached a boost voltage, has fallen below a trigger voltage, etc. In some embodiments, operation of power management unit 112 can be powered using the voltage on internal voltage supply line 106, though it may be supplied by VCC as well. In some embodiments, power management unit 112 can include any other suitable circuits, such as a counter (e.g., as described below in connection with FIGS. 4 and 5), a finite state machine that can be used to store a current state of microcontroller 100 and/or power management unit 112, etc.

FIG. 2 shows an example 200 of voltage on an internal supply line (e.g., internal supply line 106) during different operational modes and states of a microcontroller (e.g., microcontroller 100) in accordance with some embodiments of the disclosed subject matter. As shown in FIG.
2, during an active mode of microcontroller 100 (e.g., from t0 to t1), voltage on internal supply line 106 can be maintained by voltage regulator 104 at a nominal operating voltage Vnom. In some embodiments, Vnom can be any suitable value that can be used to power operation of components of microcontroller 100 during an active mode. For example, Vnom can be 0.9 V, 1 V, 1.2 V, etc. In some embodiments, power management unit 112 can provide a control signal to voltage regulator 104 during active mode of microcontroller 100 to instruct voltage regulator 104 to maintain Vs at Vnom. In some embodiments, control of voltage regulator 104 can be implemented using any suitable technique or combination of techniques. For example, the control signal can be provided constantly during operation in active mode (e.g., a high value of the control signal provided to voltage regulator 104 can cause voltage regulator 104 to maintain Vnom). As another example, the control signal can be provided during a state change (e.g., from inactive mode to active mode) and stored by voltage regulator 104 (e.g., as a state of a state machine).

At t1, microcontroller 100 can enter an inactive mode, and power management unit 112 can transition to a boost state during which Vs is raised to a boost level Vboost through control of voltage regulator 104. The value of Vboost may be, for example, Vnom plus 100 mV or 200 mV, or ten to twenty percent above Vnom. In some embodiments, power management unit 112 can provide a second control signal to voltage regulator 104 during the boost state to instruct voltage regulator 104 to increase Vs to Vboost. In some embodiments, the second control signal can be a different value of the control signal provided during active mode and/or can be an additional and/or different control signal provided to voltage regulator 104. In some embodiments, voltage monitor 110 can monitor the value of Vs during the boost state to determine whether it has reached Vboost.
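For concreteness, the example Vboost choices quoted above can be tabulated; Vnom = 1.0 V is an assumed value here, not one mandated by the description.

```python
# Example Vboost choices per the ranges given in the description;
# Vnom = 1.0 V is an assumed value for illustration only.
v_nom = 1.0
candidates = {
    "Vnom + 100 mV": v_nom + 0.100,
    "Vnom + 200 mV": v_nom + 0.200,
    "Vnom + 10%": v_nom * 1.10,
    "Vnom + 20%": v_nom * 1.20,
}
for rule, v in candidates.items():
    print(f"{rule}: {v:.2f} V")
```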
In some embodiments, decoupling capacitor 108 can act as storage for power provided to internal voltage supply line 106 during the boost state.

At t2, when Vs reaches Vboost, voltage monitor 110 can indicate to power management unit 112 that Vboost has been reached, and power management unit 112 can enter a discharge state during which voltage regulator 104 is inhibited from operating, and during which voltage monitor 110 is generally inhibited from operating but is intermittently operated to determine whether Vs has fallen below a threshold value Vtrig. In some embodiments, power management unit 112 can control an operational state of voltage regulator 104 and/or voltage monitor 110 using any suitable control signals. For example, power management unit 112 can control a switch (e.g., a transistor) in voltage regulator 104 (and/or external to voltage regulator 104) to interrupt a path between Vcc and a voltage converter of voltage regulator 104. As another example, power management unit 112 can control a switch (e.g., a transistor) in voltage monitor 110 (and/or external to voltage monitor 110) to interrupt a path between internal voltage supply line 106 and logic in voltage monitor 110 that monitors voltage on internal voltage supply line 106. In some embodiments, inhibiting operation of voltage regulator 104 and/or voltage monitor 110 can decrease the amount of power consumed by microcontroller 100 during the discharge state. As shown in FIG.
2, during the discharge state the voltage on the internal supply line is reduced by leakage current and/or by operation of power management unit 112 and/or other processes that remain active in the inactive state.

After a predetermined period of time (e.g., time period T) has passed after entering the discharge state, at t3 power management unit 112 can enable operation of voltage monitor 110 to determine whether Vs has fallen below a threshold voltage Vtrig that is greater than the minimum voltage Vmin below which the state of logic and/or memory 102 is likely to be lost. In some embodiments, time period T between measurements by voltage monitor 110 can be any suitable length of time. For example, power management unit 112 can enable operation of voltage monitor 110 every 1 millisecond (ms). In some embodiments, power management unit 112 can continue to enable operation of voltage monitor 110 after each period (e.g., at t3, t4, t5, and t6) until voltage monitor 110 indicates that Vs has reached (e.g., fallen to or below) Vtrig. Vmin may be the absolute minimum value at which the logic and memory retention features are maintained, and may be on the order of twenty percent below Vnom. Vtrig may be greater than Vmin and may be on the order of ten percent below Vnom. In some embodiments, Vtrig can be any suitable value between Vboost and Vmin. For example, in some embodiments, Vtrig can be a value halfway between Vboost and Vmin (i.e., Vtrig = (Vboost + Vmin)/2).

At t6, when Vs has fallen below Vtrig, power management unit 112 can enter the boost state, and enable voltage regulator 104 and/or voltage monitor 110 to raise the voltage back to Vboost. In some embodiments, this cycle of alternately entering the boost state and the discharge state based on the value of Vs can continue until microcontroller 100 enters the active state (e.g., at tn), at which point power management unit 112 can control operation of voltage regulator 104 to return Vs to Vnom.
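The alternating boost/discharge cycle of FIG. 2 can be sketched as a small software model. All numeric values here are illustrative assumptions (in millivolts), chosen only to be consistent with the relationships above (Vboost > Vtrig > Vmin, with Vtrig halfway between Vboost and Vmin); the per-period droop stands in for leakage, and in hardware the decisions are made by power management unit 112, not software.

```python
# Minimal software model of the sleep-mode cycle of FIG. 2.  The supply
# droops by a fixed amount each monitoring period T while the regulator
# is off, and is boosted back to Vboost once it has fallen to Vtrig.
# All numbers are illustrative assumptions (in millivolts).

V_BOOST_MV, V_MIN_MV = 1100, 800
V_TRIG_MV = (V_BOOST_MV + V_MIN_MV) // 2   # halfway rule: 950 mV
DROOP_MV_PER_PERIOD = 30                   # assumed leakage droop per T

def simulate_sleep(periods):
    """Return how many boost states occur over the given number of periods."""
    vs = V_BOOST_MV        # a boost state has just completed (t2)
    boosts = 0
    for _ in range(periods):
        vs -= DROOP_MV_PER_PERIOD   # discharge state: regulator inhibited
        assert vs > V_MIN_MV        # the retention floor is never crossed
        if vs <= V_TRIG_MV:         # monitor sample at the period boundary
            vs = V_BOOST_MV         # boost state: regulator re-enabled
            boosts += 1
    return boosts

print(simulate_sleep(20))  # with these numbers: one boost every 5 periods
```

The model only captures the decision structure: between boosts the monitor is woken once per period, compares Vs to Vtrig, and otherwise both regulator and monitor stay off.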
Note that the values of Vnom, Vboost, Vtrig, and Vmin described above are merely examples, and the mechanisms described herein can be used over any suitable range of operating conditions and/or voltages.

FIG. 3 shows an example 300 of a process for managing power consumed by a microcontroller in an inactive mode in accordance with some embodiments of the disclosed subject matter. As shown in FIG. 3, process 300 can begin at 302 with microcontroller 100 operating in active mode with the internal supply voltage maintained at a nominal voltage. For example, at 302, process 300 can be operating as shown in FIG. 2 between times t0 and t1.

At 304, process 300 can receive an instruction indicating that the microcontroller is entering an inactive mode. In some embodiments, process 300 can receive the instruction using any suitable technique or combination of techniques, and the instruction can be received as any suitable input signal. For example, if process 300 is being executed at least in part by power management unit 112, power management unit 112 can receive the instruction from another portion of microcontroller 100 (e.g., from a portion of logic and/or memory 102) and/or from a source external to microcontroller 100 (e.g., via a pin).

At 306, process 300 can raise the internal supply voltage to Vboost. For example, as described above in connection with FIG. 2, process 300 can cause a device executing process 300 (e.g., power management unit 112, microcontroller 100, etc.) to enter a boost state. In a more particular example, power management unit 112 can control operation of voltage regulator 104 and/or voltage monitor 110 to raise the internal supply voltage to Vboost.

At 308, process 300 can inhibit the supply system (e.g., voltage regulator 104 and/or voltage monitor 110) of the microcontroller from operating. For example, as described above in connection with FIG.
2, process 300 can cause a device executing process 300 (e.g., power management unit 112, microcontroller 100, etc.) to enter a discharge state.

At 310, process 300 can determine whether a time period T has elapsed since the voltage monitor was last operational. For example, as described above in connection with FIG. 2, process 300 can determine whether the period T has elapsed since power management unit 112 entered the discharge state (e.g., from time t2 to time t3) and/or whether the time period T has elapsed since the last time voltage monitor 110 was turned on to monitor voltage of internal supply line 106. Power management unit 112 may use a clock internal to microcontroller 100 for calculating time periods.

If time period T has not elapsed ("NO" at 310), process 300 can return to 310 to wait for time period T to elapse. Otherwise, if time period T has elapsed ("YES" at 310), process 300 can move to 312, and enable the voltage monitor (e.g., voltage monitor 110). At 314, process 300 can receive an indication from the voltage monitor indicating whether the voltage should be boosted back to Vboost. For example, at 314, process 300 can receive an indication from voltage monitor 110 indicating whether the voltage is still above Vtrig (i.e., indicating that Vs > Vtrig). If process 300 determines that the voltage is still above Vtrig ("YES" at 314), process 300 can return to 310 to wait for the next time period T to elapse. Otherwise, if process 300 determines that the voltage has fallen below Vtrig ("NO" at 314), process 300 can return to 306 to boost voltage on internal voltage supply line 106 back to Vboost. In some embodiments, another process can be executed in parallel to determine whether the microcontroller is transitioning back to active mode, and the outcome of such a process can supersede process 300 (e.g., as shown in FIG. 2 at time tn).

FIG.
4 shows another example 400 of voltage on an internal supply line (e.g., internal supply line 106) during different operational modes and states of a microcontroller (e.g., microcontroller 100) in accordance with some embodiments of the disclosed subject matter. As shown in FIG. 4, the time period during which the voltage monitor is inhibited from operating is adaptively changed based on the time that elapsed between operations in the boost state. For example, as shown in FIG. 4, during the time from t2 to t6 the voltage monitor is operated four times to measure voltage on the voltage supply line. As described below in connection with FIG. 5, the number of measurements (or periods) that occur between boost states can be used to calculate a new time period T2 (e.g., period 2) that can be used to determine the time between measurements. Adaptively controlling the time period between measurements can, in some embodiments, further reduce the amount of power used during an inactive mode of the microcontroller.

Note that the slope of the voltage in the discharge state can change over time (e.g., based on environmental conditions), and the period can be calculated such that it is unlikely that the voltage on the internal voltage supply line will fall below Vmin during a time period in which the voltage monitor is being inhibited from operating. For example, in some embodiments, the leakage in a microcontroller in the inactive mode can increase with increases in temperature, but increasing the leakage by a factor large enough to reduce the voltage on the supply line below Vmin between measurements by the voltage monitor may require a very rapid increase in temperature. In a more particular example, if the leakage increased by a factor of four with an increase of about 30 degrees Centigrade, the increase in temperature would need to happen in a very short time (e.g., on the order of 100 ms given an initial time period of 1 ms) in order to drop the voltage below Vmin.

FIG.
5 shows an example 500 of a process for managing power consumed by a microcontroller in an inactive mode using an adaptively calculated time period in accordance with some embodiments of the disclosed subject matter. In some embodiments, 502-508 of process 500 can be similar to 302-308 of process 300.

At 510, process 500 can determine whether a time period T1 has elapsed since the voltage monitor was last operational. For example, as described above in connection with FIGS. 2 and 4, process 500 can determine whether the time period T1 has elapsed since power management unit 112 entered the discharge state (e.g., from time t2 to time t3) and/or whether the time period T1 has elapsed since the last time voltage monitor 110 was turned on to monitor voltage of internal supply line 106.

If time period T1 has not elapsed ("NO" at 510), process 500 can return to 510 to wait for time period T1 to elapse. Otherwise, if time period T1 has elapsed ("YES" at 510), process 500 can move to 512, and enable the voltage monitor (e.g., voltage monitor 110). At 514, process 500 can receive an indication from the voltage monitor indicating whether the voltage should be boosted back to Vboost. For example, at 514, process 500 can receive an indication from voltage monitor 110 indicating whether the voltage is still above Vtrig (i.e., indicating that Vs > Vtrig). If process 500 determines that the voltage is still above Vtrig ("YES" at 514), process 500 can move to 516 to add the time period to a count of the total time of periods elapsed (and/or the number of measurements that have been taken) since the voltage was raised to Vboost. For example, a counter can start at a value of zero, and in response to each execution of 516 can be increased by the value of time period T1. In a more particular example, for a period of 1 ms, the value of the counter after five periods can be 5 ms.
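The accumulation at 516 can be illustrated directly; the 1 ms period and the five-period count mirror the example just given.

```python
# Counter accumulation at 516: on each monitor sample where Vs is still
# above Vtrig, the period length T1 is added to the running counter.
# The 1 ms period and five periods come from the example in the text.
t1_ms = 1
counter_ms = 0
for _ in range(5):  # five elapsed periods between boosts
    counter_ms += t1_ms
print(counter_ms)   # -> 5, i.e., the counter reads 5 ms
```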
As another more particular example, for a period of 2 ms, the value of the counter after five periods can be 10 ms. Alternatively, the length of the period can be stored, and a counter can increment by one in response to each execution of 516. For example, regardless of the length of the time period, the value of the counter after five periods can be five, and the calculation of a new time period (e.g., as described below in connection with 518) can be based on the stored length of the time period and the value of the counter. In some embodiments, the count can be reset when the discharge state is entered (and/or at any other time), and/or any other technique can be used to determine the total time of periods that have elapsed (and/or measurements that have been taken) since the boost state (e.g., based on a difference between the value of the counter when the discharge state was entered and the value when the boost state is subsequently entered). Process 500 can then return to 510 to wait for another time period T1 to elapse.

Otherwise, if process 500 determines that the voltage has fallen below Vtrig ("NO" at 514), process 500 can move to 518 and calculate a new period T2 based on the value of the counter incremented at 516 to track the number of periods that elapsed between boost states. In some embodiments, the new period can be calculated using any suitable technique or combination of techniques. For example, if the counter stores the total time elapsed between boosts (i.e., the counter adds the length of the period to the value of the counter), the new period can be calculated by dividing the value of the counter by two (or any other suitable value). In a more particular example, if the time period T1 is 1 ms, and four time periods elapse between boosts, the new time period can be calculated as 4 ms/2 = 2 ms.
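The recalculation at 518 can be sketched as follows; the halving rule and the worked numbers (1 ms × 4 periods → 2 ms) come from the examples in the text, while the function name is hypothetical.

```python
# New monitoring period per the halving rule described at 518: the
# counter holds the total time that elapsed between boosts, and the
# next period is that total divided by two.

def next_period_ms(period_ms, periods_elapsed):
    total_ms = period_ms * periods_elapsed  # counter value (total elapsed time)
    return total_ms / 2

print(next_period_ms(1, 4))  # 1 ms period, 4 periods -> new period 2 ms
print(next_period_ms(2, 1))  # 2 ms period, 1 period  -> new period 1 ms
```

Halving moves the sampling rate toward one or two monitor wake-ups per discharge cycle, which is the power-saving goal of the adaptation.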
In another more particular example, if the time period T1 is 2 ms, and one time period elapsed between boosts, the new time period can be calculated as 2 ms/2 = 1 ms. Alternatively, if the counter stores the number of periods that have elapsed between boosts and the time period is stored separately, the new period can be calculated by multiplying the value of the counter by the length of the time period and dividing by two. Process 500 can return to 506 to boost the voltage on the internal voltage supply line back to Vboost. As described above in connection with process 300 and FIG. 3, another process can be executed in parallel to determine whether the microcontroller is transitioning back to an active state, and the outcome of such a process can supersede process 500. In some embodiments, the initial time period T1 can be set to a default value each time the microcontroller enters the inactive state. Alternatively, the initial time period T1 can be set to the value calculated at 518 the last time the microcontroller was in the inactive state.

Note that the number of time periods that elapse between boost states can vary based on the particular implementation of microcontroller 100, and the numbers of periods shown in FIGS. 2 and 4 are merely for ease of explanation.

In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory.
For example, non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as RAM, Flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.

It should be noted that, as used herein, the term "mechanism" can encompass hardware, software, firmware, or any suitable combination thereof.

It should be understood that the above described steps of the processes of FIGS. 3 and 5 can be executed or performed in any order or sequence not limited to the order and sequence shown and described in the figures. Also, some of the above steps of the processes of FIGS. 3 and 5 can be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times.

Although the invention has been described and illustrated in the foregoing illustrative embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is limited only by the claims that follow. Features of the disclosed embodiments can be combined and rearranged in various ways.
Improved designs for a capacitor, and particularly the sensing and reference capacitors used in column sample-and-hold (CSH) circuitry in a CMOS imager, are disclosed that minimize layout area. In one embodiment, an additional plate layer (e.g., formed in metal 1) is provided above the traditional poly 2-poly 1 capacitor, which additional plate is shorted to the traditional poly 1 bottom plate. This adds an additional area capacitance (Ca1) which is additive to the capacitance formed by the poly 2-poly 1 capacitor (Cp) to increase the total capacitance, which thus allows the capacitor to be made smaller in layout area. In another embodiment, an additional piece of metal 1 contacts the poly 2 top capacitor plate, such that a sidewall capacitance (Csw1) is defined between the sidewalls of the metal 1 pieces, which is again additive to the total capacitance. These sidewalls can be interdigitated to increase the area of the sidewall capacitance. In yet another embodiment, yet another plate layer (e.g., metal 2) is added above the metal 1 plate, which adds yet another area (Ca2) and sidewall (Csw2) capacitance to the total capacitance.
What is claimed is:

1. An imager integrated circuit, comprising: an array of pixels arranged in a plurality of columns and rows; a plurality of sensing circuits at at least the top of the array, wherein each sensing circuit comprises a plurality of pairs of sensing and reference capacitors, wherein the capacitors comprise: first and second capacitor plates forming a base capacitance; first and second conductive pieces formed in a first conductive layer over the capacitor plates, the first piece contacting the first capacitor plate and the second piece contacting the second capacitor plate, wherein the first conductive piece forms a first additional plate substantially overlying the second capacitor plate to give rise to a first area capacitance additive to the base capacitance.

2. The imager integrated circuit of claim 1, wherein the first and second capacitor plates are respectively formed in poly 1 and poly 2.

3. The imager integrated circuit of claim 2, wherein the first conductive layer comprises a metal 1 layer.

4. The imager integrated circuit of claim 1, wherein the first and second conductive pieces are separated by a first spacing to give rise to a first sidewall capacitance additive to the base capacitance.

5. The imager integrated circuit of claim 4, wherein the first spacing comprises a minimum spacing for conductors formed in the first conductive layer.

6. The imager integrated circuit of claim 4, wherein the first and second conductive pieces are interdigitated to maximize the first sidewall capacitance.

7.
The imager integrated circuit of claim 1, further comprising: third and fourth conductive pieces formed in a second conductive layer over the first and second conductive pieces, the third piece contacting the second piece and the fourth piece contacting the first piece, wherein the third conductive piece forms a second additional plate substantially overlying the first conductive piece to give rise to a second area capacitance additive to the base capacitance. 8. The imager integrated circuit of claim 7, wherein the second conductive layer comprises a metal 2 layer. 9. The imager integrated circuit of claim 7, wherein the third and fourth conductive pieces are separated by a second spacing to give rise to a second sidewall capacitance additive to the base capacitance. 10. The imager integrated circuit of claim 9, wherein the second spacing comprises a minimum spacing for conductors formed in the second conductive layer. 11. The imager integrated circuit of claim 9, wherein the third and fourth conductive pieces are interdigitized to maximize the second sidewall capacitance. 12. An imager integrated circuit, comprising: an array of pixels arranged in a plurality of columns and rows; a plurality of sensing circuits at at least the top of the array, wherein each sensing circuit comprises a plurality of pairs of sensing and reference capacitors, wherein the capacitors comprise: first and second capacitor plates forming a base capacitance; first and second conductive pieces formed in a first conductive layer over the capacitor plates, the first piece contacting the first capacitor plate and the second piece contacting the second capacitor plate, wherein the first and second conductive pieces are separated by a first spacing to give rise to a first sidewall capacitance additive to the base capacitance. 13. The imager integrated circuit of claim 12, wherein the first and second capacitor plates are respectively formed in poly 1 and poly 2. 14. 
The imager integrated circuit of claim 13, wherein the first conductive layer comprises a metal 1 layer. 15. The imager integrated circuit of claim 12, wherein the first spacing comprises a minimum spacing for conductors formed in the first conductive layer. 16. The imager integrated circuit of claim 15, wherein the first and second conductive pieces are interdigitized to maximize the first sidewall capacitance. 17. The imager integrated circuit of claim 12, further comprising: third and fourth conductive pieces formed in a second conductive layer over the first and second conductive pieces, the third piece contacting the second piece and the fourth piece contacting the first piece, wherein the third and fourth conductive pieces are separated by a second spacing to give rise to a second sidewall capacitance additive to the base capacitance. 18. The imager integrated circuit of claim 17, wherein the second conductive layer comprises a metal 2 layer. 19. The imager integrated circuit of claim 17, wherein the second spacing comprises a minimum spacing for conductors formed in the second conductive layer. 20. The imager integrated circuit of claim 19, wherein the third and fourth conductive pieces are interdigitized to maximize the second sidewall capacitance. 21. A capacitor structure for an integrated circuit, comprising: first and second capacitor plates forming a base capacitance; first and second conductive pieces formed in a first conductive layer over the capacitor plates, the first piece contacting the first capacitor plate and the second piece contacting the second capacitor plate, wherein the first conductive piece forms a first additional plate substantially overlying the second capacitor plate to give rise to a first area capacitance additive to the base capacitance, and wherein the first and second conductive pieces are separated by a first spacing to give rise to a first sidewall capacitance additive to the base capacitance. 22. 
The capacitor structure of claim 21, wherein the first and second capacitor plates are respectively formed in poly 1 and poly 2. 23. The capacitor structure of claim 22, wherein the first conductive layer comprises a metal 1 layer. 24. The capacitor structure of claim 21, wherein the first spacing comprises a minimum spacing for conductors formed in the first conductive layer. 25. The capacitor structure of claim 24, wherein the first and second conductive pieces are interdigitized to maximize the first sidewall capacitance. 26. The capacitor structure of claim 21, further comprising: third and fourth conductive pieces formed in a second conductive layer over the first and second conductive pieces, the third piece contacting the second piece and the fourth piece contacting the first piece, wherein the third conductive piece forms a second additional plate substantially overlying the first conductive piece to give rise to a second area capacitance additive to the base capacitance. 27. The capacitor structure of claim 26, wherein the second conductive layer comprises a metal 2 layer. 28. The capacitor structure of claim 26, wherein the third and fourth conductive pieces are separated by a second spacing to give rise to a second sidewall capacitance additive to the base capacitance. 29. The capacitor structure of claim 28, wherein the second spacing comprises a minimum spacing for conductors formed in the second conductive layer. 30. The capacitor structure of claim 28, wherein the third and fourth conductive pieces are interdigitized to maximize the second sidewall capacitance. 31. The capacitor structure of claim 21, wherein the capacitor structure comprises either or both of the sensing or reference capacitors used in the column sample-and-hold circuitry in an imager integrated circuit. 32. 
A capacitor structure for an integrated circuit, comprising: first and second capacitor plates forming a base capacitance; first and second conductive pieces formed in a first conductive layer over the capacitor plates, the first piece contacting the first capacitor plate and the second piece contacting the second capacitor plate, wherein the first and second conductive pieces are separated by a first spacing to give rise to a first sidewall capacitance additive to the base capacitance. 33. The capacitor structure of claim 32, wherein the first and second capacitor plates are respectively formed in poly 1 and poly 2. 34. The capacitor structure of claim 33, wherein the first conductive layer comprises a metal 1 layer. 35. The capacitor structure of claim 32, wherein the first spacing comprises a minimum spacing for conductors formed in the first conductive layer. 36. The capacitor structure of claim 35, wherein the first and second conductive pieces are interdigitized to maximize the first sidewall capacitance. 37. The capacitor structure of claim 32, further comprising: third and fourth conductive pieces formed in a second conductive layer over the first and second conductive pieces, the third piece contacting the second piece and the fourth piece contacting the first piece, wherein the third and fourth conductive pieces are separated by a second spacing to give rise to a second sidewall capacitance additive to the base capacitance. 38. The capacitor structure of claim 37, wherein the second conductive layer comprises a metal 2 layer. 39. The capacitor structure of claim 37, wherein the second spacing comprises a minimum spacing for conductors formed in the second conductive layer. 40. The capacitor structure of claim 39, wherein the third and fourth conductive pieces are interdigitized to maximize the second sidewall capacitance. 41. 
The capacitor structure of claim 32, wherein the capacitor structure comprises either or both of the sensing or reference capacitors used in the column sample-and-hold circuitry in an imager integrated circuit. 42. A capacitor structure for an integrated circuit, comprising: first and second capacitor plates forming a base capacitance; first and second conductive pieces formed in a first conductive layer over the capacitor plates, the first piece contacting the first capacitor plate and the second piece contacting the second capacitor plate, wherein the first conductive piece forms a first additional plate substantially overlying the second capacitor plate to give rise to a first area capacitance additive to the base capacitance, and third and fourth conductive pieces formed in a second conductive layer over the first and second conductive pieces, the third piece contacting the second piece and the fourth piece contacting the first piece, wherein the third conductive piece forms a second additional plate substantially overlying the first conductive piece to give rise to a second area capacitance additive to the base capacitance. 43. The capacitor structure of claim 42, wherein the first and second capacitor plates are respectively formed in poly 1 and poly 2. 44. The capacitor structure of claim 43, wherein the first conductive layer comprises a metal 1 layer, and the second conductive layer comprises a metal 2 layer. 45. The capacitor structure of claim 42, wherein the first and second conductive pieces are separated by a first spacing to give rise to a first sidewall capacitance additive to the base capacitance, and wherein the third and fourth conductive pieces are separated by a second spacing to give rise to a second sidewall capacitance additive to the base capacitance. 46. 
The capacitor structure of claim 45, wherein the first spacing comprises a minimum spacing for conductors formed in the first conductive layer, and wherein the second spacing comprises a minimum spacing for conductors formed in the second conductive layer. 47. The capacitor structure of claim 45, wherein the first and second conductive pieces are interdigitized to maximize the first sidewall capacitance, and wherein the third and fourth conductive pieces are interdigitized to maximize the second sidewall capacitance. 48. The capacitor structure of claim 42, wherein the capacitor structure comprises either or both of the sensing or reference capacitors used in the column sample-and-hold circuitry in an imager integrated circuit. 49. A capacitor structure for an integrated circuit, comprising: first and second capacitor plates forming a base capacitance; first and second conductive pieces formed in a first conductive layer over the capacitor plates, the first piece contacting the first capacitor plate and the second piece contacting the second capacitor plate, wherein the first and second conductive pieces are separated by a first spacing to give rise to a first sidewall capacitance additive to the base capacitance, and third and fourth conductive pieces formed in a second conductive layer over the first and second conductive pieces, the third piece contacting the second piece and the fourth piece contacting the first piece, wherein the third and fourth conductive pieces are separated by a second spacing to give rise to a second sidewall capacitance additive to the base capacitance. 50. The capacitor structure of claim 49, wherein the first and second capacitor plates are respectively formed in poly 1 and poly 2. 51. The capacitor structure of claim 50, wherein the first conductive layer comprises a metal 1 layer, and the second conductive layer comprises a metal 2 layer. 52. 
The capacitor structure of claim 49, wherein the first spacing comprises a minimum spacing for conductors formed in the first conductive layer, and wherein the second spacing comprises a minimum spacing for conductors formed in the second conductive layer. 53. The capacitor structure of claim 49, wherein the first and second conductive pieces are interdigitized to maximize the first sidewall capacitance, and wherein the third and fourth conductive pieces are interdigitized to maximize the second sidewall capacitance. 54. The capacitor structure of claim 49, wherein the capacitor structure comprises either or both of the sensing or reference capacitors used in the column sample-and-hold circuitry in an imager integrated circuit.
IMPROVED SENSING CAPACITANCE IN COLUMN SAMPLE AND HOLD CIRCUITRY IN A CMOS IMAGER AND IMPROVED CAPACITOR DESIGN

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Patent Application Serial No. 11/494,351, filed July 25, 2006, which is incorporated herein by reference in its entirety.

[0002] This application is also related to U.S. Patent Application Serial No. 11/494,359, entitled "Reduction in Size of Column Sample and Hold Circuitry in a CMOS Imager," filed July 25, 2006, which is incorporated by reference in its entirety.

FIELD OF THE INVENTION

[0003] Embodiments of the invention relate to increasing the capacitance of the sensing capacitors in the column sample-and-hold circuitry in a CMOS imager, and relate more generally to an improved structure for a capacitor.

BACKGROUND

[0004] Complementary Metal-Oxide-Semiconductor (CMOS) imagers are gaining popularity in the marketplace. As one skilled in the art understands, CMOS imagers are used to sense light and to provide an electronic representation of the sensed image. Accordingly, such devices are usable in digital cameras, to cite just one example.

[0005] Figure 1 shows an example of the basic architecture of a CMOS imager 10 integrated circuit. As can be seen, the CMOS imager 10 includes an array 12 of photosensitive pixels 8 arranged in rows and columns. Read out of a given pixel 8 requires the activation of a given row and column, which is the function of the row decoder circuitry 14 and the column decoder circuitry 16, which in turn are responsive to a row address and column address input into the imager 10. The accessed pixel 8 routes a photo-induced charge from the pixel 8 to its associated column, which meets with column sample-and-hold (CSH) circuitry 18. In Figure 1, the CSH circuitry 18 is shown at the bottom edge of the pixel array 12 (a bottom-only architecture), although it may also appear at the top and bottom of the array 12 as will be discussed further below. 
Briefly, the CSH circuit 18 samples the accessed pixel's charge via a sampling capacitor and a reference capacitor (more on this below) to produce signals "sig" and "rst," which are input to an amplifier 20. The amplifier 20 in turn produces analog signals indicative of the sensed charge, and provides them to an analog-to-digital converter (ADC) circuit 22 to provide a digital representation of the intensity of the light impingent on the pixel 8 being read.

[0006] Figure 2 shows further details of the pixel array 12 and of the sensing circuitry, and in particular the CSH circuitry 18. As can be seen, each pixel 8 comprises a photodiode 11, which induces a charge which scales in magnitude with the intensity of the light impingent upon the photodiode. This induced charge drives a transfer gate 13 to route some amount of the power supply voltage Vcc onto a given column 15, assuming that the access transistor 17 for the row of the pixel 8 in question has been activated by the row decoders 14. Although not shown, one skilled in the art will realize that each pixel 8 may comprise a reset transistor as well.

[0007] The pixel-induced charge is thus routed from the column 15 to the CSH circuitry 18, where it is coupled to two capacitors, called the sampling capacitor, Cs 32, and the reference capacitor, Cr 33. As each column has its own dedicated sampling and reference capacitors 32 and 33, they are denoted in conjunction with the column they support: i.e., the capacitors for column 0 are denoted as C0s and C0r. While the actual mechanics for using the sensing and reference capacitors 32 and 33 to sense the induced charge on the pixels 8 are well known and not directly important to embodiments of the invention, they are briefly explained here. Essentially, a sample signal ("samp sig") is sent from the imager 10's control unit (not shown) to close one of the transistors 19 to move the charge from the column 15 onto the sampling capacitor 32 Cxs. 
Later in the sensing cycle, the other of the transistors 19 is closed to move charge from the column 15 to the reference capacitor 33 Cxr, which occurs in conjunction with resetting of the pixel. This provides a reference level of charge which is essentially used to normalize the signal charge. The sampled charge on Cxs and the reference charge on Cxr are then passed by transistors 21 under control of a column decoder 16 at an appropriate time onto signal lines "sig" and "rst," which are in turn passed to the amplifier 20 to perform the normalization, and ultimately to the ADC 22, where the magnitude of the normalized sensed charge is digitized.

[0008] Further details concerning the design and operation of CMOS imagers can be found at http://www.olympusmicro.com/primer/digitalimaging/cmosimagesensors.html, a copy of which is submitted in an Information Disclosure Statement filed with this application, and which is hereby incorporated by reference in its entirety.

[0009] Figure 3 shows a typical layout of the sampling and reference capacitors 32 and 33 in conjunction with the pixel array 12, and Figure 4 shows the layout of the capacitors 32 and 33 in more detail, including the connections with the columns 15 and the transistors 19 and 21 (see Fig. 2). In the embodiment shown, the capacitors 32 and 33 are positioned on both the top and bottom of the array 12 (a top-bottom architecture). So arranged, the top sets of capacitors 32t and 33t service the even-numbered columns, while the bottom sets of capacitors 32b and 33b service the odd-numbered columns.

[0010] The sensing and reference capacitors 32 and 33 in this embodiment are formed from two different layers of polycrystalline silicon ("poly 1," "poly 2"), and as best shown in Figure 4, the poly 1 plate 41 is formed with a slightly larger area to allow contact 44 to be easily made from the overlying metal 1 layer 43 to the bottom capacitor plate. 
(Note that this sizing difference between the poly 1 and poly 2 plates of the capacitors 32 and 33 is in reality quite small, and that the difference is greatly exaggerated in the Figures.) As one skilled in the art of semiconductor processing will understand, a dielectric layer (such as a silicon oxide or silicon nitride) intervenes between the two capacitor plates 41 and 42.

[0011] Although the layouts of Figures 3 and 4 are not drawn to scale, one of skill in the art will appreciate that the CSH circuitry 18 takes up significant layout space on the imager integrated circuit. This is primarily due to the size of the sampling and reference capacitors 32 and 33. For proper sensing, it is simply the case that the capacitance of these capacitors needs to be quite large (perhaps 1.2 pF apiece). As a result, these capacitors 32 and 33 are made large in area to maximize their capacitance. Thus, even when the sampling and reference capacitors 32 and 33 are split between the top and bottom of the array 12 as shown in Figure 3, the result is that the CSH circuitry 18 is quite long in what is referred to herein as the "column height" (CH) of the CSH circuitry 18. As can be seen in Figure 3, this column height CH is dominated by the height h of each of the sampling and reference capacitors 32 and 33.

[0012] In any event, the column height of the CSH circuitry 18 in CMOS imagers is a significant issue, and reduction of the height is greatly desired. 
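To make the layout burden concrete, the parallel-plate relation C = ε·A/t can be evaluated for the roughly 1.2 pF target mentioned above. The following sketch is purely illustrative: the oxide permittivity and inter-poly dielectric thickness are assumed values for the sake of example, not figures taken from this specification.

```python
# Illustrative estimate of the plate area a ~1.2 pF poly 2-poly 1
# capacitor requires.  The process values (oxide permittivity,
# inter-poly dielectric thickness) are assumptions, not from the text.
EPS0 = 8.854e-12            # vacuum permittivity, F/m
EPS_OX = 3.9 * EPS0         # silicon oxide permittivity (assumed dielectric)
T_DIELECTRIC = 40e-9        # assumed inter-poly dielectric thickness, m
C_TARGET = 1.2e-12          # ~1.2 pF target from the text, F

# C = eps * A / t  =>  A = C * t / eps
area_m2 = C_TARGET * T_DIELECTRIC / EPS_OX
area_um2 = area_m2 * 1e12   # convert m^2 to um^2
print(f"required plate area: {area_um2:.0f} um^2")
```

With these assumed numbers the plates alone need on the order of a thousand square microns per capacitor, which is why the capacitor height h comes to dominate the column height CH.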
Without schemes to reduce this height, further miniaturization of these devices (which ultimately increases their profitability) will become increasingly difficult.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] Embodiments of the inventive aspects of this disclosure will be best understood with reference to the following detailed description, when read in conjunction with the accompanying drawings, in which:

[0014] Figure 1 illustrates the basic circuit blocks in a CMOS imager integrated circuit.

[0015] Figure 2 illustrates the circuit schematic for the pixel array and column sample and hold (CSH) circuitry for the CMOS imager of Figure 1.

[0016] Figure 3 illustrates the layout of the sampling and reference capacitors in the CSH circuit in accordance with the prior art.

[0017] Figure 4 illustrates the layout of the sampling and reference capacitors in more detail than is shown in Figure 3, including the connections with the columns and the transistors in the CSH circuitry.

[0018] Figure 5 illustrates a first embodiment of an improved capacitor structure employing additional metal layer pieces to provide an additional area capacitance and a sidewall capacitance to the standard capacitor structure of the prior art.

[0019] Figure 6 illustrates the nature of the sidewall capacitance of Figure 5.

[0020] Figure 7 illustrates how the sidewall capacitance can be increased through increasing the length of the sidewalls by interdigitizing the metal pieces. 
[0021] Figure 8 illustrates yet another embodiment of an improved capacitor structure employing additional metal layer pieces to provide an additional area capacitance and an additional sidewall capacitance to the capacitor structure of Figure 5.

[0022] Figure 9 illustrates how the additional sidewall capacitance can be increased through increasing the length of the sidewalls by interdigitizing the additional metal pieces.

DETAILED DESCRIPTION

[0023] An improved design for the sensing and reference capacitors used in the column sample-and-hold (CSH) circuitry in a CMOS imager is disclosed. The improved design for the capacitors allows the same capacitance to be maintained, but using a smaller layout area, which allows the column height (CH) of the capacitors to be reduced.

[0024] In one embodiment, an additional plate layer (e.g., formed in metal 1) is provided above the traditional poly 2 top plate of the sensing and reference capacitors, which additional plate is shorted to the traditional poly 1 bottom plate. This adds an additional area capacitance, as defined by the area of overlap between the poly 2 and the metal 1 (Ca1), which is additive to the base capacitance formed by the poly 2-poly 1 capacitor (Cp). Because the total capacitance is increased, the capacitor can be made smaller in layout area. In another embodiment, an additional piece of metal 1 contacts the poly 2 top capacitor plate, such that a sidewall capacitance (Csw1) is defined between the sidewalls of the metal 1 pieces. This sidewall capacitance can be increased by increasing the length of the sidewalls, which can be accomplished by interdigitizing the metal 1 pieces to provide serpentined metal 1 sidewalls. 
This additional sidewall capacitance is also additive to the total capacitance, which allows for even further reduction in the layout area of the capacitors.

[0025] In yet another embodiment, yet another plate layer (e.g., metal 2) is added above the metal 1 plate and coupled to the poly 2 top plate (via the metal 1 layer), such that an additional area capacitance (Ca2) forms between the metal 2 and metal 1 plates which is additive to the total capacitance. Additionally, the metal 2 pieces can be brought into proximity to give rise to another sidewall capacitance, which is again additive to the total capacitance. These metal 2 pieces can additionally be interdigitized to increase this second sidewall capacitance.

[0026] Figure 5 shows a first embodiment 50 of the improved capacitor structure in both a top-down layout view and a cross-sectional view. The illustrated capacitor structure 50 can be used to form either or both of the sensing capacitor 32 and/or the reference capacitor 33 in the CSH circuitry 18. Because this exemplary layout can be the same for both, only one capacitor is illustrated. Having said this, one skilled in the art will realize that other conductors not illustrated would be used to couple the capacitors 32 or 33 appropriately for use in CSH circuitry, such as is illustrated in Figure 4.

[0027] As can be seen, a first piece 51 of a metal 1 layer 43 makes contact to the poly 1 layer 41 of the capacitor, and a second piece 52 of the metal 1 layer 43 makes contact to the poly 2 layer 42 of the capacitor. These metal 1 pieces 51 and 52 can be used to connect to the CSH circuitry transistors 19 and 21 (see Fig. 4) and/or to the other capacitor 32 or 33. However, as is different from the prior art layout shown in Figure 4, the metal piece 51 has been formed with a substantial surface area, and essentially forms a plate. 
As is shown, and as is preferred, the metal piece 51 is maximized in its surface area, and in particular is maximized to cover as much of the poly 2 as possible. In this regard, one skilled in the art will recognize that the Figures are not drawn to scale, to simplify illustration of aspects of the invention.

[0028] Because the metal piece 51 is tied by metal-to-poly contacts 44 to the poly 1, and because of its substantial surface area coverage of the poly 2, a second capacitance results, Ca1, which scales proportionally to the effective area (i.e., the overlap, Aeff) of the metal piece 51 and the poly 2 plate 42, and scales inversely proportionally to the thickness (t) of the dielectric between these layers (i.e., Ca1 = ε * Aeff / t). (As one skilled in the art understands, such a dielectric between the metal 1 layer 43 and the poly 2 layer 42 usually comprises silicon oxide or silicon nitride.) The additional capacitance Ca1 provided by this layout is in parallel with the base capacitance, Cp, formed by the two poly plates as in the prior art. Due to the parallel configuration, these capacitances are additive, and thus the total capacitance for the improved capacitor 50, Ctot, equals Ca1 + Cp. The result is therefore a higher total capacitance than that exhibited by the purely poly-based capacitors of the prior art. Or, viewed differently, the improved capacitor design can provide the same capacitance as does the design of the prior art, but with a smaller layout area. The result is that the sensing capacitor 32 and the reference capacitor 33 can be made with a smaller column height (CH; see Fig. 3), which yields a more compact CSH circuitry 18 layout, and which allows for the fabrication of a smaller imager integrated circuit and its associated benefits (improved yield, lower manufacturing costs, etc.).

[0029] Maximizing the surface area of the metal piece 51 has other benefits which can still further increase the capacitance of the improved capacitor 50. 
As shown in Figure 5, when the metal piece 51 is maximized in its surface area above the poly 2 layer 42, it is brought into close proximity to the metal piece 52 which contacts the poly 2 layer 42. In fact, design rules usually specify a minimum spacing λ between conductors in the metal 1 layer 43. As shown in Figure 6, in modern-day processes this spacing λ can be quite small (approaching 0.1 microns), and can be significantly smaller than the thickness, H, of the metal 1 layer 43 itself (on the order of 0.3 microns). This close proximity of the sidewalls of metal pieces 51 and 52 gives rise to yet another capacitance, called the sidewall capacitance, Csw1. This lateral sidewall capacitance, Csw1, scales in proportion to the area of the sidewall, which is the metal pieces' height (H) times their effective length (Leff), and further scales in inverse proportion to the thickness of the dielectric between the pieces (i.e., λ), such that Csw1 = ε * H * Leff / λ. Significantly then, the sidewall capacitance Csw1 scales with the effective length Leff of the spacing between the two pieces 51 and 52, which length is quite significant as shown in Figure 5. Thus, when the sidewall capacitance Csw1 is maximized and made significant through long lengths between the pieces (Leff) and finer spacings (λ), the total capacitance increases further: Ctot = Cp + Ca1 + Csw1. In short, by bringing the metal 1 pieces 51 and 52 into close proximity, the capacitance of the improved capacitor structure can be further increased, allowing the capacitor to be made still smaller while retaining a suitable capacitance value for pixel charge sensing.

[0030] Another embodiment which even further increases the sidewall capacitance is shown in Figure 7. As can be seen, the metal 1 pieces 51 and 52 are formed with interdigitized fingers, creating a space between them which is serpentined. 
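The area and sidewall formulas above (Ca1 = ε·Aeff/t and Csw1 = ε·H·Leff/λ, the latter benefiting from the lengthened Leff a serpentined spacing provides) can be combined in a short numerical sketch. The 0.1 micron spacing and 0.3 micron metal thickness follow the rough figures quoted in the text; every other geometry and process value below is an assumption chosen only to illustrate how the additive terms compare.

```python
# Sketch of the additive capacitances of the Figure 5 structure.
# Formulas follow the text: Ca1 = eps*Aeff/t, Csw1 = eps*H*Leff/lambda.
# All numeric process/geometry values are illustrative assumptions.
EPS0 = 8.854e-12
EPS_OX = 3.9 * EPS0   # assume silicon-oxide dielectrics throughout

def area_cap(a_eff_um2, t_nm):
    """Area capacitance eps*Aeff/t; Aeff in um^2, t in nm; returns farads."""
    return EPS_OX * (a_eff_um2 * 1e-12) / (t_nm * 1e-9)

def sidewall_cap(h_nm, l_eff_um, spacing_nm):
    """Sidewall capacitance eps*H*Leff/lambda; returns farads."""
    return EPS_OX * (h_nm * 1e-9) * (l_eff_um * 1e-6) / (spacing_nm * 1e-9)

cp   = area_cap(1400.0, 40.0)     # base poly 2-poly 1 capacitance
ca1  = area_cap(1300.0, 600.0)    # metal 1 plate over poly 2 (thicker ILD)
csw1 = sidewall_cap(300.0, 150.0, 100.0)  # H~0.3 um, lambda~0.1 um per text
ctot = cp + ca1 + csw1
print(f"Cp={cp*1e15:.0f} fF  Ca1={ca1*1e15:.0f} fF  "
      f"Csw1={csw1*1e15:.0f} fF  Ctot={ctot*1e15:.0f} fF")
```

Even with these modest assumptions the extra terms add several percent to Ctot, capacitance that would otherwise have to come from additional poly plate area.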
Such a serpentine arrangement greatly lengthens the effective length Leff of the sidewall capacitance, and thus makes the sidewall capacitance that much more significant in adjusting the total capacitance Ctot of the improved capacitor 50. In short, by interdigitizing the metal pieces 51 and 52, the total capacitance Ctot can be still further increased, and the improved capacitor 50 can be made that much smaller.

[0031] Still further embodiments can further increase the total capacitance, Ctot, and thus can allow for the fabrication of capacitors which take up even less layout area in the CSH circuitry 18. Another such embodiment is shown in Figure 8. In this embodiment, a metal 2 layer 46 is employed to add further additive capacitances to the total capacitance. Specifically, and as is shown, the metal 2 layer 46 (like the metal 1 layer) is broken into two pieces, 61 and 62. Metal 2 piece 62 is coupled to metal 1 piece 51 via metal 2-to-metal 1 contacts 45, and metal 2 piece 61 is coupled to metal 1 piece 52 via similar contacts 45. So formed, it is noticed that metal 2 piece 61 substantially overlies the entirety of the metal 1 piece 51, and so, like metal piece 51, metal 2 piece 61 comprises a capacitor plate. Likewise, metal 2 piece 62 overlies metal 1 piece 52 (which pieces both might have a smaller surface area when compared with pieces 51 and 61). It is worth noting that a metal 2 layer is typically already present on an imager integrated circuit, and hence implementation of the embodiment of Figure 8 is easily accomplished.

[0032] The configuration of Figure 8 gives rise to yet further additive capacitances that increase the total capacitance of the improved capacitor 50. As before, the poly capacitance is present (Cp), as is the area capacitance formed by the overlap of the metal 1 piece 51 with the poly 2 plate 42 (Ca1). The metal 1 sidewall capacitance is also present (Csw1). 
The addition of the metal 2 layer 46 also provides a second area capacitance (Ca2), which is defined by the overlap of the metal 2 piece 61 with the metal 1 piece 51 (and, to perhaps a lesser extent, the overlap of pieces 62 and 52). Considering just these factors, it is noticed that the total capacitance is yet again increased, such that Ctot = Cp + Ca1 + Csw1 + Ca2, which again allows the capacitor to be made smaller in area, with the benefits already mentioned.

[0033] Depending on the design-specified spacing for the metal 2 conductors, denoted as λ' in Figure 8, a sidewall capacitance of the metal 2 layer 46 (Csw2) can also be employed to still further increase the capacitance of the improved capacitor 50. Such metal 2 spacings λ' are usually specified as slightly larger than the spacings λ employed in the metal 1 layer, but in modern-day processes are still sufficiently small to provide a significant additional capacitance, especially if the effective length Leff of the sidewalls between the two metal 2 pieces 61 and 62 is long. When this additional factor is considered - i.e., when the metal 2 pieces 61 and 62 are brought into close proximity - the total capacitance is again increased: Ctot = Cp + Ca1 + Csw1 + Ca2 + Csw2.

[0034] Additionally, just as with the metal 1 layer 43, the metal 2 layer 46 can be laid out so as to increase the effective length of the metal 2 sidewall capacitance, and thus increase the significance of that capacitance. As shown in Figure 9, the metal 2 pieces 61 and 62 can be interdigitized to increase the effective length of the sidewall capacitance, Csw2, which again gives rise to an effective length which is serpentined between the two pieces. This can occur even if the metal 1 layer is not similarly interdigitized (as it is in Figure 7). However, as shown in Figure 9, to maximize both of the sidewall capacitances Csw1 and Csw2 of the metal layers 43 and 46, it is preferable to interdigitize both layers. 
To make the interdigitized fingers of each of the metal layers 43 and 46 overlap, it may be necessary to equate the spacings [lambda] and [lambda]' of the two metal layers. [0035] In any event, Figure 9, like all of the Figures, represents only an exemplary layout for increasing the capacitance of an otherwise traditional two-plate capacitor. Many other layout schemes can achieve the same benefits, i.e., will maximize the additive capacitances of Ca1, Csw1, Ca2, and Csw2. [0036] Although not shown, it should be realized that the scheme of Figures 8 and 9 can be perpetuated such that additional overlying metal layers (metal 3, metal 4, etc.) can be employed to add further area and sidewall capacitances to the total capacitance. Because many modern-day integrated circuits already contain such additional metal layers, this can be easily accomplished, although consideration should be taken not to interfere with otherwise necessary metal signal routing. Although the disclosed embodiments of an improved capacitor structure are described as particularly useful in reducing the layout area of the sampling and reference capacitors in CSH circuitry of a CMOS imager, it should be understood that such embodiments are not so limited. Indeed, the disclosed capacitor structures can be used as a substitute and improvement for any capacitors in an integrated circuit. It should be understood that the inventive concepts disclosed herein are capable of many modifications. To the extent such modifications fall within the scope of the appended claims and their equivalents, they are intended to be covered by this patent.
Methods and apparatus for opportunistic improvement of Memory Mapped Input/Output (MMIO) request handling (e.g., based on target reporting of space requirements) are described. In one embodiment, logic in a processor may detect one or more bits in a message that is to be transmitted from an input/output (I/O) device. The one or more bits may indicate memory mapped I/O (MMIO) information corresponding to one or more attributes of the I/O device. Other embodiments are also disclosed.
CLAIMS
1. A processor comprising: a first logic to detect one or more bits in a message that is to be transmitted from an input/output (I/O) device, wherein the one or more bits are to indicate memory mapped I/O (MMIO) information corresponding to one or more attributes of the I/O device; a memory to store the MMIO information; and a processor core to access a MMIO region in the memory based on the MMIO information.
2. The processor of claim 1, wherein the one or more attributes are to comprise one or more of: a prefetchable attribute, a write-through type caching attribute, a write type attribute, a speculative access attribute, or a memory ordering model attribute.
3. The processor of claim 2, wherein the write type attribute is to comprise one or more of a combine write attribute, a collapse write attribute, or a merge write attribute.
4. The processor of claim 1, wherein the one or more bits are present in a completion message corresponding to a processor initiated request to the MMIO region.
5. The processor of claim 1, wherein the MMIO information is to comprise data on ordering or data handling requirements of the I/O device.
6. The processor of claim 1, wherein the one or more bits are to be transmitted by the I/O device in response to a request received at the I/O device.
7. The processor of claim 6, wherein the request is to be generated in response to lack of an entry, corresponding to the MMIO region, in the memory.
8. The processor of claim 1, wherein the memory is to comprise one or more of a data cache, a dedicated cache, a Translation Look-aside Buffer, or a Bloom filter.
9. The processor of claim 1, wherein one or more of the first logic, the memory, and the processor core are on a same integrated circuit die.
10. The processor of claim 1, further comprising a plurality of processor cores to access the MMIO region in the memory that corresponds to the MMIO information.
11. The processor of claim 10, wherein one or more of the first logic, the memory, and one or more of the plurality of processor cores are on a same integrated circuit die.
12. The processor of claim 1, wherein the I/O device is to comprise logic to generate the MMIO information.
13. The processor of claim 1, wherein a switching logic is to generate the MMIO information, wherein the switching logic is coupled between the processor and the I/O device.
14. A method comprising: receiving a message comprising one or more bits from an input/output (I/O) device, wherein the one or more bits are to indicate memory mapped I/O (MMIO) information corresponding to one or more attributes of the I/O device; storing the MMIO information in a memory; and accessing a MMIO region in the memory based on the MMIO information.
15. The method of claim 14, further comprising detecting the one or more bits after receiving the message.
16. The method of claim 14, wherein the one or more attributes are to comprise one or more of: a prefetchable attribute, a write-through type caching attribute, a write type attribute, a speculative access attribute, or a memory ordering model attribute.
17. The method of claim 16, wherein the write type attribute is to comprise one or more of a combine write attribute, a collapse write attribute, or a merge write attribute.
18. The method of claim 14, wherein receiving the message comprises receiving a completion message corresponding to a processor initiated request to the MMIO region.
19. The method of claim 14, further comprising transmitting the one or more bits in response to receiving a request at the I/O device.
20. The method of claim 14, further comprising generating a request for the one or more bits in response to lack of an entry, corresponding to the MMIO region, in the memory.
21. A system comprising: an input/output (I/O) device; a processor comprising a first logic to detect one or more bits in a message that is to be received from the I/O device, wherein the one or more bits are to indicate memory mapped I/O (MMIO) information corresponding to one or more attributes of the I/O device; a memory to store the MMIO information; and the processor comprising at least one processor core to access a MMIO region in the memory based on the MMIO information.
22. The system of claim 21, wherein the one or more attributes are to comprise one or more of: a prefetchable attribute, a write-through type caching attribute, a write type attribute, a speculative access attribute, or a memory ordering model attribute.
23. The system of claim 22, wherein the write type attribute is to comprise one or more of a combine write attribute, a collapse write attribute, or a merge write attribute.
24. The system of claim 21, wherein the one or more bits are present in a completion message corresponding to a processor initiated request to the MMIO region.
25. The system of claim 21, wherein the one or more bits are to be transmitted by the I/O device in response to a request received at the I/O device.
26. The system of claim 25, wherein the request is to be generated in response to lack of an entry, corresponding to the MMIO region, in the memory.
27. The system of claim 21, wherein the memory is to comprise one or more of a data cache, a dedicated cache, a Translation Look-aside Buffer, or a Bloom filter.
28. The system of claim 21, wherein one or more of the first logic, the memory, and the processor are on a same integrated circuit die.
29. The system of claim 21, wherein the I/O device is to comprise logic to generate the MMIO information.
30. The system of claim 21, wherein a switching logic is to generate the MMIO information, wherein the switching logic is coupled between the processor and the I/O device.
OPPORTUNISTIC IMPROVEMENT OF MMIO REQUEST HANDLING BASED ON TARGET REPORTING OF SPACE REQUIREMENTS

FIELD

The present disclosure generally relates to the field of electronics. More particularly, an embodiment of the invention relates to techniques for opportunistic improvement of Memory Mapped Input/Output (MMIO) request handling, e.g., based on target reporting of space requirements.

BACKGROUND

MMIO generally refers to a mechanism for performing input/output operations, e.g., between a processor and peripheral devices in a computer. For example, designated or reserved areas of a memory device that are addressable by a processor (e.g., for read and write operations) may be mapped to select input/output ("I/O" or "IO") device(s). In this fashion, communication between processors and I/O devices may be performed through a memory device. Some current processor and chipset handling of MMIO access by a processor (for example, in memory marked "Uncached" (UC)) may be dictated by legacy compatibility concerns that are generally much more conservative than is necessary for the majority of implementations. Some attempts have been made to work around this by defining new memory space types such as Write-Combining (WC), but such approaches must be configured by system software, and so may only be used when implementing new system software (and potentially new application software) is acceptable. Very often this is not acceptable because of increased costs and time to market, and instead one may need to live with the performance consequences of behaviors that are almost always needlessly conservative.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. Figs.
1, 4-5, and 7 illustrate block diagrams of embodiments of computing systems, which may be utilized to implement various embodiments discussed herein. Fig. 2 illustrates a comparison diagram, according to an embodiment. Fig. 3 illustrates header and MMIO range attributes, according to an embodiment. Fig. 6 illustrates a flow diagram of a method according to an embodiment.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, some embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Some embodiments relate to efficient techniques to differentiate the request handling requirements for different MMIO spaces. In one embodiment, a device (and/or its associated driver software in an embodiment) may be configured to be aware of and understand the requirements for MMIO accesses to that device. By providing a mechanism for this information to be communicated to the host processor/core/uncore/chipset (which would in turn include logic to detect and process the device specific information), the default request handling behaviors (e.g., associated with the UC memory implementation) may be opportunistically modified. Moreover, legacy devices may remain unaffected, in part, because they retain the default UC request handling characteristics. More particularly, in one embodiment, new I/O devices may indicate the request handling requirements, for particular memory regions mapped to the respective I/O device, using a message defined for the purpose and/or information included with completion messages for processor initiated requests to that region.
This information may be stored or cached by the processor, e.g., in a buffer, a data cache, a dedicated cache, a TLB (Translation Lookaside Buffer), a Bloom filter (e.g., which may be a space-efficient probabilistic data structure that is used to test whether an element is a member of a set), or in some other caching or storage structure appropriate for indicating request handling attributes, such as storage devices discussed herein with reference to Figs. 2-7. In an embodiment, the cached/stored information may be cleared under pre-defined conditions in an attempt to ensure stale information is not used. Accordingly, some embodiments provide a capability to improve MMIO performance without requiring system software enabling or system software modification. As a result, some embodiments support the continued use of unmodified legacy hardware and/or software, while allowing new hardware to achieve performance improvements, e.g., as allowed by the host system including a processor. More particularly, Fig. 1 illustrates a block diagram of a computing system 100, according to an embodiment of the invention. The system 100 may include one or more agents 102-1 through 102-M (collectively referred to herein as "agents 102" or more generally "agent 102"). In an embodiment, one or more of the agents 102 may be any of the components of a computing system, such as the computing systems discussed with reference to Figs. 4-5 or 7. As illustrated in Fig. 1, the agents 102 may communicate via a network fabric 104. In one embodiment, the network fabric 104 may include a computer network that allows various agents (such as computing devices) to communicate data. In an embodiment, the network fabric 104 may include one or more interconnects (or interconnection networks) that communicate via a serial (e.g., point-to-point) link and/or a shared communication network.
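The Bloom filter mentioned above as one candidate storage structure can be sketched minimally. The bit-array size, hash count, and use of SHA-256 here are illustrative assumptions, not details from the disclosure:

```python
import hashlib

class BloomFilter:
    """Space-efficient probabilistic set-membership test: no false
    negatives, but a small chance of false positives."""

    def __init__(self, m_bits=1024, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0  # bit array held as one big integer

    def _positions(self, item):
        # Derive k bit positions from k salted hashes of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # True for every added item; may rarely be True for others.
        return all(self.bits >> p & 1 for p in self._positions(item))
```

A processor might, for instance, record the base addresses of MMIO regions known to permit relaxed handling; because false positives are possible, such a hit would serve as a hint to be confirmed against an authoritative store rather than as a final answer.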
For example, some embodiments may facilitate component debug or validation on links that allow communication with fully buffered dual in-line memory modules (FBD), e.g., where the FBD link is a serial link for coupling memory modules to a host controller device (such as a processor or memory hub). Debug information may be transmitted from the FBD channel host such that the debug information may be observed along the channel by channel traffic trace capture tools (such as one or more logic analyzers). In one embodiment, the system 100 may support a layered protocol scheme, which may include a physical layer, a link layer, a routing layer, a transport layer, and/or a protocol layer. The fabric 104 may further facilitate transmission of data (e.g., in the form of packets) from one protocol (e.g., caching processor or caching aware memory controller) to another protocol for a point-to-point or shared network. Also, in some embodiments, the network fabric 104 may provide communication that adheres to one or more cache coherent protocols. Furthermore, as shown by the direction of arrows in Fig. 1, the agents 102 may transmit and/or receive data via the network fabric 104. Hence, some agents may utilize a unidirectional link while others may utilize a bidirectional link for communication. For instance, one or more agents (such as agent 102-M) may transmit data (e.g., via a unidirectional link 106), other agent(s) (such as agent 102-2) may receive data (e.g., via a unidirectional link 108), while some agent(s) (such as agent 102-1) may both transmit and receive data (e.g., via a bidirectional link 110). In some situations, the I/O device will know the ordering and data handling requirements (also referred to as "attributes" herein) associated with MMIO regions owned by the I/O device.
Current approaches, however, may require system configuration software to program Memory Type Range Register (MTRR) or page attributes to enable the processor/platform to comprehend these attributes. As a result, this limits use to cases where the appropriate system software infrastructure exists and generally may result in poor scalability. Alternately, an embodiment provides a mechanism for an I/O device to provide one or more MMIO region attributes (and/or data on ordering and data handling requirements of the I/O device) which would be stored, e.g., in a history buffer, cache, or other storage devices. The default MMIO attributes would match the current UC behavior until a device (e.g., directly or via an interconnecting device such as a switch) indicates deviation is acceptable. This indication could be associated with a completion message for an earlier request, or through a message triggered by an access to a device (which might include an indication requesting such a message, for example because there is no entry corresponding to a MMIO range in a corresponding storage/cache of the processor). This indication could also be associated with a message transmitted autonomously by a device. A variety of aspects of transaction handling may be modified based on a device's requirements (however, a processor is not required to implement all of these procedures and may only perform a subset in some embodiments). For example, the following attributes could be described: (1) prefetchable - generally has no read side effects and returns all bytes on reads regardless of the byte enables (e.g., indicating which specific bytes are desired or required to satisfy the memory request), and allows Write Merging (which is further discussed below). For example, memory that is prefetchable has the attribute that it returns the same data when read multiple times and does not alter any other device state when it is read.
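The default-until-indicated behavior described above can be sketched as a small per-region attribute store. An aligned 4 KiB region granularity is assumed (matching the UC range example elsewhere in this description); the attribute names and the dictionary-based storage are illustrative assumptions, not the disclosed implementation:

```python
PAGE_MASK = ~0xFFF          # 4 KiB aligned MMIO regions (assumed granularity)
DEFAULT_UC = frozenset()    # no relaxations: conservative default UC handling

class MMIOAttributeCache:
    """Processor-side store of per-region MMIO attributes. Regions with no
    entry fall back to default UC behavior; all entries are cleared when
    they might be stale (e.g., after a device configuration change)."""

    def __init__(self):
        self._entries = {}

    def lookup(self, addr):
        """Return the attribute set for the region containing addr."""
        return self._entries.get(addr & PAGE_MASK, DEFAULT_UC)

    def record(self, addr, attrs):
        """Store attributes indicated by the device for addr's region."""
        self._entries[addr & PAGE_MASK] = frozenset(attrs)

    def invalidate_all(self):
        """Drop every entry, e.g., on a configuration write to the device."""
        self._entries.clear()
```

Legacy devices never call `record`, so every access to their regions keeps the default UC handling, which is the compatibility property the embodiment relies on.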
(2) write-through type caching - when a memory location is written, the values written are immediately written to memory. Write-through is typically contrasted with write-back, which avoids writing immediately, typically by requiring exclusive ownership of a cache line, writing into the cache line but not to memory. After one or many such writes, the "dirty" line is written to memory. (3) write type(s) such as combine/collapse/merge writes - combining separate but sequential increasing-address-order memory writes into a single larger transfer; using byte enables to disable unwritten locations is permitted, although this is generally not possible in PCIe (Peripheral Component Interconnect (PCI) express) due to PCIe's byte enable semantics. Merging writes may involve merging separate but sequential masked (byte granularity) memory writes to one DWORD address into a single larger transfer, provided any byte location is written only once. Also, collapsing writes may involve sequential memory writes to the same address being converted into a single transfer, by writing only the most recent data. This is generally not permitted in PCI, although a WC memory type may be used to perform this. (4) speculative access - MMIO memory locations may have side effects - they may perform operations such as rebooting the system in response to loads. Some microprocessors use "speculative" techniques such as out-of-order execution and branch prediction to improve performance. On such systems the hardware microarchitecture may execute loads (and other operations) speculatively, that the programmer does not intend to execute, or in an order different than specified by the programmer. The hardware ensures that the effects of such operations appear as intended by the programmer, but typically only for ordinary memory, not MMIO. Ordinary memory does not have side effects to loads, speculative or otherwise. MMIO obviously may exhibit bad behavior if speculative loads are performed to MMIO.
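The combine and collapse behaviors described in (3) can be sketched as follows. This is an illustrative model only (byte-enable based merging is omitted for brevity), not the PCI/PCIe semantics verbatim:

```python
def combine_writes(writes):
    """Model combining and collapsing of memory writes.

    writes: list of (address, data_bytes) in program order.
    - Combine: a write immediately following the previous one in
      increasing address order is appended to the same transfer.
    - Collapse: a write to the same address as the previous transfer
      replaces it, keeping only the most recent data.
    """
    out = []
    for addr, data in writes:
        if out:
            last_addr, last_data = out[-1]
            if addr == last_addr + len(last_data):      # sequential: combine
                out[-1] = (last_addr, last_data + data)
                continue
            if addr == last_addr and len(data) == len(last_data):  # collapse
                out[-1] = (addr, data)
                continue
        out.append((addr, data))
    return out
```

For example, three one-byte writes to consecutive addresses become one three-byte transfer, while two writes to the same address are collapsed to the final value - exactly the transformations a device must opt into, since they are unsafe for MMIO registers with write side effects.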
To this end, in accordance with one embodiment, it is indicated what regions of memory permit speculative loads, and what regions do not. (5) memory ordering model - some computer processors may implement memory ordering models such as sequential consistency, total store ordering, processor consistency, or weak ordering. In some implementations, weaker memory ordering models allow simpler hardware, but stronger memory ordering models may be assumed by some programmers for parallel applications. The following figure illustrates how this sort of information could be used to improve the performance of a sequence of processor accesses to an I/O device. More particularly, Fig. 2 illustrates a sample code sequence and resulting bus operations for a sample current system versus an embodiment of the invention (which results in performance improvement). As shown in Fig. 2, in some current systems, all of the processor reads to the device are serialized - processor operation is stalled waiting for the results from each read before proceeding to the next instruction. In an optimized system (shown on the right side of the figure), the data reads to the device are pipelined speculatively behind the status register read operation. If the status test fails (e.g., the "OK" code is skipped), the results from the data reads will be discarded. In the case where the status read test passes, the data values will be used. Note that in both cases the reads occur in order, so there is no possibility that, for example, the data reads would be reordered ahead of the status reads. However, it might be acceptable that the data reads could be reordered amongst themselves in some embodiments (although this is not shown in the figure). Furthermore, for the processor/chipset to make this sort of optimization, the I/O device communicates the attributes of the memory space to the processor/chipset in some embodiments.
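The benefit illustrated by Fig. 2 can be approximated with a toy latency model: under default UC handling each read's full round trip is serialized, whereas with speculative pipelining only the first round trip is fully exposed and later reads stream behind it. The function names and cycle counts are assumptions for illustration, not figures from the disclosure:

```python
def serialized_read_time(n_reads, round_trip):
    """Default UC handling: each read stalls until its completion returns."""
    return n_reads * round_trip

def pipelined_read_time(n_reads, round_trip, issue_gap):
    """Speculative pipelining: data reads are issued behind the status
    read, so only the first completion's full round trip is exposed and
    each further completion arrives one issue gap later."""
    return round_trip + (n_reads - 1) * issue_gap

# One status read plus three data reads, with a 100-cycle round trip:
serialized = serialized_read_time(4, round_trip=100)
pipelined = pipelined_read_time(4, round_trip=100, issue_gap=10)
```

Whenever the issue gap is shorter than the round trip, the pipelined total is smaller, which is the speedup the right-hand side of Fig. 2 depicts; if the status test fails, the speculative data-read results are simply discarded.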
One way of doing this is by including the attribute(s) in each read completion message returned by the I/O device. For a PCIe I/O device, this could be done by replacing the Completer ID field (which may not have an architecturally defined use) with an MMIO Range Attributes field, as shown in Fig. 3. More particularly, Fig. 3 illustrates a Completion Header with the MMIO Range Attributes field replacing the Completer ID field, according to an embodiment. The previously reserved MRA (MMIO Range Attributes) bit would indicate a completion message including MMIO Range Attributes. A processor access to an MMIO range (e.g., an aligned 4K region of UC memory) without cached/stored MMIO Attributes would be completed using the default UC handling. When a completion message is returned, indicating MMIO Range Attributes that differ from the default UC attributes, this information would be stored and used to appropriately modify future accesses to the same region. Alternately (or in addition), a message protocol could be used where, either triggered by an explicit request from the processor or through an implicit request (such as a page access), an I/O device would send a message to the processor/chipset indicating the MMIO Range and associated attributes. In some embodiments, cached entries would be maintained by the processor/chipset until evicted due to cache capacity limitations (e.g., using an LRU (Least Recently Used) algorithm), or due to an explicit or implicit request to invalidate an entry. Any access by a processor to the configuration space of a particular device to change memory range settings (e.g., PCIe BARs (Base Address Registers)) would invalidate cached attribute entries for the corresponding device. As a simplification in some embodiments, one might invalidate these entries when any PCIe configuration access is made to a device, or (even more simply) when any PCIe configuration write is performed.
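Detection of the MRA bit and extraction of the attribute field can be sketched as below. The disclosure places the attributes in bits 0 through 6 of byte 4 of the completion header, but the per-bit meanings assigned here are an assumption for illustration only:

```python
# Hypothetical per-bit assignment for the 7-bit attribute field; the
# disclosure specifies the field's location (bits 0-6 of byte 4), not
# this particular mapping.
ATTR_BITS = {
    0: "prefetchable",
    1: "write_through",
    2: "combine_writes",
    3: "collapse_writes",
    4: "merge_writes",
    5: "speculative",
    6: "relaxed_ordering",
}

def decode_mmio_attributes(byte4, mra_bit_set):
    """Return the advertised attribute set from a completion header, or
    None when the MRA bit is clear (legacy completion: keep default UC)."""
    if not mra_bit_set:
        return None
    return {name for bit, name in ATTR_BITS.items() if byte4 & (1 << bit)}
```

A returned `None` tells the receiver to leave its attribute cache untouched, while a (possibly empty) set would be recorded against the 4K region the completion belongs to.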
Using a message protocol, a device could explicitly request invalidation or updating of page attributes in some embodiments. Also, a device might want to change the attributes of a given region, for example, when changing from one mode of operation to another, so that it could use the most aggressive or efficient attributes in a mode where such use is acceptable, and change these attributes to less aggressive or more conservative attributes when needed, rather than having to use the more conservative approach of always indicating the less aggressive attributes. This technique might, for example, be used by a graphics card which might apply one set of attributes to on-card memory allocated for use by a graphics application, but apply a different set of attributes when the same on-card memory is reallocated for use by a GP-GPU (General-Purpose Graphics Processing Unit) implementation. As shown in Fig. 3, bits 0 through 6 of Byte 4 may be used to indicate MMIO range attributes. Bit values and corresponding indications are shown in tabular format on the bottom portion of Fig. 3. Depending on the implementation, a set bit or cleared bit may be used to select an option. Various types of computing systems may be used to implement the embodiments discussed herein (such as those discussed with reference to Figs. 2-3). For example, Fig. 4 illustrates a block diagram of portions of a computing system 400, according to an embodiment. In one embodiment, various components of the system 400 may be implemented by one of the agents 102-1 and/or 102-M discussed with reference to Fig. 1. Further details regarding some of the operation of the computing system 400 will be discussed herein with reference to Fig. 6. The system 400 may include one or more processors 402-1 through 402-N (collectively referred to herein as "processors 402" or more generally "processor 402").
Each of the processors 402-1 through 402-N may include various components, such as private or shared cache(s), execution unit(s), one or more cores, etc. Moreover, the processors 402 may communicate through a bus 404 with other components such as an interface device 406. In an embodiment, the interface device 406 may be a chipset or a memory controller hub (MCH). Moreover, as will be further discussed with reference to Fig. 7, the processors 402 may communicate via a point-to-point (PtP) connection with other components. Additionally, the interface device 406 may communicate with one or more peripheral devices 408-1 through 408-P (collectively referred to herein as "peripheral devices 408" or more generally "device 408"). Each of the devices 408 may be a peripheral device that communicates in accordance with the PCIe specification in an embodiment. As shown in Fig. 4, a switching logic 412 may be coupled between a variety of agents (e.g., peripheral devices 408 and the interface device 406). The switching logic 412 may include an attribute logic 420 to send attribute information (such as that discussed with reference to Figs. 2-3), e.g., on behalf of one or more of the peripheral devices 408, to the interface device 406 (or a chipset such as the chipset 506 of Fig. 5) and/or processor(s) 402. Furthermore, as shown, one or more of the processors 402 may include MMIO logic 422 to receive the information from the attribute logic 420 and/or the peripheral device(s) directly. The processor(s) may include a storage unit (or a cache) to store the attribute/MMIO information. Also, even though logic 420 is shown to be included in switching logic 412, it may be located elsewhere in the system 400, such as the interface device 406.
The computing system 500 may include one or more central processing unit(s) (CPUs) 502 (which may be collectively referred to herein as "processors 502" or more generically "processor 502") coupled to an interconnection network (or bus) 504. The processors 502 may be any type of processor such as a general purpose processor, a network processor (which may process data communicated over a computer network 505), etc. (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). Moreover, the processors 502 may have a single or multiple core design. The processors 502 with a multiple core design may integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 502 with a multiple core design may be implemented as symmetrical or asymmetrical multiprocessors. The processor 502 may include one or more caches (not shown), which may be private and/or shared in various embodiments. Generally, a cache stores data corresponding to original data stored elsewhere or computed earlier. To reduce memory access latency, once data is stored in a cache, future use may be made by accessing a cached copy rather than refetching or recomputing the original data. The cache(s) may be any type of cache, such as a level 1 (L1) cache, a level 2 (L2) cache, a level 3 (L3) cache, a mid-level cache, a last level cache (LLC), etc. to store electronic data (e.g., including instructions) that is utilized by one or more components of the system 500. A chipset 506 may additionally be coupled to the interconnection network 504. In an embodiment, the chipset 506 may be the same as or similar to the interface device 406 of Fig. 4. Further, the chipset 506 may include a memory control hub (MCH) 508. The MCH 508 may include a memory controller 510 that is coupled to a memory 512.
The memory 512 may store data, e.g., including sequences of instructions that are executed by the processor 502, or any other device in communication with components of the computing system 500. In an embodiment, the memory 512 may be the same or similar to the memory 411 of Fig. 4. Also, in one embodiment of the invention, the memory 512 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), etc. Nonvolatile memory may also be utilized such as a hard disk. Additional devices may be coupled to the interconnection network 504, such as multiple processors and/or multiple system memories. The MCH 508 may further include a graphics interface 514 coupled to a display device 516 (e.g., via a graphics accelerator in an embodiment). In one embodiment, the graphics interface 514 may be coupled to the display device 516 via PCIe. In an embodiment of the invention, the display device 516 (such as a flat panel display) may be coupled to the graphics interface 514 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory (e.g., memory 512) into display signals that are interpreted and displayed by the display 516. As shown in Fig. 5, a hub interface 518 may couple the MCH 508 to an input/output control hub (ICH) 520. The ICH 520 may provide an interface to input/output (I/O) devices coupled to the computing system 500. The ICH 520 may be coupled to a bus 522 through a peripheral bridge (or controller) 524, such as a peripheral component interconnect (PCI) bridge that may be compliant with the PCIe specification, a universal serial bus (USB) controller, etc. The bridge 524 may provide a data path between the processor 502 and peripheral devices. Other types of topologies may be utilized.
Also, multiple buses may be coupled to the ICH 520, e.g., through multiple bridges or controllers. For example, the bus 522 may comply with the PCI Local Bus Specification, Revision 3.0, 2004, available from the PCI Special Interest Group, Portland, Oregon, U.S.A. (hereinafter referred to as a "PCI bus"). Alternatively, the bus 522 may comprise a bus that complies with the PCI-X Specification Rev. 3.0a, 2003 (hereinafter referred to as a "PCI-X bus") and/or PCI Express (PCIe) Specifications (PCIe Specification, Revision 2.0, 2006), available from the aforementioned PCI Special Interest Group, Portland, Oregon, U.S.A. Further, the bus 522 may comprise other types and configurations of bus systems. Moreover, other peripherals coupled to the ICH 520 may include, in various embodiments of the invention, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), etc. The bus 522 may be coupled to an audio device 526, one or more disk drive(s) 528, and a network adapter 530 (which may be a NIC in an embodiment). In one embodiment, the network adapter 530 or other devices coupled to the bus 522 may communicate with the chipset 506 via the switching logic 512 (which may be the same or similar to the logic 412 of Fig. 4 in some embodiments). Other devices may be coupled to the bus 522. Also, various components (such as the network adapter 530) may be coupled to the MCH 508 in some embodiments of the invention. In addition, the processor 502 and the MCH 508 may be combined to form a single chip. In an embodiment, the memory controller 510 may be provided in one or more of the CPUs 502. Further, in an embodiment, MCH 508 and ICH 520 may be combined into a Peripheral Control Hub (PCH). Additionally, the computing system 500 may include volatile and/or nonvolatile memory (or storage).
For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 528), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media capable of storing electronic data (e.g., including instructions). The memory 512 may include one or more of the following in an embodiment: an operating system (O/S) 532, application 534, and/or device driver 536. The memory 512 may also include regions dedicated to MMIO operations. Programs and/or data stored in the memory 512 may be swapped to the disk drive 528 as part of memory management operations. The application(s) 534 may execute (e.g., on the processor(s) 502) to communicate one or more packets with one or more computing devices coupled to the network 505. In an embodiment, a packet may be a sequence of one or more symbols and/or values that may be encoded by one or more electrical signals transmitted from at least one sender to at least one receiver (e.g., over a network such as the network 505). For example, each packet may have a header that includes various information which may be utilized in routing and/or processing the packet, such as a source address, a destination address, packet type, etc. Each packet may also have a payload that includes the raw data (or content) the packet is transferring between various computing devices over a computer network (such as the network 505). In an embodiment, the application 534 may utilize the O/S 532 to communicate with various components of the system 500, e.g., through the device driver 536. Hence, the device driver 536 may include network adapter (530) specific commands to provide a communication interface between the O/S 532 and the network adapter 530, or other I/O devices coupled to the system 500, e.g., via the chipset 506. 
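The header/payload split described above can be illustrated with a short sketch. The fixed layout assumed here (6-byte source address, 6-byte destination address, 2-byte packet type, then payload) is a hypothetical format chosen only for illustration, not the wire format of any particular adapter discussed herein.

```python
import struct

# Hypothetical packet layout: 6-byte source address, 6-byte destination
# address, 2-byte packet type (big-endian), followed by the raw payload.
HEADER = struct.Struct("!6s6sH")

def parse_packet(frame: bytes) -> dict:
    """Split a frame into header fields and payload per the layout above."""
    src, dst, ptype = HEADER.unpack_from(frame, 0)
    payload = frame[HEADER.size:]
    return {"source": src, "destination": dst, "type": ptype,
            "payload": payload}

frame = (bytes.fromhex("aabbccddeeff")        # source address
         + bytes.fromhex("112233445566")      # destination address
         + (0x0800).to_bytes(2, "big")        # packet type
         + b"hello")                          # payload
pkt = parse_packet(frame)
```

A driver or protocol stack would use the header fields (here, `pkt["type"]`) to decide how to route or process the packet, and hand `pkt["payload"]` to the consumer.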
In an embodiment, the O/S 532 may include a network protocol stack. A protocol stack generally refers to a set of procedures or programs that may be executed to process packets sent over a network (505), where the packets may conform to a specified protocol. For example, TCP/IP (Transport Control Protocol/Internet Protocol) packets may be processed using a TCP/IP stack. The device driver 536 may indicate the buffers 538 that are to be processed, e.g., via the protocol stack. As illustrated in Fig. 5, the network adapter 530 may include the attribute logic 420 (discussed with reference to Fig. 4) which may send attribute information discussed with reference to Figs. 2-3 to the CPU(s) 502. As with Fig. 4, the CPU(s) may include logic (e.g., logic 422) to receive the attribute information. Also, the CPU(s) may include storage unit(s) (such as a cache, buffer, etc.) to store the attribute information. Also, while logic 420 is included in network adapter 530 in Fig. 5, it may be located elsewhere such as within the switching logic 512, chipset 506, etc. The network 505 may include any type of computer network. The network adapter 530 may further include a direct memory access (DMA) engine 552, which writes packets to buffers (e.g., stored in the memory 512) assigned to available descriptors (e.g., stored in the memory 512) to transmit and/or receive data over the network 505. Additionally, the network adapter 530 may include a network adapter controller 554, which may include logic (such as one or more programmable processors) to perform adapter related operations. In an embodiment, the adapter controller 554 may be a MAC (media access control) component. The network adapter 530 may further include a memory 556, such as any type of volatile/nonvolatile memory (e.g., including one or more cache(s) and/or other memory types discussed with reference to memory 512). 
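The DMA engine's use of buffers assigned to available descriptors can be modeled in a few lines. The names below (`RxDescriptorRing`, `dma_write`, `reclaim`) are illustrative assumptions, not the programming interface of the network adapter 530; the sketch only shows the producer/consumer relationship between the DMA engine and the protocol stack.

```python
from collections import deque

class RxDescriptorRing:
    """Minimal model of a receive descriptor ring: each free descriptor
    points at a driver-supplied buffer; completed descriptors hold
    received packets awaiting the protocol stack."""

    def __init__(self, num_buffers: int, buf_size: int):
        self.free = deque(bytearray(buf_size) for _ in range(num_buffers))
        self.completed = deque()  # (buffer, length) pairs for the stack

    def dma_write(self, packet: bytes) -> bool:
        """DMA engine side: copy a received packet into the next free buffer."""
        if not self.free or len(packet) > len(self.free[0]):
            return False  # no descriptor available -> packet is dropped
        buf = self.free.popleft()
        buf[:len(packet)] = packet
        self.completed.append((buf, len(packet)))
        return True

    def reclaim(self) -> bytes:
        """Protocol stack side: consume a completed buffer, then recycle it."""
        buf, length = self.completed.popleft()
        data = bytes(buf[:length])
        self.free.append(buf)  # descriptor becomes available again
        return data

ring = RxDescriptorRing(num_buffers=4, buf_size=2048)
ring.dma_write(b"\x01\x02\x03")
```

Recycling the buffer on `reclaim` is what keeps descriptors "available" to the DMA engine in steady state.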
In an embodiment, the memory 556 may store attribute information (such as those discussed with reference to Figs. 2-3) of the network adapter 530. Fig. 6 illustrates a flow diagram of a method 600 to access MMIO region(s), according to an embodiment. In one embodiment, various components discussed with reference to Figs. 1-5 and 7 may be utilized to perform one or more of the operations discussed with reference to Fig. 6. Referring to Figs. 1-6, at an operation 602, a message is received (e.g., from logic 420 at logic 422). In some embodiments, the message is generated by an I/O device (or a switch on behalf of the I/O device coupled to the switch) without a query from another component and at the device's own initiation. At an operation 604, attribute indicia (e.g., one or more bits such as those discussed with reference to the MMIO attributes of Fig. 3) are detected (e.g., by logic 422). If the attribute indicia are not present, method 600 returns to operation 602 to receive another message; otherwise, the attribute information may be stored 608 (e.g., in a storage device (such as a cache, buffer, table, etc.) of a processor such as those discussed with reference to Figs. 1-5 or 7). At an operation 610, a MMIO region may be accessed (by processor(s)/core(s) such as those discussed with reference to Figs. 1-5 or 7), e.g., based on the stored information at operation 608. Fig. 7 illustrates a computing system 700 that is arranged in a point-to-point (PtP) configuration, according to an embodiment of the invention. In particular, Fig. 7 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations discussed with reference to Figs. 1-6 may be performed by one or more components of the system 700. As illustrated in Fig. 7, the system 700 may include several processors, of which only two, processors 702 and 704, are shown for clarity. 
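As a rough sketch, operations 602-610 of method 600 reduce to the following control flow. The dictionary keys and the `attribute_store` structure are assumptions made for illustration; they stand in for the message fields and the processor-side storage (cache, buffer, table, etc.) described above.

```python
# Processor-side storage for received attribute information (operation 608).
attribute_store = {}

def handle_message(msg: dict) -> bool:
    """Operations 602-608: on a received message, detect attribute
    indicia; if absent, wait for another message; otherwise store
    the attribute information."""
    if not msg.get("has_attribute_indicia"):   # operation 604
        return False                           # return to operation 602
    attribute_store[msg["mmio_region"]] = msg["attributes"]  # operation 608
    return True

def access_mmio(region: str) -> dict:
    """Operation 610: an MMIO access decision based on stored attributes."""
    attrs = attribute_store.get(region)
    if attrs is None:
        raise LookupError("no attribute information stored for region")
    return attrs

handle_message({"has_attribute_indicia": True,
                "mmio_region": "bar0",
                "attributes": {"write_combining": True}})
```

A message lacking the indicia is simply ignored, matching the loop back from operation 604 to operation 602.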
The processors 702 and 704 may each include a local memory controller hub (MCH) 706 and 708 to enable communication with memories 710 and 712 (which may store MMIO regions such as discussed with reference to Figs. 2-3). The memories 710 and/or 712 may store various data such as those discussed with reference to the memory 512 of Fig. 5. As shown in Fig. 7, the processors 702 and 704 may also include one or more cache(s) such as those discussed with reference to Figs. 4 and 5. In an embodiment, the processors 702 and 704 may be one of the processors 502 discussed with reference to Fig. 5. The processors 702 and 704 may exchange data via a point-to-point (PtP) interface 714 using PtP interface circuits 716 and 718, respectively. Also, the processors 702 and 704 may each exchange data with a chipset 720 via individual PtP interfaces 722 and 724 using point-to-point interface circuits 726, 728, 730, and 732. The chipset 720 may further exchange data with a high-performance graphics circuit 734 via a high-performance graphics interface 736, e.g., using a PtP interface circuit 737. In at least one embodiment, the switching logic 412 may be coupled between the chipset 720 and other components of the system 700 such as those communicating via a bus 740. Other embodiments of the invention, however, may exist in other circuits, logic units, or devices within the system 700 of Fig. 7. Furthermore, other embodiments of the invention may be distributed throughout several circuits, logic units, or devices illustrated in Fig. 7. Also, chipset 720 may include the logic 420 (discussed with reference to Figs. 2-6) and processor(s) 702, 704 may include logic 422 (discussed with reference to Figs. 2-6). Further, logic 420 may be located elsewhere in system 700, such as within logic 412, communication device(s) 746, etc. The chipset 720 may communicate with the bus 740 using a PtP interface circuit 741. 
The bus 740 may have one or more devices that communicate with it, such as a bus bridge 742 and I/O devices 743. Via a bus 744, the bus bridge 742 may communicate with other devices such as a keyboard/mouse 745, communication devices 746 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 505), audio I/O device, and/or a data storage device 748. The data storage device 748 may store code 749 that may be executed by the processors 702 and/or 704. In various embodiments of the invention, the operations discussed herein, e.g., with reference to Figs. 1-7, may be implemented as hardware (e.g., circuitry), software, firmware, microcode, or combinations thereof, which may be provided as a computer program product, e.g., including a machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. Also, the term "logic" may include, by way of example, software, hardware, or combinations of software and hardware. The machine-readable medium may include a storage device such as those discussed with respect to Figs. 1-7. Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) through data signals provided in a carrier wave or other propagation medium via a communication link (e.g., a bus, a modem, or a network connection). Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not be all referring to the same embodiment. 
Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments of the invention, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other. Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.
A receive-side client interface for a media access controller embedded in an integrated circuit having programmable logic is described. A media access controller core includes a receive engine. A receive-side datapath is coupled to the media access controller core. The receive-side datapath is configured to operate at two frequencies to accommodate the programmable logic in the integrated circuit.
The invention claimed is:

1. A receive-side client interface for a media access controller, comprising:
a media access controller core including a receive engine; and
a receive-side datapath coupled to the media access controller core, the receive-side datapath including a first set of registers, a second set of registers and a third set of registers, the first set of registers clocked responsive to a first clock signal, the second set of registers clocked responsive to a second clock signal and coupled to receive a select mode signal, the select mode signal being enabled to select either a first data width or a second data width, the third set of registers clocked responsive to a third clock signal, the second clock signal being an undivided version of the first clock signal, the third clock signal being a divided version of the first clock signal;
wherein the media access controller is embedded in an integrated circuit having programmable logic, the programmable logic capable of operating at the third clock signal frequency; and
wherein the first set of registers and a first portion of the second set of registers are multiplexer select registers and in combination are enabled to provide a multiplexer select signal.

2. The receive-side client interface, according to claim 1, further comprising a first set of multiplexers coupled to receive the multiplexer select signal as control select input.

3. The receive-side client interface, according to claim 2, wherein a second portion of the first set of registers and a second portion of the second set of registers being data valid registers and in combination configured to provide data valid signals.

4. 
The receive-side client interface, according to claim 3, wherein a first portion of the first set of multiplexers is coupled to receive the data valid signals; wherein a third portion of the first set of registers and a third portion of the second set of registers are receive data registers and in combination are configured to provide receive data signals; and wherein a second portion of the first set of multiplexers is coupled to receive the receive data signals.

5. The receive-side client interface, according to claim 4, wherein the third set of registers is coupled to receive selected output from the first set of multiplexers; and further comprising a second set of multiplexers coupled to receive the select mode signal as control select input and coupled to receive clocked output from the third set of registers.

6. The receive-side client interface, according to claim 5, wherein a first portion of the second set of multiplexers is configured to output receive data responsive to receive-side input data provided to the third portion of the first set of registers and the second set of registers.

7. The receive-side client interface, according to claim 6, wherein a second portion of the second set of multiplexers is configured to output validity information responsive to receive-side data valid input and a multiplexer select input provided to the first portion and the second portion of the first set of registers and the second set of registers.

8. The receive-side client interface, according to claim 7, wherein a third portion of the second set of multiplexers is configured to output most significant word validity information responsive to receive-side data valid input and a multiplexer select input provided to the first portion and the second portion of the first set of registers and the second set of registers.

9. 
The receive-side client interface, according to claim 8, wherein the receive-side data valid input is asserted responsive to the third clock signal being at a logic low state and the receive-side input data being an even number of bytes.

10. The receive-side client interface, according to claim 8, wherein the receive-side data valid input is asserted responsive to the third clock signal being at a logic low state and the receive-side input data being an odd number of bytes.

11. The receive-side client interface, according to claim 8, wherein the receive-side data valid input is asserted responsive to the third clock signal being at a logic high state and the receive-side input data being an even number of bytes.

12. The receive-side client interface, according to claim 8, wherein the receive-side data valid input is asserted responsive to the third clock signal being at a logic high state and the receive-side input data being an odd number of bytes.

13. The receive-side client interface, according to claim 8, wherein in a bypass mode, the receive-side data valid input is asserted responsive to the third clock signal and maintained for reception of all bytes of the receive-side input data and then deasserted responsive to the third clock signal being at a logic low state.

14. The receive-side client interface, according to claim 8, wherein the first portion of the second set of multiplexers is coupled to directly receive output from the third portion of the first set of registers.

15. The receive-side client interface, according to claim 8, wherein the second portion of the second set of multiplexers is coupled to directly receive output from the second portion of the first set of registers.

16. The receive-side client interface, according to claim 8, wherein the select mode signal is asserted for selecting a 16-bit mode.

17. The receive-side client interface, according to claim 8, wherein the media access controller core is for an Ethernet media access controller.

18. 
The receive-side client interface, according to claim 8, wherein the media access controller core is formed from application specific circuitry located in a programmable logic device.

19. The receive-side client interface, according to claim 18, wherein the receive-side client interface is coupled to programmably configurable logic of the programmable logic device.

20. The receive-side client interface, according to claim 19, wherein the programmable logic device is a Field Programmable Gate Array.
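As a behavioral sketch of the clock and width relationships recited in claim 1: with the third clock signal a divided (here, assumed divide-by-2) version of the first, two 8-bit values captured on consecutive fast-clock cycles can be presented as one 16-bit word per divided-clock cycle when the select mode signal chooses 16-bit mode, while 8-bit (bypass) mode passes bytes through unchanged. The function below is an abstraction of that register/multiplexer behavior; the function name, divide ratio, and byte ordering are assumptions made for illustration.

```python
def rx_datapath(byte_stream, select_16bit_mode):
    """Model the width conversion: 8-bit bytes at the fast clock are
    paired into 16-bit words at the divided (half-rate) clock when
    16-bit mode is selected; bypass mode passes bytes unchanged."""
    if not select_16bit_mode:            # bypass: 8-bit mode
        return list(byte_stream)
    words = []
    it = iter(byte_stream)
    for low in it:                       # fast-clock cycle, divided clock low
        high = next(it, 0)               # fast-clock cycle, divided clock high
        words.append((high << 8) | low)  # one word per divided-clock cycle
    return words

out = rx_datapath([0x34, 0x12, 0x78, 0x56], select_16bit_mode=True)
```

An odd byte count leaves the final word with a zero upper byte, which is why claims 8-12 distinguish most-significant-word validity and even/odd byte lengths.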
CROSS REFERENCE

This patent application claims priority to and incorporates by reference the U.S. provisional application Ser. No. 60/604,855, entitled "Ethernet Media Access Controller Embedded in a Programmable Logic Device", by Ting Y. Kao, et al., filed Aug. 27, 2004, and to U.S. patent application Ser. No. 10/985,493, entitled "An Embedded Network Media Access Controller", by Ting Y. Kao et al., filed Nov. 10, 2004.

LIMITED COPYRIGHT WAIVER

A portion of the disclosure of this patent document contains material to which the claim of copyright protection is made. The copyright owner has no objection to the facsimile reproduction by any person of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office file or records, but reserves all other rights whatsoever.

FIELD OF THE INVENTION

One or more aspects of the invention relate generally to a network interface and more particularly, to an Ethernet Media Access Controller ("EMAC") embedded in an integrated circuit (IC).

BACKGROUND OF THE INVENTION

Programmable logic devices (PLDs) are a well-known type of integrated circuit that can be programmed to perform specified logic functions. One type of PLD, the field programmable gate array (FPGA), typically includes an array of programmable tiles. These programmable tiles can include, for example, input/output blocks (IOBs), configurable logic blocks (CLBs), dedicated random access memory blocks (BRAM), multipliers, digital signal processing blocks (DSPs), processors, clock managers, delay lock loops (DLLs), and so forth.

Each programmable tile typically includes both programmable interconnect and programmable logic. The programmable interconnect typically includes a large number of interconnect lines of varying lengths interconnected by programmable interconnect points (PIPs). 
The programmable logic implements the logic of a user design using programmable elements that can include, for example, function generators, registers, arithmetic logic, and so forth.

The programmable interconnect and programmable logic are typically programmed by loading a stream of configuration data into internal configuration memory cells that define how the programmable elements are configured. The configuration data can be read from memory (e.g., from an external PROM) or written into the FPGA by an external device. The collective states of the individual memory cells then determine the function of the FPGA.

Another type of PLD is the Complex Programmable Logic Device, or CPLD. A CPLD includes two or more "function blocks" connected together and to input/output (I/O) resources by an interconnect switch matrix. Each function block of the CPLD includes a two-level AND/OR structure similar to those used in Programmable Logic Arrays (PLAs) and Programmable Array Logic (PAL) devices. In some CPLDs, configuration data is stored on-chip in non-volatile memory. In other CPLDs, configuration data is stored on-chip in non-volatile memory, then downloaded to volatile memory as part of an initial configuration sequence.

For all of these programmable logic devices (PLDs), the functionality of the device is controlled by data bits provided to the device for that purpose. The data bits can be stored in volatile memory (e.g., static memory cells, as in FPGAs and some CPLDs), in non-volatile memory (e.g., FLASH memory, as in some CPLDs), or in any other type of memory cell.

The terms "PLD" and "programmable logic device" include but are not limited to these exemplary devices, as well as encompassing devices that are only partially programmable.

To enhance functionality of PLDs, embedded cores have been added. For example, FPGAs may include one or more hardwired microprocessors. 
However, an Ethernet Media Access Controller ("EMAC") core for PLDs has only been available as a program core. For example, a program or "soft" implementation in FPGA programmable circuitry ("fabric") of an EMAC is available from Xilinx, Inc. of San Jose, Calif., which is described in additional detail in "1-Gigabit Ethernet MAC Core with PCS/PMA Sublayers (1000BASE-X) or GMII v4.0" by Xilinx, Inc. [online] (Aug. 25, 2004) <URL:http://www.xilinx.com/ipcenter/catalog/logicore/docs/gig_eth_mac.pdf>, which is incorporated by reference herein in its entirety (hereinafter "soft EMAC core").

Advantageously, having a soft EMAC core allows users to connect an FPGA to a network, such as an Ethernet. Unfortunately, the cost of the soft EMAC core implementation is significant with respect to use of configurable logic cells.

Accordingly, it would be desirable and useful to provide an EMAC core that uses fewer configurable logic cells than a soft EMAC core and provides the same or greater functionality of a soft EMAC core. Moreover, such an EMAC core may be substantially compatible with the Institute of Electrical and Electronics Engineers ("IEEE") specification 802.3-2002. Furthermore, as PLDs may have any user instantiated design, such an EMAC core may be independent of user design.

SUMMARY OF THE INVENTION

The invention relates generally to a receive-side client interface to a media access controller embedded in a programmable logic device.

An aspect of the invention is a programmable logic device including: configurable logic having a first frequency of operation; and a media access controller integrated circuit embedded in the programmable logic device, where the media access controller integrated circuit has a second frequency of operation of at least approximately twice the first frequency of operation. 
The media access controller integrated circuit has a receive-side client interface having a selectable data input width and configurable for operation at any of a plurality of data rates, where the receive-side client interface is for communication with the configurable logic at the first frequency of operation and for communication outside of the programmable logic device at the second frequency of operation.Another aspect of the invention is a receive-side client interface for a media access controller. A media access controller core includes a receive engine. A receive-side datapath is coupled to the media access controller core, where the receive-side datapath includes a first set of registers, a second set of registers and a third set of registers. The first set of registers is clocked responsive to a first clock signal. The second set of registers is clocked responsive to a second clock signal, and the third set of registers is clocked responsive to a third clock signal. The second clock signal is an undivided version of the first clock signal, and the third clock signal is a divided version of the first clock signal. The media access controller is embedded in an integrated circuit having programmable logic, where the programmable logic is capable of operating at the third clock signal frequency but not capable of operating at the first clock signal frequency.BRIEF DESCRIPTION OF THE DRAWINGSAccompanying drawing(s) show exemplary embodiment(s) in accordance with one or more aspects of the invention; however, the accompanying drawing(s) should not be taken to limit the invention to the embodiment(s) shown, but are for explanation and understanding only.FIG. 1 is a high-level block diagram depicting an exemplary embodiment of a Field Programmable Gate Array ("FPGA").FIG. 1A is a simplified block diagram depicting an exemplary embodiment of an Ethernet Media Access Controller ("EMAC") core.FIG. 
2A is a high-level block diagram depicting an exemplary embodiment of an instantiation of an EMAC in configurable logic.FIG. 2B is a high-level block diagram depicting an exemplary embodiment of an FPGA having an embedded EMAC system.FIG. 2C is a high-level block/schematic diagram depicting an exemplary embodiment of a clock tree for an EMAC core.FIG. 2D, there is shown an exemplary embodiment of signal timing for signals of FIG. 2C.FIG. 2E is a block/schematic diagram depicting an exemplary embodiment of a transmit clock generator.FIG. 2F-1 is a schematic diagram depicting an exemplary embodiment of an on-chip global buffer multiplexer.FIG. 2F-2 is a schematic diagram depicting an exemplary embodiment of a divider circuit.FIG. 2G is a block/schematic diagram depicting an exemplary embodiment of a receive clock generator.FIG. 3 is a block/schematic diagram depicting an exemplary embodiment of an FPGA configured for an overclocking mode.FIG. 3A is a simplified block diagram depicting an exemplary embodiment of clock management for a Media Independent Interface.FIG. 4-1 is a high-level block/schematic diagram depicting an exemplary embodiment of a host interface.FIG. 4-2 is a block/schematic diagram depicting an exemplary embodiment of a host interface.FIG. 4-3 is state diagram depicting an exemplary embodiment of a state machine for EMAC register read select logic block.FIG. 4-4 is a state diagram depicting an exemplary embodiment of a state machine for address filter read logic block.FIG. 4-5A is a block/schematic diagram depicting an exemplary embodiment of device control register ("DCR") bridge.FIG. 4-5B is a table diagram depicting an exemplary embodiment of DCR address and bit assignments for a DCR bridge.FIG. 4-5C is a table diagram listing an exemplary embodiment of definitions for memory-mapped registers.FIG. 4-6 is a state diagram depicting an exemplary embodiment of a state machine of a DCR acknowledgement generator.FIG. 
4-7 is a state diagram depicting an exemplary embodiment of a state machine of a DCR read bypass multiplexer enable generator 552.FIG. 4-8 is a block diagram depicting exemplary embodiments of logic blocks of a control generator block for generating control signals for reading from or writing to a DCR bridge to a host bus.FIG. 4-9 is a block diagram depicting exemplary embodiments of logic blocks of a control generator block for generating control signals for reading or writing data from or to a host bus into a DCR bridge.FIG. 4-10 is a state diagram depicting an exemplary embodiment of a state machine of a configuration read/write bus controller.FIG. 4-11 is a state diagram depicting an exemplary embodiment of a state machine of a MIIM read/write bus controller.FIG. 4-12 is a state diagram depicting an exemplary embodiment of a state machine of a statistics read bus controller.FIG. 4-13 is a state diagram depicting an exemplary embodiment of a state machine of an address filter read/write bus controller.FIG. 4-14 is a state diagram depicting an exemplary embodiment of a state machine of an address filter content addressable memory read/write bus controller.FIG. 4-15 is a state diagram depicting an exemplary embodiment of a state machine of a read data received controller.FIG. 4-16 is a state diagram depicting an exemplary embodiment of a state machine of a configuration read/write controller.FIG. 4-17 is a state diagram depicting an exemplary embodiment of a state machine of a statistics read controller.FIG. 4-18 is a state diagram depicting an exemplary embodiment of a state machine of a MIIM read/write controller.FIG. 4-19 is a state diagram depicting an exemplary embodiment of a state machine of an address filter read/write controller.FIG. 4-20 is a state diagram depicting an exemplary embodiment of a state machine of a multicast address register read/write controller.FIGS. 
4-21A through 4-21C are timing diagrams for respective exemplary instances of generation of a sample cycle pulse.
FIG. 4-22 is a flow diagram depicting an exemplary embodiment of a receive configuration word register read access flow.
FIG. 4-23 is a flow diagram depicting an exemplary embodiment of a receive configuration word register write access flow.
FIG. 4-24 is a flow diagram depicting an exemplary embodiment of a multicast frames received okay register read flow ("statistics register read flow").
FIG. 4-25 is a flow diagram depicting an exemplary embodiment of a MIIM register read flow.
FIG. 4-26 is a flow diagram depicting an exemplary embodiment of a MIIM register write flow.
FIG. 4-27 is a flow diagram depicting an exemplary embodiment of an address filter multicast address register read flow.
FIG. 4-28 is a flow diagram depicting an exemplary embodiment of an address filter multicast address register write flow.
FIG. 4-29 is a block diagram depicting another exemplary embodiment of an address filter multicast address register read flow.
FIG. 4-30 is a block diagram depicting another exemplary embodiment of an address filter multicast address register write flow.
FIG. 4-31 is a high-level block diagram depicting an exemplary embodiment of a host interface coupled to a physical layer interface for a read from a physical layer device register.
FIG. 4-32 is a high-level block diagram depicting an exemplary embodiment of interfacing between a host interface and a physical layer interface for a write to a physical layer device register.
FIGS. 4-33A and 4-33B are a code listing depicting an exemplary embodiment of a logic block with logic equations in Verilog Register Transfer Level ("RTL").
FIG. 4-34 is a code listing depicting an exemplary embodiment of a main bus control block with logic equations in Verilog RTL.
FIG. 5A is a high-level block diagram depicting an exemplary embodiment of a transmit-side ("Tx") client interface.
FIG. 5B is a high-level block diagram depicting an exemplary embodiment of a receive-side ("Rx") client interface.
FIG. 5C is a schematic diagram depicting an exemplary embodiment of a Tx client interface datapath.
FIG. 5D is a state diagram depicting an exemplary embodiment of a state machine for a datapath multiplexer controller block.
FIGS. 5E, 5F, 5G and 5H are respective output timing diagrams of exemplary embodiments of either even or odd transmit data byte lengths for when a transmit datapath is in a 16-bit mode.
FIG. 5I is an output timing diagram depicting an exemplary embodiment of a bypass mode for when a transmit datapath is in an 8-bit mode.
FIG. 5J-1 is a schematic diagram depicting an exemplary embodiment of a transmit data valid generator.
FIG. 5J-2 is a state diagram depicting an exemplary embodiment of a state machine for a data valid generator.
FIG. 5K is a schematic diagram depicting an exemplary embodiment of an Rx client interface.
FIG. 5L is a schematic diagram depicting an exemplary embodiment of a circuit implementation of a multiplexer select register.
FIGS. 5M, 5N, 5O and 5P are respective output timing diagrams of exemplary embodiments of either even or odd receive data byte lengths for when a receive datapath is in a 16-bit mode.
FIG. 5Q is an output timing diagram depicting an exemplary embodiment of a bypass mode for when a receive datapath is in an 8-bit mode.
FIG. 6 is a high-level block diagram depicting an exemplary embodiment of EMAC statistics registers, which may be read via a DCR bus.
FIG. 7A is a high-level block diagram depicting an exemplary embodiment of a Tx statistics interface.
FIG. 7B is a high-level block diagram depicting an exemplary embodiment of a receive-side statistics interface.
FIG. 7C is a block/schematic diagram depicting an exemplary embodiment of a transmit statistics multiplexer.
FIG. 7D is a state diagram depicting an exemplary embodiment of a state machine for a transmit statistics multiplexer controller.
FIG. 7E is a timing diagram depicting an exemplary embodiment of timing for a Tx statistics interface.
FIG. 7F is a block/schematic diagram depicting an exemplary embodiment of a receive statistics multiplexer.
FIG. 7G is a state diagram depicting an exemplary embodiment of a state machine for a receive statistics multiplexer controller.
FIG. 7H is a timing diagram depicting an exemplary embodiment of timing for a receive statistics multiplexer.
FIG. 8 is a high-level block diagram depicting an exemplary embodiment of an address filter.
FIGS. 9 and 10 are simplified block diagrams depicting respective exemplary embodiments of Field Programmable Gate Array architectures in which one or more aspects of the invention may be implemented.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
DETAILED DESCRIPTION OF THE DRAWINGS
In the following description, numerous specific details are set forth to provide a more thorough description of the specific embodiments of the invention. It should be apparent, however, to one skilled in the art, that the invention may be practiced without all the specific details given below. In other instances, well-known features have not been described in detail so as not to obscure the invention. For ease of illustration, the same number labels are used in different diagrams to refer to the same items; however, in alternative embodiments the items may be different. Moreover, for purposes of clarity, a single signal or multiple signals may be referred to or illustratively shown as a signal to avoid encumbering the description with multiple signal lines. Along those same lines, a multiplexer or a register, among other circuit elements, may be referred to or illustratively shown as a single multiplexer or a single register, though such reference or illustration may be representing multiples thereof.
Furthermore, though particular signal bit widths, data rates, and frequencies are described herein for purposes of clarity by way of example, it should be understood that the scope of the description is not limited to these particular numerical examples, as other values may be used.
EMAC System
FIG. 1 is a high-level block diagram depicting an exemplary embodiment of an FPGA 100. FPGA 100 includes FPGA programmable configurable circuitry ("FPGA fabric") 101 in which an area is reserved for an embedded processor, as well as other embedded ("hardwired") circuitry, namely, processor block 102. Notably, processor block 102 need not be for an embedded processor, but generally refers to any area on an FPGA die reserved for embedded circuitry, more generally an Application Specific Integrated Circuit ("ASIC") block 102. FPGA fabric 101 may include configurable logic configured for interfacing to one or more interfaces, such as Physical Layer ("PHY") interfaces 119, clock interface 115, host bus 118, statistics interfaces 116, and client interfaces 117, in this exemplary embodiment. Notably, the words "include" and "including", and variations thereof, as used herein shall mean including without limitation.
Processor block 102 includes the following embedded, i.e., hardwired, circuitry: processor 103, Ethernet Media Access Controller 110 ("EMAC0"), EMAC 111 ("EMAC1"), and host interface 112. Embedded processor 103 may be a PowerPC 405 core from IBM, though other known processor cores may be used. In an alternative embodiment, embedded processor 103 is a hardwired form of the MicroBlaze or PicoBlaze soft-core processor from Xilinx, Inc. EMAC 110, EMAC 111 and host interface 112 are collectively referred to as the top-level EMAC ("EMAC_top") 104. EMACs 110 and 111 may be used for access to and from Ethernet 39 via PHY interface 119.
Alternatively, rather than a PHY interface 119, a transceiver, such as a Multi-Gigabit Transceiver ("MGT"), or an external PHY integrated circuit may be used. Notably, though an EMAC is described herein for purposes of clarity by way of example, it should be understood that the scope is not limited to an Ethernet type of network. Accordingly, a media access controller for interfacing to any known network may be embedded in an integrated circuit having configurable logic for communication therewith.
Processor block 102 may include traces for busing. A Device Control Register ("DCR") bus 114 is described herein. DCR bus 114 is a known DCR interface for a PowerPC 405 core ("PPC core"). Though a PPC core is described herein for purposes of clarity by way of example, it should be understood that other known processor cores may be used. Furthermore, it should be appreciated that though an embedded processor is described herein, an external host processor 10 may optionally be used instead of embedded processor 103. External host processor 10 may be any of a variety of known processors. Furthermore, it should be understood that host bus 118 may optionally be coupled to an internal embedded processor, such as embedded processor 103, or a processor 10A instantiated in configurable logic of FPGA fabric 101. Notably, configurable logic may be used to instantiate a bridge between host bus 118 and processor 10A. Moreover, it should be understood that EMACs 110 and 111 may be used without any host processor, as configuration vectors may be provided via tie-off pin inputs.
EMACs 110 and 111 share a single host interface 112. Either or both of the EMACs may be selected via host interface 112. Though two EMACs are shown, one or more than two EMACs may be used. Host interface 112 may be used to interface to a microprocessor or other known integrated circuit external to FPGA 100. Such access to an external integrated circuit may be via host bus 118.
Notably, host bus 118 is a processor platform-independent host bus. In an implementation, host interface 112 may use either an EMAC host bus, such as host bus 118, or a DCR bus 114 through a DCR bridge 113, which may or may not be part of host interface 112. In other words, only one of host bus 118 and DCR bus 114 is used at a time.
EMAC core 123 includes clock generator/management 124. Clock generator 124 may be used to provide a transmit clock signal and a receive clock signal, among other below-described clock signals, for EMAC 110. EMAC core 133 includes clock generator 134. Clock generator 134 may be used to provide a transmit clock signal and a receive clock signal, among other below-described clock signals, for EMAC 111.
As EMAC 110 and EMAC 111 are the same, only EMAC 110 will be described below for purposes of clarity.
FIG. 1A is a simplified block diagram depicting an exemplary embodiment of an EMAC core 123. With simultaneous reference to FIGS. 1 and 1A, EMAC 110 is further described. Again, EMAC cores 123 and 133 are the same, so only one is described for purposes of clarity.
EMAC 110 includes EMAC core 123, transmit-side statistics multiplexer circuitry ("transmit statistics multiplexer") 125, receive-side statistics multiplexer circuitry ("receive statistics multiplexer") 126, and a client interface 117, which may be thought of as including a transmit-side client interface 127 and a receive-side client interface 128.
In an exemplary implementation of client interface 117, an 8-bit or 16-bit wide mode may be selected. Client interface 117 is coupled to transmit engine 820 and to receive engine 850. Receive engine 850 may include an address filter 129. Transmit engine 820, which may be considered part of or coupled to transmit client interface 127, is coupled to flow control 105. Receive engine 850, which may be considered part of or coupled to receive client interface 128, is coupled to flow control 105.
Transmit engine 820 and receive engine 850 are coupled to MII/GMII/RGMII interface 106, which in turn may be coupled to a physical layer interface 119. MII/GMII/RGMII interface 106 may be coupled to PCS/PMA sublayer 107, which in turn may be coupled to an MGT and may provide an MDIO interface to a physical layer interface 119 along with MII management interface 108. Receive engine 850, MII management interface 108 and configuration registers may be coupled to host interface 112 via select circuitry 890, which circuitry alternatively may be part of host interface 112.
EMAC 110 may be a multiple-mode EMAC. In an exemplary implementation, EMAC 110 may support data rates of approximately 10, 100, and 1000 megabits per second and be compliant with IEEE 802.3-2002 specifications. Though EMAC 110 may operate at a single data rate, such as either approximately 10, 100, or 1000 megabits per second, it may operate as a tri-mode EMAC switching as between data rates. Notably, other data rates may be used, such as other data rates greater than 100 megabits per second.
In an exemplary implementation, EMAC 110 may support Reduced Gigabit Media Independent Interface ("RGMII") protocol for use with double data rate ("DDR") operation, thereby reducing width of a data bus to an external physical layer interface, such as physical layer interface 119. A Physical Medium Attachment ("PMA") sub-layer may be used with a Multi-Gigabit Transceiver ("MGT") of FPGA 100 to provide an on-chip 1000BASE-X implementation. MGTs that may be used are shown in FIG. 10, for example.
An embedded EMAC may operate with a Media Independent Interface ("MII"), a Gigabit MII ("GMII"), or a PCS/PMA to an MGT. The input/output ("I/O") pins for these physical layer ("PHY") interfaces 119 cross the ASIC-FPGA boundary to the I/O cells or MGT driver cells located in FPGA 100.
Notably, EMAC 110 may use one and only one set of PHY interface pins at a time, and thus only one physical layer interface 119 interfacing, such as for MII, GMII, or MGT, is done at a time.
Meanwhile, processor block 102 has a limited number of I/O pins available for EMAC 110 due to routing channel constraints in FPGA 100. Hence, PHY interface I/O pins are re-used for the different interfaces, such as for an RGMII, an MII, a GMII, a 1000BASE-X, and a Serial Gigabit Media Independent Interface ("SGMII"). In an exemplary implementation, a total reduction of approximately 78 I/O pins on a PHY interface 119 may be achieved, along with output pin reductions in a statistics interface 116, as described below in additional detail. In this exemplary implementation, this reduction in pin count facilitated adding another embedded EMAC, namely, EMAC 111, in a processor block 102 of a pre-existing dimension. Thus, for example, EMAC 110 may have approximately 57 to 61 pins for a physical layer interface 119.
EMAC 110 may be configured to generate statistics on data traffic. For example, at the end of each transmitted or received frame, EMAC 110 may output a statistics vector for a frame to logic, which may be instantiated in FPGA fabric 101, for accumulation. However, statistics accumulation may be independent of the transmitted or received frame, provided that each accumulation completes before the next statistics output so that no statistics vector is missed.
Because processor block 102 has a limited number of I/O pins as mentioned above, statistics interface 116 may output a statistics vector a number of bits at a time, which number is smaller than the length of a statistics vector. This output may be done over several cycles instead of in one cycle to reduce the number of output pins used to provide statistics interface 116. For two EMACs 110 and 111, output pins for statistics interfaces 116 may be reduced by approximately 102 pins in an exemplary implementation.
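The multi-cycle statistics output described above amounts to serializing a wide vector over a narrow bus. A behavioral sketch in Python (the 32-bit vector length and 7-bit bus width below are illustrative assumptions, not values taken from the implementation):

```python
def emit_statistics_vector(vector: int, vector_width: int, bus_width: int):
    """Serialize a wide statistics vector over a narrow output bus,
    least-significant chunk first, one chunk per clock cycle."""
    chunks = []
    cycles = -(-vector_width // bus_width)  # ceiling division
    for _ in range(cycles):
        chunks.append(vector & ((1 << bus_width) - 1))
        vector >>= bus_width
    return chunks

def reassemble(chunks, bus_width: int) -> int:
    """Accumulation logic in FPGA fabric rebuilds the vector."""
    value = 0
    for i, chunk in enumerate(chunks):
        value |= chunk << (i * bus_width)
    return value

# A hypothetical 32-bit statistics vector over a 7-bit bus needs
# ceil(32 / 7) = 5 cycles but only 7 pins instead of 32.
chunks = emit_statistics_vector(0xDEADBEEF, 32, 7)
assert len(chunks) == 5
assert reassemble(chunks, 7) == 0xDEADBEEF
```

The pin saving is the difference between the vector width and the bus width, at the cost of the several transfer cycles noted above.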
Along with PHY interface I/O pin reduction, statistics interface pin reduction may facilitate integration of more than one EMAC in an ASIC block 102.
EMAC 110 includes address filter 129 to accept or reject incoming frames on a path coupled to receive client interface 128. Thus, for example, bi-directional traffic is communicated on physical layer interface 119 to EMAC core 123, which traffic, either transmit or receive traffic, is obtained from transmit client interface 127 or provided to receive client interface 128 and address filter 129, respectively. EMAC core 123 is configured to provide statistics vectors, whether receive statistics vectors or transmit statistics vectors, for multiplexing by transmit statistics multiplexer 125 and receive statistics multiplexer 126, respectively. Configurable logic of FPGA fabric 101 may be configured to accumulate statistics provided from EMAC 110.
On the physical layer interface side of EMAC 110, GMII and MII interfaces use standard input/outputs ("I/Os") to access data and control signals to a network connection via physical layer interface 119, in which an additional PHY integrated circuit may be disposed between PHY interface 119 and the physical medium (i.e., Ethernet lines). In addition, the EMAC 110 physical layer interface 119 can be configured for a Physical Coding Sublayer ("PCS") and a PMA sub-layer ("PCS/PMA") interface, which may use a serializer-deserializer ("SERDES") to access a data signal serially. A SERDES may be instantiated in programmable IOBs, such as IOBs 2904 of FIG. 10. An example of a SERDES that may be instantiated is described in additional detail in a co-pending U.S. patent application entitled "MULTI-PURPOSE SOURCE SYNCHRONOUS INTERFACE CIRCUITRY", by Paul T. Sasaki, et al., U.S. patent application Ser. No. 10/919,901, filed Aug. 17, 2004, which is incorporated by reference herein in its entirety.
Flow control module 105 may be used to avoid or reduce congestion in EMAC 110 from communication traffic.
The MIIM interface may allow a processor access to control and status registers in the PCS layer when configured in a 1000BASE-X or Serial Gigabit Media Independent Interface ("SGMII") mode.
Clock generator 124 facilitates EMAC 110 operating in different modes, such as GMII, MII, RGMII, SGMII, and 1000BASE-X modes. Furthermore, clock generator 124 facilitates EMAC 110 operating at one of three different speeds, such as 10, 100, or 1000 megabits per second, or another high data rate for "overclocking."
It should be understood that in contrast to an EMAC instantiated in configurable logic, with the embedding of EMAC 110 as dedicated circuitry there is an FPGA fabric 101/EMAC 110 boundary with which to contend. Notably, this boundary is different than interfacing to an embedded processor, which conventionally has a general-purpose interface, as an embedded EMAC 110 has special-purpose interfacing. However, as described below, EMAC 110 interfacing is configured in part to provide a general-purpose communication client-side interface to FPGA fabric 101. For dynamically reconfigurable logic, such a general-purpose communication client-side interface facilitates coupling different user-design instantiations without redesign thereof to accommodate EMAC 110. In other words, EMAC 110 and a user-defined design instantiated in FPGA fabric 101 may be independent of one another.
Implementation of an embedded EMAC core 123 in processor block 102 facilitates use of processor 103 as a host processor. To accomplish this, a host interface 112, including a DCR bridge 113 and supporting logic, is provided. Notably, DCR bridge 113 may be external to host interface 112. In addition, host interface 112 allows for a host processor, embedded in or external to FPGA 100, to manage EMAC configuration registers using a host bus 118 supported by EMAC 110.
This usage is in contrast to use of processor 103 via a DCR bridge 113 and DCR bus 114.
When DCR bus 114 is used as a host bus, DCR bridge 113 translates commands carried over DCR bus 114 into EMAC host bus signals. These signals are then input into at least one of EMACs 110 and 111. In an exemplary implementation, DCR bridge 113 includes four device control registers, two of which are used as data registers, such as respective 32-bit wide data registers. Another is used as a control register. The fourth device control register is used as a ready status register. A host processor, such as processor 103, polls this fourth device control register to determine access completion status. Bits in this fourth device control register are asserted when there is no access in progress. When an access is in progress, a bit corresponding to the type of access is automatically de-asserted. This bit is automatically re-asserted when the access is complete.
Alternatively, host interface 112 may provide an interrupt request to inform a host processor, such as processor 103 or an external host processor 10, of an access completion. A user may select either polling or interrupting to inform a host processor of access status.
Notably, transmit client interface 127 and receive client interface 128 each may operate in respective clock domains. A processor, such as processor 103, as associated with host interface 112, may operate in a separate clock domain too. Notably, by clock domain it is not meant to imply that the frequencies are different, though they may be the same or different frequencies, but rather that clocking may be asynchronous with respect to separate clock domains.
Soft EMAC
FIG. 2A is a high-level block diagram depicting an exemplary embodiment of a "soft" EMAC ("EMAC_top") 204S. EMAC_top 204S and interface logic 202S form a program core that may be instantiated in configurable logic of FPGA 100. Notably, EMAC_top 104 of FIG.
1 may be designed using a hardware description language, such as VHDL or Verilog, among others. Accordingly, EMAC_top 104 of FIG. 1 may be an ASIC conversion of hardware description language code, whereas EMAC_top 204S may be an FPGA program code conversion of part of such hardware description language code. Thus, EMAC_top 104 and interfaces thereto of FIG. 1 may be provided in part as a design listing for subsequent instantiation in configurable logic of a programmable logic device. However, EMAC_top 204S is not just a repeat of EMAC_top 104 in instantiated, as opposed to embedded, form, as clock generator 204, in contrast to clock generator 124, is external to the EMAC.
Logic interface 201 couples EMAC_top 204S to interface logic 202S. Interface logic 202S is instantiated in configurable logic of FPGA fabric 101. Interface logic 202S is a program core for instantiating interfaces, such as configurable logic versions of client interface 117 and physical layer interface 119 of FIG. 1, in configurable logic. In an exemplary implementation, logic 202S may be a relatively fast FIFO for holding transmit and receive data packets from a client interface of EMAC_top 204S.
A clock signal is provided from clock generator 204 to a clock distribution network ("clock tree") of FPGA fabric 101, which is generally indicated as chip-global distribution circuitry ("BUFG") 203 coupled to clock network 205, for clock signal distribution to EMAC_top 204S and interface logic 202S.
With simultaneous reference to FIGS. 1, 1A and 2A, it should be understood that because clock generator 124 is inside EMAC core 123, there is an unknown clock buffering propagation delay when the clock goes through a design instantiated in FPGA fabric 101.
Thus, an unknown propagation delay of a clock signal going from an ASIC EMAC to a user-instantiated design in FPGA fabric 101 may be handled by sending a clock out of such EMAC and then having it buffered and sent back into such EMAC; for EMAC core 123, a clock interface is provided as described below in additional detail. In contrast, when a clock generator 204 is instantiated in FPGA fabric 101, such as along with a user design, a known clocking relationship exists by using FPGA fabric 101 clock network 205 resources. So such a clock signal need not go into and out of an EMAC instantiated in configurable logic.
Clock Interface
A clock network may introduce clock skew. With respect to a clock network in FPGA fabric 101, such skew may be unknown. Handling skew from clock signal distribution in FPGA fabric 101 is described below in additional detail.
For an EMAC system instantiated in configurable logic, all of the logic for EMAC_top 204S and interface logic 202S may be in FPGA fabric 101. Clock signals are routed to EMAC_top 204S and interface logic 202S with FPGA clock networks, such as clock network 205. As a result, due to known clock buffering and routing in FPGA 100, the clock skews between EMAC_top 204S and interface logic 202S may be controlled within a tolerance range.
However, in EMAC 110 of FIG. 1, a clock network includes a balanced clock tree with known and fixed delays throughout. In contrast, a clock network in FPGA fabric includes a clock driver and clock network routing. The delay of a clock signal through an FPGA fabric clock network is dependent on FPGA design implementation. As a result, there is an uncontrolled amount of clock skew between EMAC ASIC logic and FPGA configurable logic at the "ASIC-FPGA" interface. Furthermore, in an exemplary implementation, EMAC 110, including clock generator 124, is implemented in processor block 102 with standard cells, and the same standard cells are not used to implement configurable logic in FPGA fabric 101.
In other words, clock tree routing in FPGA fabric 101 and processor block 102 are different.
By feeding back an FPGA fabric clock into EMAC 110 to account for design-specific clock delay in a user-instantiated design in FPGA configurable logic, and by using a delay cell in the input datapath in EMAC 110 as described herein, clock skew introduced by an EMAC clock tree may be compensated.
For purposes of clarity, only ASIC versions of EMAC_top 104 embedded in an FPGA 100 are described hereinafter, as a configurable logic instantiated version of EMAC_top 104 will be apparent from description of an ASIC version thereof.
FIG. 2B is a high-level block diagram depicting an exemplary embodiment of an FPGA 100 having an embedded EMAC system. More particularly shown is EMAC core 123 having a clock generator 124 and a clock-output tree 210. A clock signal provided from clock generator 124 is sent to clock-output tree 210 and separately to a clock tree 213, which is generally shown as a global buffer ("BUFG") driver. Thus, the clock signal output from clock generator 124 may be provided external to processor block 102 but internal to FPGA 100, where it is driven by BUFG driver 213.
A clock signal output from BUFG driver 213 is routed through a clock network provided for in conventional FPGA clock routing, as previously described. However, in the ASIC implementation of EMAC core 123, clock-output tree 210 is used to drive a clock signal output from clock generator 124.
It should be appreciated that the output from clock-output tree 210 and the output from BUFG driver 213 may be skewed with respect to one another because a design implemented in FPGA fabric external to processor block 102 is user-dependent, and thus the amount of clock loading may not be known in advance when implementing clock-output tree 210. Though FIG. 2B only shows EMAC 110, the above description applies equally to EMAC 111.
FIG.
2C is a high-level block/schematic diagram depicting an exemplary embodiment of a clock tree for EMAC core 123. Clock generator 124 outputs clock signal 221 to BUFG driver 213. The output clock signal from BUFG driver 213 is provided as a feedback clock signal ("CLIENTCLKIN") 220. Clock generator 124 generates an output clock signal 221 responsive to a reference clock signal 222, which may be provided from a clock source external to FPGA 100. Client clock input signal 220 is fed back to EMAC core 123, and more particularly to clock-input tree 211 of EMAC core 123.
Accordingly, with a clock signal going through a clock network in FPGA fabric responsive to a user-instantiated design, generally indicated as BUFG driver 213, there will be some clock loading. Thus, output signal 231 may be out of phase with respect to client clock input signal 220.
Referring to FIG. 2D, there is shown an exemplary embodiment of signal timing for signals of FIG. 2C. With simultaneous reference to FIGS. 2C and 2D, signals of FIG. 2C are further described. A phase difference 233 between signals 220 and 231 will be less than the total delay of a delay cell of EMAC core 123 used to compensate for this known phase difference 233. The total delay of the delay cell is slightly larger than the total clock tree delay 233, due at least in part to setup time, which should be taken into account.
Client clock input signal 220 is used to clock flip-flops 226 and 225. Clock output signal 231 from clock-input tree 211 is used to clock flip-flops 229 and 224 of EMAC core 123. The data output of flip-flop 229, namely output signal 228, thus will be active for a period equivalent to a period of clock signal 231. The output of flip-flop 225, namely EMAC core input signal 227, will have an active ("high") time equivalent to a period of clock signal 220.
Input signal 227 is provided to a buffer 223 of EMAC core 123 to provide a delay.
Output signal 230 is a compensated delay output signal, which may be used as a data input to flip-flop 224 driven by clock signal 231 to provide a data output 232 for clocking EMAC 110.
By feeding back FPGA fabric clock network loaded clock signal 220 to drive clock-input tree 211 of ASIC EMAC core 123, clock skew between ASIC EMAC core 123 and FPGA fabric 101 due to clock network loading of client clock input signal 220 may be taken into account. Output signal 231 of ASIC clock-input tree 211 is skewed by a known clock tree delay in an ASIC implementation. This clock skew is compensated by one or more delay cells, such as one or more buffers 223, for instances of data inputs from FPGA fabric 101 to EMAC core 123, such as via register ("flip-flop") 225 clocked by FPGA fabric 101 loaded clock signal 220. For instances of data outputs from EMAC core 123 to FPGA fabric 101, such as via register 226 clocked responsive to loaded clock signal 220, output data 228 may be captured by register 226. Notably, registers 224 and 229 are merely representative of circuits in EMAC 110, and are not the actual circuits, which are described below in additional detail. Accordingly, registers 224 and 229 are generally representative of how the EMAC clock domain may work. Moreover, registers 226 and 225 represent a user-instantiated design, and thus may not be the actual circuits implemented by a user.
Embedded ("hardwired") EMACs implemented in standard cells generally have more than twice the performance of supporting logic implemented in FPGA fabric 101. To take advantage of this increase in performance, in an exemplary implementation, client interface 117 at the ASIC-FPGA fabric boundary is configured to run EMAC 110 at approximately twice the clock frequency of the supporting logic programmed in configurable logic. To maintain throughput, the client interface 117 datapath width is doubled to compensate for the supporting logic running at half the clock frequency.
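The width-versus-frequency trade described above is simple arithmetic: raw throughput is datapath width times clock frequency, so doubling the width exactly offsets halving the clock. A minimal check (the 250 MHz and 125 MHz figures below are illustrative assumptions; only the 2:1 ratio matters):

```python
def throughput_mbps(width_bits: int, clock_mhz: float) -> float:
    """Raw datapath throughput in megabits per second."""
    return width_bits * clock_mhz

# EMAC side: 8-bit datapath at an assumed 250 MHz clock.
# Fabric side: 16-bit datapath at half that clock frequency.
assert throughput_mbps(8, 250.0) == throughput_mbps(16, 125.0) == 2000.0
```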
Client interface 117 allows for EMAC 110 to run at the same clock frequency as the supporting logic in FPGA fabric 101 by using half the allocated datapath width.
FIG. 2E is a block/schematic diagram depicting an exemplary embodiment of a transmit clock generator 124T of clock generator 124. For example, client output clock ("clientclkout") signal 221 of FIG. 2C could correspond to transmit GMII/MII output clock ("TX_GMII_MII_CLK_OUT") signal 277 and transmit client clock output ("TX_CLIENT_CLK_OUT") signal 276, and client input clock ("clientclkin") signal 220 could correspond to transmit GMII/MII input clock ("TX_GMII_MII_CLK_IN") signal 265 and transmit client input clock ("TX_CLIENT_CLK_IN") signal 269.
Responsive to EMAC 110 operating in an "overclocking" mode, such as a 16-bit overclocking mode, MII transmit clock input ("MII_TX_CLK") signal 267 is not used; hence, the input clock pin for MII_TX_CLK signal 267 may be used to bring in a divided-by-two clock signal from a DCM, as described below in additional detail with reference to FIG. 3.
With continuing reference to FIG. 2E, clock signal 222 is provided to a counter 240, multiplexer 247, multiplexer 248, and multiplexer 251. Counter 240, as well as counter 241, may be a Johnson counter for tracking a logic one. For purposes of clarity by way of example, it will be assumed that clock signal 222 has a frequency of approximately 125 MHz. All frequencies provided herein below are approximate, and the actual frequency used depends upon implementation.
Counter 240 provides a divide-by-five clock signal ("CLK_25MHZ") 256 at 25 MHz and a divide-by-ten clock signal ("CLK_12_5MHZ") 257 at 12.5 MHz. Counter 240 provides signal CLK_25MHZ 256 to a multiplexer 242 as input. CLK_25MHZ 256 is provided to a logic 1 input ("input I1") of multiplexer 242. By a logic 1 ("logic high") input of a multiplexer, it is meant that to select that input for output from the multiplexer, a control signal will be a logic 1.
Counter 240 provides signal CLK_12_5MHZ 257 to a counter 241 and to a multiplexer 243 as input. CLK_12_5MHZ 257 is provided to a logic 1 ("logic high") input ("input I1") of multiplexer 243.
Clock signal 222 is provided to a logic 1 input of multiplexers 247, 248, and 251. TX_GMII_MII_CLK_IN signal 265 is provided to a logic 0 input of multiplexer 251.
Counter 241 provides a divide-by-five clock ("CLK_2_5MHZ") signal 258 at 2.5 MHz and a divide-by-ten clock ("CLK_1_25MHZ") signal 259 at 1.25 MHz. Counter 241 provides CLK_2_5MHZ signal 258 to a logic low input of multiplexer 242. Counter 241 provides CLK_1_25MHZ signal 259 to a logic 0 ("logic low") input of multiplexer 243 as input. By a logic 0 ("logic low") input of a multiplexer, it is meant that to select that input for output from the multiplexer, a control signal will be a logic 0.
A speed select ("SPEED_IS_100") signal 253 is provided to multiplexer 242 and to multiplexer 243 as a control input. When SPEED_IS_100 signal 253 is logic 1, multiplexer 242 selects CLK_25MHZ signal 256, input I1, for output, and multiplexer 243 selects CLK_1_25MHZ signal 259, input I0, for output. When SPEED_IS_100 signal 253 is logic 0, multiplexer 242 selects CLK_2_5MHZ signal 258, input I0, for output, and multiplexer 243 selects CLK_12_5MHZ signal 257, input I1, for output.
Multiplexer 242 provides a transmit speed-selected MII clock ("SPEED_SEL_TX_MII_CLK") signal 260 as output. Multiplexer 243 provides a transmit speed-selected core clock ("SPEED_SEL_TX_CORE_CLK") signal 261 as output for setting the transmit speed of an EMAC. Speed select signal 253 may be used for selecting input signal 256 or 258 for the SPEED_SEL_TX_MII_CLK signal 260 output of multiplexer 242 and input signal 259 or 257 for the SPEED_SEL_TX_CORE_CLK signal 261 output of multiplexer 243.
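The divide-by-five and divide-by-ten outputs of counters 240 and 241 can be modeled behaviorally. A Johnson counter of n stages cycles through 2n states, so it divides its input clock by 2n directly, and decoding its states on both clock edges yields odd ratios such as divide-by-five at 50% duty. The sketch below is a behavioral model only (toggling on every nth input edge), not the counter circuit itself:

```python
def divide_clock(input_levels, n):
    """Divide a clock by n by toggling the output every n input edges.
    Counting both rising and falling edges lets odd ratios such as
    divide-by-five keep a 50% duty cycle."""
    out, level, edges, prev = [], 0, 0, input_levels[0]
    for lvl in input_levels:
        if lvl != prev:        # any edge on the input clock
            edges += 1
            if edges == n:
                level ^= 1     # toggle the divided output
                edges = 0
        prev = lvl
        out.append(level)
    return out

# 125 MHz reference modeled as alternating levels, two samples per cycle.
ref = [i % 2 for i in range(200)]
clk_25mhz = divide_clock(ref, 5)     # 125 MHz / 5  = 25 MHz
clk_12_5mhz = divide_clock(ref, 10)  # 125 MHz / 10 = 12.5 MHz
```

Counting output cycles against input cycles over a long enough window recovers the 5:1 and 10:1 ratios.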
SPEED_SEL_TX_MII_CLK signal 260 is provided to a logic high input of multiplexer 245. SPEED_SEL_TX_CORE_CLK signal 261 is provided to a logic high input of multiplexer 246.
An MII transmit clock ("MII_TX_CLK") signal 267 is provided to multiplexer 250, multiplexer 245, and divider 244. MII transmit clock signal 267 is provided to multiplexers 250 and 245 at respective logic high inputs thereof. Divider 244 provides a divide-by-two clock ("MII_TX_CLK_DIV2") signal 262 to a logic high input of multiplexer 246.
A Serial Gigabit or Reduced Media Independent Interface ("SRGMII") select signal 254 is provided to multiplexer 245 and to multiplexer 246 as a control input. Responsive to SRGMII select signal 254 being a logic 0, multiplexer 245 selects MII_TX_CLK signal 267 for output and multiplexer 246 selects MII_TX_CLK_DIV2 signal 262 for output. Responsive to SRGMII select signal 254 being a logic 1, multiplexer 245 selects SPEED_SEL_TX_MII_CLK signal 260 for output and multiplexer 246 selects SPEED_SEL_TX_CORE_CLK signal 261 for output.
Multiplexer 245 provides an internal MII transmit clock ("INT_MII_TX_CLK") signal 278 as output. Multiplexer 246 provides an SRGMII transmit core clock ("TX_CORE_CLK_SRGMII") signal 266 as output. Internal clock signal 278 is provided to a logic low input of multiplexer 247. Transmit clock signal 266 is provided to a logic low input of multiplexer 248.
A speed select ("SPEED_IS_1000") signal 255 is provided to multiplexer 247 and to multiplexer 248 as a control input. Responsive to speed select signal 255 being a logic 1, multiplexers 247 and 248 both select clock signal 222 for output. Responsive to speed select signal 255 being a logic 0, multiplexer 247 selects INT_MII_TX_CLK signal 278 for output and multiplexer 248 selects TX_CORE_CLK_SRGMII signal 266 for output. Alternatively, speed select signal 255 may be referred to as SPEED_IS_10_100, with the logic high and low inputs of multiplexers 247 and 248 reversed in FIG.
2E.

Multiplexer 247 provides a transmit output clock ("TX_GMII_MII_CLK_OUT") signal 277 as output. Multiplexer 248 provides an internal transmit core clock ("INT_TX_CORE_CLK") signal 275 as output. Internal core clock signal 275 is provided to a buffer 249 as input. Buffer 249 provides a transmit client output clock ("TX_CLIENT_CLK_OUT") signal 276 as output.

An overclocking mode select signal 270 is provided to multiplexer 250 as a control input. Responsive to overclocking mode select signal 270 being a logic 1, multiplexer 250 selects MII transmit clock ("MII_TX_CLK") signal 267 for output therefrom. Responsive to overclocking mode select signal 270 being a logic 0, multiplexer 250 selects a logic 0, tied to a logic low input of multiplexer 250, for output. Output of multiplexer 250 is a divided by two transmit client clock ("TX_CLIENT_DIV2_CLK") signal 272, which may be disabled by selecting the input of multiplexer 250 tied to ground.

A PCS/PMA mode select ("PCS_PMA") signal 271 is provided to multiplexer 251 as a control input. Responsive to PCS/PMA select signal 271 being a logic 1, multiplexer 251 selects clock signal 222 for output. Responsive to PCS/PMA select signal 271 being a logic 0, multiplexer 251 selects TX_GMII_MII_CLK_IN signal 265 for output. Multiplexer 251 provides a GMII/MII transmit clock ("TX_GMII_MII_CLK") signal 273 as output.

A transmit client clock ("TX_CLIENT_CLK_IN") signal 269 is provided to a buffer 252. Buffer 252 provides a transmit core clock ("TX_CORE_CLK") signal 274 as output.

Notably, in an implementation of EMAC 110, EMAC 110 is a tri-mode MAC, namely, frequency of operation may be switched on the medium from approximately 1000, to 100, to 10 Mb/s. This translates into switching the system clock. Host interface 112 handles this switching. To control switching of clocks so as to avoid creating an unwanted pulse, clocks are only switched during a low period of the clocks.
For this switching in an exemplary implementation, multiplexers 242, 243, 245, 246, 247, 248, 250, and 251 may be what is known as on-chip global buffer multiplexers, an example of which is described with reference to FIG. 2F-1.

FIG. 2F-1 is a schematic diagram depicting an exemplary embodiment of an on-chip global buffer multiplexer 99. A select signal 21 is provided to inverter 11 and to an input of an AND gate 17. Inverter 11 provides an inverted version of signal 21 to an input of an AND gate 14. Another input to AND gate 14 is provided by an inverter 12. Another input to AND gate 17 is provided by an inverter 13.

AND gate 14 provides an input data A signal ("dataA_in") 22 as output. AND gate 17 provides an input data B signal ("dataB_in") 25 as output. Input data A signal 22 is provided to a data input of a register 15. Input data B signal 25 is provided to a data input of a register 18.

A clock A signal ("clockA") 23 is provided to a clock input of register 15 and to an input of an AND gate 16. A clock B signal ("clockB") 26 is provided to a clock input of register 18 and to an input of an AND gate 19. Register 15 provides a register A signal ("Areg") 24 as output. Register 18 provides a register B signal ("Breg") 27 as output. Areg signal 24 is provided to another input of AND gate 16. Breg signal 27 is provided to another input of AND gate 19.

AND gate 16 provides ANDed Areg signal 24 and clockA signal 23 to an input of an OR gate 20. AND gate 19 provides ANDed Breg signal 27 and clockB signal 26 to another input of OR gate 20. OR gate 20 provides an output clock signal ("outputClock") 28 as output of on-chip global buffer multiplexer 99.

FIG. 2F-2 is a schematic diagram depicting an exemplary embodiment of a divider circuit 98, which may be used in an implementation for divider 244 of FIG. 2E. A clock signal 41 is provided to a clock input of a register 32. A data input to register 32 is provided by an inverter 31.
Register 32 provides a register output signal ("Reg1_Out") 42 as output. Register output signal 42 is provided to inverter 31 and to buffer 33 as input. Buffer 33 provides a divided by 2 clock signal ("CLK_DIV2") 43 as output of divider 98.

FIG. 2G is a block/schematic diagram depicting an exemplary embodiment of a receive clock generator 124R. Receive clock generator 124R is a part of clock generator 124. Receive clock generator 124R includes multiplexers 285, 286, 287, and 288, divider 284, buffers 283 and 291, and OR gate 281. In an exemplary implementation, multiplexers 285, 286, 287, and 288 are on-chip global buffer multiplexers, an example of which is illustratively shown in FIG. 2F-1, and divider 284 may be a divider as illustratively shown in FIG. 2F-2.

Overclocking mode signal 270 is provided as an input to an OR gate 281. PCS/PMA mode signal 271 is provided as another input to OR gate 281. OR gate 281 outputs ORed overclocking mode signal 270 and PCS/PMA mode signal 271 as a mode select ("OVERCLOCKING_OR_PCS_PMA") signal 296. Select signal 296 is provided to a multiplexer 288 and to a multiplexer 286 as control input.

Clock signal 222 is provided to multiplexer 288 at a logic 1 input thereof. A receive clock ("RX_CLK") signal 295 is provided to multiplexer 288, divider 284, multiplexer 285, and multiplexer 287 as an input. Receive clock signal 295 is provided to multiplexers 285 and 288 at respective logic 0 inputs and to multiplexer 287 at a logic 1 input. Responsive to select signal 296 being in a logic 1 state, multiplexer 288 selects clock signal 222 for output. Responsive to select signal 296 being in a logic 0 state, multiplexer 288 selects receive clock signal 295 for output. Multiplexer 288 provides an internal receive GMII/MII clock ("INT_RX_GMII_MII_CLK") signal 279 as output.

Internal receive GMII/MII clock signal 279 is provided to a buffer 283 as input.
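Returning to divider circuit 98 of FIG. 2F-2: it is a toggle flip-flop, since inverter 31 feeds register 32's own output back to its data input, so the output toggles once per active clock edge and runs at half the input clock frequency. A minimal Python sketch of that behavior (illustrative, not the RTL):

```python
def divide_by_two(num_clock_edges):
    """Toggle-flip-flop model of divider 98 (FIG. 2F-2): register 32's
    data input is its inverted output, so Reg1_Out toggles on every
    active edge of clock 41, producing a clock at half the frequency."""
    reg1_out = 0
    clk_div2 = []
    for _ in range(num_clock_edges):  # one iteration per edge of clock 41
        reg1_out ^= 1                 # D = ~Q, so the register toggles
        clk_div2.append(reg1_out)     # CLK_DIV2 output via buffer 33
    return clk_div2
```

Four input clock edges yield two full output cycles, i.e. the sample sequence [1, 0, 1, 0].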
Buffer 283 provides a receive GMII/MII clock ("RX_GMII_MII_CLK") signal 294 as output.

Divider 284 provides a divided by two internal receive GMII/MII clock ("INT_RX_GMII_MII_CLK_DIV2") signal 280 to a logic 1 input of multiplexer 285. Recall, the other input to multiplexer 285 is receive clock signal 295.

A speed select ("SPEED_IS_10_100") signal 255 is provided to multiplexer 285 as control input. Responsive to select signal 255 being a logic 1, multiplexer 285 selects divided by two internal receive GMII/MII clock signal 280 for output. Responsive to select signal 255 being a logic 0, multiplexer 285 selects receive clock signal 295 for output. Multiplexer 285 provides a speed select receive core clock ("SPEED_SEL_RX_CORE_CLK") signal 282 as output.

Overclocking mode select signal 270 is provided to multiplexer 287 as control input. Responsive to overclocking mode select signal 270 being a logic 1, multiplexer 287 selects receive clock signal 295 for output. Responsive to overclocking mode select signal 270 being a logic 0, multiplexer 287 selects an input of multiplexer 287 tied to ground for output of a logic 0 (i.e., to disable overclocking). Multiplexer 287 provides a divided by two receive client clock ("RX_CLIENT_DIV2_CLK") signal 292 as output.

Speed select receive core clock signal 282 is provided from multiplexer 285 to a logic 0 input of multiplexer 286. Transmit core clock signal 274 is provided to a logic 1 input of multiplexer 286. Responsive to select signal 296 being in a logic 0 state, multiplexer 286 selects speed select receive core clock signal 282 for output. Responsive to select signal 296 being in a logic 1 state, multiplexer 286 selects transmit core clock signal 274 for output. Multiplexer 286 provides a receive client output clock ("RX_CLIENT_CLK_OUT") signal 293 as output.

A receive client clock input ("RX_CLIENT_CLK_IN") signal 289 is provided to a buffer 291.
Buffer 291 provides a receive core clock ("RX_CORE_CLK") signal 290 as output.

Accordingly, it should be appreciated that EMAC core 123 includes a clock generator 124 from which a clock signal is generated and a version of which generated clock signal is fed back to EMAC core 123 to account for clock signal distribution through FPGA fabric 101. Secondly, it should be appreciated that any of several modes, such as MII, GMII, SGMII, RGMII, and 1000BASE-X PCS/PMA, may be used, where transmit clock generator 124T and receive clock generator 124R portions of clock generator 124 are used for providing clock signals for transmission and reception for communicating via a network. Furthermore, clock signals for a PCS/PMA sublayer mode or an overclocking mode may be selected. Along these lines, clock generator 124 provides both EMAC core and client interface clock signals.

In an implementation, when EMAC 110 is configured for tri-mode operation or non-tri-mode operation, transmit clock speed is approximately 2.5, 25, and 125 MHz for 10, 100, and 1000 Mb/s approximate data rates, respectively. Likewise, receive clock speed is approximately 2.5, 25, and 125 MHz for 10, 100, and 1000 Mb/s approximate data rates, respectively. It should be understood that embedded EMAC 110 may be capable of operating at a faster frequency than FPGA fabric 101.

FIG. 3 is a block/schematic diagram depicting an exemplary embodiment of FPGA 100 configured for an overclocking mode. In this exemplary embodiment, a digital clock manager ("DCM") 308 is coupled to clock-input trees 211 and 301 of EMAC core 123 and is configured to provide a divide by two clock signal 305. Clock signal 221 output from clock generator 124 is input to DCM 308.
Output from DCM 308 is a divided by two clock signal 305 and an undivided or 1× clock signal 304 with respect to divided by two clock signal 305. Responsive to EMAC 110 being in an overclocking mode, such as a 16-bit overclocking mode, DCM 308 in FPGA 100 is used to provide divided by two clock signal 305. Because a DCM is used, 1× clock signal 304 and divided by two clock signal 305 are phase aligned at the output of DCM 308. Clock signal 305 of FIG. 3 may be handled as was clock signal 221, described with reference to FIG. 2C, as the same principle in the above solution applies to the divided by two clock skew.

Clock signal 304 may be input to buffer 302, and clock signal 305 may be provided to buffer 303. Output of buffer 302 may be fed back as an input to DCM 308 and may be provided as clock signal 220 to clock-input tree 211. Output of buffer 303 may be provided as clock signal 306 to FPGA fabric 101 and to clock-input tree 301. Notably, separate clock trees may be used for handling clocks of different frequencies, for example where clock signal 304 is greater than or equal to 250 MHz and clock signal 305 is greater than or equal to 125 MHz. Recall that for this example clock signal 304 is twice the frequency of clock signal 305.

FIG. 3A is a simplified block diagram depicting an exemplary embodiment of clock management for an RGMII. Though an RGMII example is used, it should be understood that an MII or GMII may be used, depending on which mode of these three MII modes is selected. However, for compliance with an interface protocol, frequency of the output signal may be specified, as described below. EMAC 110 provides a client output transmit clock 221T for RGMII logic 106R, and a buffered client input transmit clock 220T may be received. Clock output signal 2002 and clock input signal 2003, such as respective RGMII transmit and receive clock signals, may be any of a variety of frequencies as described in additional detail below herein.
Client EMAC transmit and receive input and output clocks 2004 through 2007 may be provided to a user design 2001 instantiated in programmable logic. For MII, clock frequencies for clock signals 2002 and 2003 are likewise selectable. However, for a GMII, while clock frequency of clock signal 2003 is selectable, frequency of clock signal 2002 is set to that called out in the GMII specification, such as 125 MHz, for a physical layer interface.

Host Interface

With renewed reference to FIG. 1, in embedded EMAC top 104, a host bus 118 is configured for backward compatibility with a soft EMAC core host interface. This backward compatibility allows users who have been using the soft EMAC core to use an embedded EMAC 110 without having to redesign the host interface of the soft EMAC core, thereby facilitating user migration.

FIG. 4-1 is a high-level block/schematic diagram depicting an exemplary embodiment of a host interface 112. Host bus 118 allows for a host processor to be located in FPGA 100 or external to FPGA 100. In addition, in a PowerPC 405 ("PPC405") processor core implementation, a DCR bridge 113 is implemented internal to host interface 112, so that PPC405 processor 103 residing in processor block 102 can act as a host in managing EMAC 110 configuration registers via DCR bus 114. Implementing DCR bridge 113 in processor block 102 with area-efficient standard cells facilitates making available configurable logic resources in FPGA 100 for customer applications. In addition, DCR bridge 113 in processor block 102 provides an efficient way for processor 103 in processor block 102 to act as a host processor to access host registers in EMAC core 123 through DCR bus 114. Notably, DCR bridge 113 may be internal or external to host interface 112.

In addition, a PPC405 implementation of processor 103, using DCR bridge 113, can read statistics registers implemented in FPGA fabric 101.
When DCR bus 114 is not used, host interface 112 allows a user to manage EMAC host registers via a host bus 118. Additionally, host interface 112 includes logic for processor 103 to read, via DCR bus 114 or host bus 118, statistics registers, such as may be implemented in configurable logic for accumulation of statistics, located in FPGA fabric 101.An input signal 406 to processor block 102 called "dcremacenable" is used to select the host bus type to use. Dcremacenable signal 406 is asserted to select DCR bus 114 for use as a host bus, and deasserted to select host bus 118 for use as a host bus. Dcremacenable signal 406 may be provided via a tie-off pin that can be tied to a logic value (high or low) when FPGA 100 is configured. Notably, it should be understood that if an embedded processor other than a PPC405 core were implemented, then DCR bridge 113 and DCR bus 114 may be replaced with a bridge or hub and associated busing thereof for the type of processor embedded. For example, a Northbridge may be used for interfacing to an embedded Pentium processor from Intel of Santa Clara, Calif. Furthermore, no embedded processor may be present in FPGA 100, as processor 103 is not necessary for operation of EMAC 110. Tie-off pins are provided with FPGA 100, such that a user may set values to avoid the need for a processor. Tie-off pins may be used to configure FPGA 100 as a network device, such as a router, bridge, hub, and the like for example.Furthermore, processor 103 may be used as a host processor and host bus 118 may be used in addition thereto. For example, there may be peripheral functions to be associated with EMAC 110 which peripheral functions could be instantiated in configurable logic of FPGA 100. If such peripheral functions employ register access of EMAC 110, such register access may be had via host bus 118. An example of such a peripheral function would be processing of statistics on network transmission. 
Another example of such a peripheral function would be address filtering in addition to that already provided with EMAC 110.

Host bus 118 is used to provide signals 414 and 409 to host interface 112 and to receive signal 413 from host interface 112. Dcremacenable signal 406 may be provided as a select signal to multiplexers 401 and 402. Input to logic high inputs of multiplexers 401 and 402 may include DCR selection information as between selecting one or both of EMACs 110 and 111. Outputs from multiplexers 401 and 402 may be buffered via AND gates ("buffers") 404 and 405 for providing to EMAC 110 and 111, respectively. However, only one EMAC 110 or EMAC 111 may communicate with a host device at a time, and thus outputs from EMAC 110 and 111 may be provided to multiplexer 403 for communicating via host bus 118. A select signal provided to multiplexer 403 may originate from the output of multiplexer 402.

FIG. 4-2 is a block/schematic diagram depicting an exemplary embodiment of host interface 112. Table 1 lists signal sets for FIG. 4-2. For purposes of clarity by way of example, bit lengths for an implementation are provided; however, the particular bit lengths need not be implemented, as other bit lengths may be used.
Moreover, logic equations are described in Verilog Register Transfer Level ("RTL").

TABLE 1
Signal Set  Signals
(1)   dcrClk, dcrABus[8:9], dcrWrite, dcrRead, dcrWrDBus[0:31], dcrAck, dcrRdDBus[0:31]
(2)   dcr_hostAddr[9:0], dcr_hostOpCode[1:0], dcr_hostMIIMsel, dcr_hostReq, dcr_hostWrData[31:0], dcr_AddrFilRd, dcr_AddrFilWr, dcr_AFcamWr, dcr_AFcamRd
(3)   HOST_ADDR[9:0], HOST_MIIM_SEL, HOST_OPCODE[1:0]
(4)   hostAddr[9:0], hostOpcode[1:0], hostMIIMsel, hostReq, hostWrData[31:0], hostAddrFilRd, hostAddrFilWr, hostAFcamRd
(5)   HOST_ADDRe0[9:0], HOST_OPCODEe0[1:0], HOST_MIIM_SELe0, HOST_REQe0, HOST_WR_DATAe0[31:0], HOST_AddrFilRdE0, HOST_AddrFilWrE0, host_AFcamRdE0
(6)   HOST_ADDRe1[9:0], HOST_OPCODEe1[1:0], HOST_MIIM_SELe1, HOST_REQe1, HOST_WR_DATAe1[31:0], HOST_AddrFilRdE1, HOST_AddrFilWrE1, host_AFcamRdE1
(7)   hostAddr[9:0], hostReq, hostMIIMsel, hostOpcode[1:0]
(8)   AFcfgRdEn, AFcfgWrEn, AFcfgCAMrdEn
(9)   {16'h0000, dcr_hostReq, dcr_hostOpcode[1:0], 2'b00, dcr_emac1Sel, dcr_hostAddr[9:0]}
(10)  (dcr_StatsRdEn & dcremacenable)

Host interface 112 uses two clock signals, namely, a DCR clock ("dcrClk") signal 516 (shown in FIG. 4-5A) and a host clock ("HOST_CLK") signal 440. The dcrClk signal 516 runs at the same clock frequency as the system clock for processor 103. DCR bridge 113 uses both dcrClk signal 516 and HOST_CLK signal 440. HOST_CLK signal 440 comes from a host device coupled to host bus 118 and is part of signal set (3). Signals 414 include signal set (3), HOST_CLK signal 440, a host request signal, and a host write data signal 438. HOST_CLK signal 440 is used to interface to host registers in EMAC core 123.

Signal set (1) is provided via DCR bus 114 to and from DCR bridge 113. From signal set (1), it should be understood that DCR bus 114 contains only two least significant address bits.
This is because a central DCR address decoding unit is implemented in processor block 102 and DCR bridge 113 uses only four DCR registers in this exemplary implementation. The central DCR address decoding unit decodes the DCR address bus ("dcrABus[0:7]") signal from processor 103 and, in conjunction with DCR read and DCR write signals, generates DCR write or DCR read signals if the address is targeted to DCR bridge 113.

DCR bridge 113 converts the DCR commands in a dcrClk domain into host bus signals in a HOST_CLK domain for output, namely, DCR bridge output signals are dcr_emac1Sel 411 and signal set (2), generally referred to as signals 412. Dcremacenable signal 406 is provided as a control select input to multiplexers 401 and 402. Dcremacenable signal 406 is used to select which host bus to use, namely, either host bus 118 or DCR bus 114. The selected host bus signals are emac1Sel 400 and signal set (4), generally indicated as signals 410, namely, the outputs of multiplexers 402 and 401, respectively. Input to multiplexer 401 is signal set (2), which is also provided to bus 443. Other signals input to multiplexer 401 are signals 414. Input to multiplexer 402 is dcr_emac1Sel signal 411 and Host_emac1Sel signal 409. Notably, there is a one-to-one correspondence of same signal inputs between inputs to multiplexers 401 and 402 from DCR bridge 113 and host bus 118.

Responsive to emac1Sel signal 400 being a logic 1, host bus signals 410 are directed to EMAC 111, and directed to EMAC 110 responsive to emac1Sel signal 400 being a logic 0. In an exemplary implementation, signal emac1Sel 400 may be address bit [10] of host bus 118. Output 410 from multiplexer 401 is provided as input to buffers 404 and 405. Output 400 from multiplexer 402 is provided as input to buffer 405 and logic block 429, and inverted then provided as input to buffer 404.
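The steering through AND-gate buffers 404 and 405 just described can be sketched behaviorally. The Python below is an illustrative model, not the RTL; the deselected EMAC's inputs are gated to logic 0 by the AND gates.

```python
def steer_host_bus(emac1sel, bus_signals):
    """Model of buffers 404 and 405: emac1Sel enables buffer 405 toward
    EMAC1 directly and, inverted, enables buffer 404 toward EMAC0, so
    exactly one EMAC sees the host bus signals at a time."""
    to_emac0 = bus_signals if emac1sel == 0 else 0  # buffer 404 output
    to_emac1 = bus_signals if emac1sel == 1 else 0  # buffer 405 output
    return to_emac0, to_emac1
```

With emac1Sel low the bus value reaches EMAC0 and EMAC1 sees zeros; with emac1Sel high the roles reverse.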
Signal set (5) is host bus signals output from buffer 404 to EMAC 110, and signal set (6) is host bus signals output from buffer 405 to EMAC 111. Host bus 118 may be coupled to host interface logic, which logic is described in additional detail in the above-referenced soft EMAC core.

Logic block 421 contains address decoding for address filter host registers read and write enable and address filter content addressable memory ("CAM") read enable. Notably, though the term CAM is used herein, an actual CAM may or may not be implemented. Storage for multicast addresses may be in the form of registers, for example, namely, multicast address registers ("MARs"). Accordingly, the terms CAM and MAR should be considered interchangeable.

Signal set (3) includes inputs and signal set (8) includes outputs of logic block 421. Thus, only a portion of signals 414 are provided to logic block 421. The address filter CAM write signal is the same signal as the host registers write signal, but the address filter CAM read signal uses a separate signal, AFcfgCAMrdEn of signals 439, because the CAM read is an added function to address filter read logic 422.
The address decode and read enable or write enable signals for host address bus 118 are provided via host interface 112 because DCR bridge 113 generates those read enable or write enable signals, and symmetry is maintained between DCR bridge 113 signals and host bus 118 signals.

Below is a code listing for an exemplary embodiment of address decode logic equations for address filter host registers read or write and CAM read enable for logic block 421, where the logic equations are in Verilog RTL:

assign AFcfgAddrDec = (HOST_ADDR[9] & HOST_ADDR[8] & HOST_ADDR[7] &
                       HOST_ADDR[6] & HOST_ADDR[5] & HOST_ADDR[4]) |
                      (HOST_ADDR[9] & HOST_ADDR[8] & HOST_ADDR[7] &
                       HOST_ADDR[6] & HOST_ADDR[5] & HOST_ADDR[4]);
assign AFcfgRdEn = AFcfgAddrDec & (~HOST_MIIM_SEL & HOST_OPCODE[1]);
assign AFcfgWrEn = AFcfgAddrDec & (~HOST_MIIM_SEL & ~HOST_OPCODE[1]);
assign AFcfgCAMaddrDec = (HOST_ADDR[9] & HOST_ADDR[8] & HOST_ADDR[7]) &
                         (HOST_ADDR[6] & ~HOST_ADDR[5] & HOST_ADDR[4]) &
                         (HOST_ADDR[3] & HOST_ADDR[2] & HOST_ADDR[1]) &
                         HOST_ADDR[0];
assign AFcfgCAMrdEn = AFcfgCAMaddrDec & (~HOST_MIIM_SEL & ~HOST_OPCODE[1] &
                                         HOST_WR_DATA[23]);

Dcremacenable signal 406 and host statistics read data enable signal 420 are input to AND gate 411, the output of which is a control select input to multiplexers 435 and 436. Configuration address filter read enable, MIIM read enable, MIIM write enable, and host clock signals 463 are input to logic block 429, along with signal 400, to provide emac1 select register signal 469 as output, which is a control select to multiplexer 428. Dcremacenable signal 406 activates DCR bus access.

Logic block 429 uses decoded read command signals to generate emac1SelReg signal 469 to keep the read data return path open for the selected EMAC until another read command.
This is used because each type of read returns data with different timing.

Below is a code listing for an exemplary embodiment of logic equations for logic block 429, where the logic equations are in Verilog RTL:

always @(posedge HOST_CLK)
  if (HOST_RESET)
    emac1SelReg <= 1'b0;
  else if (cfg_AFrdEn | MIIMrdEn | MIIMwrEn)
    emac1SelReg <= emac1Sel;
  else
    emac1SelReg <= emac1SelReg;

Inputs to multiplexer 428 are host MIIM ready EMAC1, read data EMAC1 [31:0], and address filter read data EMAC1 [47:0] signals 468, and host MIIM ready EMAC0, read data EMAC0 [31:0], and address filter read data EMAC0 [47:0] signals 487. Output of multiplexer 428 is host MIIM ready, host read data [31:0], and host address filter read data [47:0] signals 461. A portion of the host address filter read data signal, namely, the most significant 16 bits, is provided as host address filter read data [47:32] signal 462 as an input to multiplexer 427. Another portion of the host address filter read data signal, namely, the least significant 32 bits, is provided as host address filter read data [31:0] signal 455 as an input to a port of multiplexer 454. Provided to another port of multiplexer 454 is host read data [31:0] signal 437. Host read data [31:0] signal 437 is provided to a logic low input port of multiplexer 436. Host MIIM ready signal 408 is provided to a logic low input port of multiplexer 435. Host address filter read data [47:0] signal 434 is provided as an input to DCR bridge 113. From signals 414, host MIIM select signal 450 is input to a logic high port of multiplexer 435, and host write data [31:0] signal 438 from bus 442 is input to a logic high port of multiplexer 436. Output from multiplexer 435 is host MIIM ready bridge input signal 432 and is provided to DCR bridge 113. Output from multiplexer 436 is host read data bridge input [31:0] signal 433 and is provided to DCR bridge 113.

Logic block 431 includes address decoding for host register read, statistics register read, and MII Management ("MIIM") interface host register read.
Signal set (7) lists inputs to logic block 431, generally indicated as signal 491. Outputs of logic block 431 are statistics read enable, configuration read enable, MIIM read enable, MIIM write enable, and DCR statistics read enable signals 467. EMAC register read select logic 430 receives statistics read enable, configuration read enable, and MIIM read enable signals 446, as well as host clock signal 440 and host MIIM ready signal 408, and provides host read data enable and host statistics read data enable signals 464. Notably, the different types of read signals are distinguished from one another because each type of read returns data with different timing.

Below is a code listing for an exemplary embodiment of address decode logic equations for logic block 431, where the logic equations are in Verilog RTL:

assign StatsReg = (hostAddr[9:4] == 6'b00_0000) | (hostAddr[9:4] == 6'b00_0001) |
                  (hostAddr[9:4] == 6'b00_0010) | (hostAddr[9:4] == 6'b00_0100);
assign configReg = (hostAddr[9:8] == 2'b10) | (hostAddr[9:7] == 3'b11_0);
assign StatsRdEn = StatsReg & ~hostMIIMsel & hostReq;
assign dcr_StatsRdEn = dcr_statAdrDecReg & ~hostMIIMsel & hostReq;
assign configRdEn = configReg & ~hostMIIMsel & hostOpcode[1];
assign MIIMrdEn = hostMIIMsel & hostOpcode[1] & ~hostOpcode[0] & hostReq;
assign MIIMwrEn = hostMIIMsel & ~hostOpcode[1] & hostOpcode[0] & hostReq;

With continuing reference to FIG. 4-2, emacRegRdSel logic block 430 generates signals to steer read data to the proper datapath. When a HOST_RdDen signal of signals 464 is asserted, the read data is from a host register in an embedded EMAC, either EMAC 110 or 111 in this example. When the HOST_statsRdDen signal is asserted, the read data is from a statistics register implemented in FPGA fabric 101.

FIG. 4-3 is a state diagram depicting an exemplary embodiment of a state machine 457 for emacRegRdSel logic block 430.
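The decode equations above can be exercised with a small behavioral model. The Python below mirrors them under assumed polarities (hostMIIMsel low for register accesses; a 2-bit hostOpcode of 2'b10 for an MIIM read and 2'b01 for an MIIM write); it is a sketch, not the RTL:

```python
def decode_host_read(host_addr, host_miim_sel, host_opcode, host_req):
    """Behavioral model of the address decode in logic block 431.

    host_addr is the 10-bit host address; host_opcode is the 2-bit opcode.
    Returns the decoded enables as booleans."""
    stats_reg = ((host_addr >> 4) & 0x3F) in (0b000000, 0b000001, 0b000010, 0b000100)
    config_reg = (((host_addr >> 8) & 0b11) == 0b10
                  or ((host_addr >> 7) & 0b111) == 0b110)
    return {
        "StatsRdEn": bool(stats_reg and not host_miim_sel and host_req),
        "configRdEn": bool(config_reg and not host_miim_sel and (host_opcode & 0b10)),
        "MIIMrdEn": bool(host_miim_sel and host_opcode == 0b10 and host_req),
        "MIIMwrEn": bool(host_miim_sel and host_opcode == 0b01 and host_req),
    }
```

For example, address 0x020 with a read opcode and a request decodes as a statistics register read, while address 0x250 decodes as a configuration register.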
Responsive to reset signal 474 being asserted, state machine 457 goes to idle state 472. Responsive to statistics read enable signal being at a logic high state and configuration read enable signal and MIIM read enable signal being at a logic low state of signals 446, state machine 457 transitions from idle state 472 to state S1 475. Outputs of state machine 457 for the host read data enable signal and the host statistics read data enable signal outputs 464 from emacRegRdSel logic block 430 are set forth below in Table 2.

State machine 457 transitions from idle state 472 to state C1 473 when statistics read enable signal and MIIM read enable signal are both at a logic low state and configuration read enable signal is at a logic high state. State machine 457 transitions from idle state 472 to state M1 470 responsive to statistics read enable signal and configuration read enable signal being at a logic low state and MIIM read enable signal being at a logic high state.

State machine 457 stays in state M1 470 responsive to host MIIM ready signal 408 not being asserted, and transitions from state M1 470 to state M2 471 responsive to host MIIM ready signal 408 being asserted. All other transitions occur responsive to host clock signal 440, namely, transitioning from state M2 471 to idle state 472, transitioning from state C1 473 to idle state 472, and transitions from state S1 475 to state S2 476 to state S3 477 to state S4 478 to state S5 479 to state S6 480 to state S7 481 and back to idle state 472.

In Table 2 are state machine 457 outputs for signals 464 for each of the states in FIG. 4-3.

TABLE 2
State  HOST_RdDen  HOST_statsRdDen
IDLE   0           0
S1     0           0
S2     0           0
S3     0           0
S4     0           0
S5     0           0
S6     1           1
S7     1           1
C1     1           0
M1     1           0
M2     1           0

Returning to FIG.
4-2, address filter read logic block ("AFrd") 422 generates read datapath control signals 441, namely, a host address filter least significant word read enable ("HOST_AFlswRdEn") signal and a host address filter most significant word read enable ("HOST_AFmswRdEn") signal, for reading CAM data in address filter 129. In an embodiment, CAM data is 48 bits long. Address filter read logic block 422 is clocked responsive to host clock signal 440. Output from logic block 421, namely, address filter configuration read enable and address filter configuration CAM read enable signals 439, are input to address filter read logic block 422.

Responsive to HOST_AFlswRdEn being asserted, CAM read data [31:0] is output to a read bus to host bus 118, namely, HOST_RD_DATA[31:0] 445. In the next host clock 440 cycle, HOST_AFmswRdEn is asserted, and read data [47:32] is output to HOST_RD_DATA[15:0] of host read data bus 445. Outputting the least significant word first, followed in the next host clock cycle by the most significant word, is for consistency with reading statistics registers. For this, the read data from hostAddrFilRdD[47:32] 462 is registered for one host clock cycle of delay in outputting. This may be done by providing an address filter configuration CAM read enable register signal 460 as a control select input to multiplexer 427, having host address filter read data [47:32] signal 462 as one set of logic high data inputs and feeding back address filter read data CAM most significant word register ("AFrdDcamMSWreg[15:0]") signal 459 as a set of logic low data inputs to multiplexer 427. Output from multiplexer 427 is provided to register 426. Register 426 is clocked responsive to host clock signal 440. Output of register 426 is AFrdDcamMSWreg[15:0] signal 459.

FIG. 4-4 is a state diagram depicting an exemplary embodiment of a state machine 447 for address filter read logic block 422. State machine 447 transitions to idle state 483 responsive to reset signal 474.
From idle state 483, state machine 447 transitions to address filter read state 482 responsive to address filter configuration read enable signal of signals 439 being asserted. From address filter read state 482, state machine 447 transitions back to idle state 483 responsive to the next host clock cycle. State machine 447 transitions from idle state 483 to address filter CAM state 1 484 responsive to address filter configuration CAM read enable signal of signals 439 being asserted. From address filter CAM state 1 484, state machine 447 transitions to address filter CAM state 2 485 responsive to a next host clock cycle. From address filter CAM state 2 485, state machine 447 transitions back to idle state 483 responsive to a subsequent host clock cycle. State machine 447 stays in idle state 483 if neither of signals 439 is asserted.

In Table 3 are state machine 447 outputs for signals 441 for each of the states in FIG. 4-4.

TABLE 3
State  HOST_AFlswRdEn  HOST_AFmswRdEn
IDLE   0               0
AFR    1               0
AFC1   1               0
AFC2   0               1

Read data ("RdDe#[31:0]") and address filter read data ("AddrFilRdDe#[47:0]"), where # is a 0 or 1 respectively for EMAC 110 and EMAC 111, are provided to multiplexer 428, along with MIIM read done signal ("HOST_MIIM_RDY#"), where # is a 0 or 1 respectively for EMAC 110 and EMAC 111. Multiplexer 428 output is selected responsive to emac1SelReg signal 469.
Thus, RdDe0[31:0] contains the read data from the EMAC0 host registers and AddrFilRdDe0[47:0] contains the read data from the EMAC0 address filter 129, and RdDe1[31:0] contains the read data from the EMAC1 host registers and AddrFilRdDe1[47:0] contains the read data from the EMAC1 address filter. Responsive to emac1SelReg signal 469 being at a logic high state, the read data set from EMAC1 is selected, and responsive to emac1SelReg signal 469 being at a logic low state, the read data set from EMAC0 is selected.

AFcfgCAMrdEnReg signal 460 is the registered version of the AFcfgCAMrdEn signal of signals 439. In an implementation, because read data bus 445 of host bus 118 is only 32 bits wide, hostAddrFilRdD[47:32] 462 is registered and output in the next host clock cycle. Again, for a data set, the least significant word is output first and immediately followed by the most significant word of the data set on the following host clock cycle, so that the read timing for an address filter, such as address filter 129, is consistent with the read timing of statistics registers.

When embedded processor 103 is used as a host processor, host bus 118 is not used for communicating with a host processor. Hence, host bus 118 I/O pins may be re-used in a different way to read statistics registers implemented in FPGA fabric 101. This re-use of I/O pins facilitates interfacing FPGA fabric 101 to ASIC and other embedded logic in processor block 102 using the limited number of I/O pins available in processor block 102.

Data signals 437 and 455 are input to multiplexer 454 along with signals from bus 457, namely, 16 bits of padding coupled to ground 458 or other fixed logic low value and 16 bits from signal 459. Output from multiplexer 454 is selected responsive to a three-bit wide control select input from host read data enable, host address filter least significant word read enable, and host address filter most significant word read enable signals 456.
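The LSW-first ordering of a 48-bit CAM read over the 32-bit read bus, as described above, can be illustrated with a short behavioral sketch (the helper name is an assumption, not part of the design):

```python
def cam_read_beats(cam_data_48):
    """Split a 48-bit CAM word into the two host-bus beats described above:
    LSW first (bits [31:0]), then MSW (bits [47:32]) on the next host clock."""
    lsw = cam_data_48 & 0xFFFF_FFFF
    msw = (cam_data_48 >> 32) & 0xFFFF
    return [lsw, msw]
```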
Host read data [31:0] signal 452 output from multiplexer 454 is input to a logic low port of multiplexer 423. Signal set (10), generally indicated as signal 448, is provided as a control select input to multiplexers 423 and 424. Signal 412, in addition to logic zero padding 444 provided to bus 443, is provided to a logic high input port of multiplexer 423. Input to a logic high port of multiplexer 424 is select signal 451, and input to a logic low port of multiplexer 424 is ready signal 408. Output from multiplexer 423 is host read data [31:0] signal 445, and output from multiplexer 424 is host MIIM ready signal 446. Outputs from multiplexers 423 and 424 may be bussed outputs 413 of host bus 118.

DCR bridge 113 translates DCR commands into host read signals, namely, signal set (9) and dcr_hostMIIMsel 451, for output to statistics registers. Signal set (9) uses the output pins for HOST_RD_DATA[31:0] 455 and dcr_hostMIIMsel 451 uses the output pin for HOST_MIIM_RDY 446 for read commands output instead of returning read data and a MIIM read done signal, respectively.

In an exemplary implementation, bit assignments on HOST_RD_DATA[31:0] 445 for translated DCR read command output signals for statistics registers read, and HOST_MIIM_RDY 446 output pin usage, are:

  HOST_RD_DATA[31:16] = 16'h0000
  HOST_RD_DATA[15]    = HOST_REQ
  HOST_RD_DATA[14:13] = HOST_OPCODE[1:0]
  HOST_RD_DATA[12:11] = 2'b00
  HOST_RD_DATA[10]    = HOST_emac1Sel
  HOST_RD_DATA[9:0]   = HOST_ADDR[9:0]; and
  HOST_MIIM_RDY       = used as HOST_MIIM_SEL.

Statistics read data and read done signals are returned via input pins HOST_WR_DATA[31:0] 438 and HOST_MIIM_SEL signal 450, respectively. In an exemplary implementation, HOST_WR_DATA[31:0] 438 and HOST_MIIM_SEL signal 450 input pin usage for statistics register read via DCR bridge 113 is:

  HOST_WR_DATA[31:0] = used as HOST_RD_DATA[31:0]; and
  HOST_MIIM_SEL      = used as HOST_MIIM_RDY.

Thus, it should be appreciated that pins for signals 445 and 446 are used for read busing of host configuration registers and for write busing of statistics registers.
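The HOST_RD_DATA[31:0] bit assignment above can be expressed as a packing function. This is an illustrative sketch; the function and parameter names are assumptions, and only the field positions are taken from the assignment listed above.

```python
# Sketch of the translated DCR read-command encoding on HOST_RD_DATA[31:0]:
#   [31:16]=16'h0000, [15]=HOST_REQ, [14:13]=HOST_OPCODE[1:0],
#   [12:11]=2'b00, [10]=HOST_emac1Sel, [9:0]=HOST_ADDR[9:0].

def pack_host_rd_data(host_req, host_opcode, host_emac1sel, host_addr):
    assert 0 <= host_opcode <= 3 and 0 <= host_addr <= 0x3FF
    return ((host_req & 1) << 15) | ((host_opcode & 3) << 13) \
         | ((host_emac1sel & 1) << 10) | (host_addr & 0x3FF)
```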
Notably, these pins for signals 445 and 446 do not need to be used just for statistics registers, but may be used to access any registers instantiated in FPGA fabric 101. For example, by re-using processor block 102 I/O pins, the PPC405 processor can act as a host processor to perform all host-processor management functions, whether embedded in FPGA 100 or external to FPGA 100. When DCR bus 114 is not used as a host bus, host bus 118 may be used to access host registers in EMAC core 123. Again, host bus 118 allows a host processor to reside in FPGA 100 or be external to FPGA 100.

DCR Bridge

FIG. 4-5A is a block/schematic diagram depicting an exemplary embodiment of DCR bridge 113. DCR bridge 113 translates PPC405 processor 103 commands into host bus 118 signals for processor 103 to operate as a host processor in managing host registers of EMACs 110, 111 and in reading statistics from statistics registers implemented in FPGA fabric 101. Table 4 lists signal sets for DCR bridge 113.

TABLE 4
  Signal Set  Signals
  (11)        dcrClk, HOST_CLK, HOST_RESETreg, HOST_RESET
  (12)        dcrClk, HOST_RESET, dcrRdEn_ack, dcrWrEn_ack
  (13)        dcrClk, HOST_CLK, HOST_RESET, cntlRegWrEn, dataRegLSW[23:0], cntlReg[15:0], hostMIIMrdy, samplecycle
  (14)        dcr_emac1sel, dcr_hostOpCode[1:0], dcr_hostMIIMsel, dcr_hostReq, dcr_AddrFilRd, dcr_AddrFilWr, dcr_AddrFilRdSel, dcr_hostAddr[9:0], dRegMSWwe_eRd, dRegLSWwe_eRd, MIIMwrDataWE, MIIMwrDataRE, IRstatusWE, IRstatusRE, IRenableWE, IRenableRE, MIIMwrDataSel, configWr, configWrDone, configRd, configRdDone, AddrFilWr, AddrFilWrDone, AddrFilRd, AddrFilRdDone, MIIMwr, MIIMwrDone, MIIMrd, MIIMrdDone, StatsRd, StatsRdDone, dRegLSWwe_cfg, dRegLSWwe_Stats, dRegMSWwe_Stats, dRegLSWwe_miim, dRegMSWwe_AF, dRegLSWwe_AF, dcr_AFcamRd, dcr_AFcamRdSel, dcr_AFcamWr, AFcamRdDone, AFcamWrDone
  (15)        dcrClk, HOST_RESET, dcrRdEn_ack, dcrRdEn_neg

Host
address filter read data signal 434, which in an implementation may be a 48-bit wide signal, and host read data signal 437, which in an implementation may be a 32-bit wide signal, are part of host interface 118. A portion of the bits of read data signal 434, such as bits [47:32], may be provided to bus 696, and other bits, such as 16 other bit lines coupled to ground 458, may be provided to bus 696 to provide padding for a bus width, such as a 32-bit width. Bus 696 may be coupled to a logic high input port of multiplexer 490. Another portion of the bits of read data signal 434, such as bits [31:0], may be input to a logic high input port of multiplexer 491. Read data signal 437 may be input to respective logic low input ports of multiplexers 490 and 491.

A DCR address filter CAM read select signal 511 may be provided as an input signal to multiplexer 490 to select between inputs to provide read data MSW signal 513, which may be a 32-bit wide signal, as an output. Select signal 511 and a DCR address filter read select signal 512 may be logically ORed to provide a control select input to multiplexer 491 to provide a read data LSW signal 695, which may be a 32-bit wide data signal.

MSW output from multiplexer 490 may be input to a logic low port of multiplexer 493. LSW output from multiplexer 491 may be provided to a logic low input port of multiplexer 492. Input to a logic high port of multiplexer 492 may be a read data host interface ("IF") register signal 539, which may be a 32-bit wide signal and which may be obtained from output of multiplexer 509. A host register read enable signal 517 may be provided as a control select signal to multiplexer 492 to provide an output therefrom to a logic low input port of multiplexer 494.

DCR write data bus 514, which may be a 32-bit wide data bus, may be provided to respective logic high input ports of multiplexers 493 and 494, as an input to control register 500, and to a logic low input port of multiplexer 495.
A logic high input port of multiplexer 495 may be coupled to ground 458, and a host register access start signal 519 may be provided as a control select input to multiplexer 495 to provide an output therefrom to a logic low input port of multiplexer 496. A logic high input port of multiplexer 496 may be coupled to a logic high voltage level 697, and a host register access done signal 520 may be provided as a control select input to multiplexer 496.

An MSW input write enable signal 515 may be input as a control select signal to multiplexer 493, and an LSW input write enable signal 518 may be input as a control select signal to multiplexer 494. Output from multiplexer 493 is input to MSW data register 497. Output from multiplexer 494 is input to LSW data register 498. Output from multiplexer 496 is input to ready status register 499.

Registers 497 through 500 may each be 32-bit wide registers clocked responsive to DCR clock signal 516. Outputs of registers 497 through 500 are provided to multiplexer 698, which is coupled to receive select signals 525, where select signals 525 include a data register MSW read enable signal, a data register LSW read enable signal, a ready status read enable signal, and a control register read enable signal for respectively selecting input from registers 497 through 500 for output from multiplexer 698. Output from multiplexer 698 is DCR read data signal 526, which may be a 32-bit wide signal.

To provide a bypass mode, read data signal 526 may be input to a logic high input port of multiplexer 507, and input to a logic low input port of multiplexer 507 may be DCR write data bus 514. A DCR read signal 528 and a DCR read output enable signal 529 may be ANDed by AND gate 306, the output from which may be provided as a control select signal, namely DCR read data bus enable signal 530, to multiplexer 507.
Output of multiplexer 507 is DCR read data bus 531, which may be a 32-bit wide data bus.

Output from LSW data register 498 is LSW data register signal 532, which may be 32 bits wide and which may be input to a logic high input port of multiplexer 502, to MIIM write data register 541, and to interrupt request enable register 537. Host register access done signal 533 may be input to a logic low input port of multiplexer 502, and an interrupt request status write enable signal 534 may be provided as a control select input to multiplexer 502. Output of multiplexer 502 is provided to interrupt request status register 536. Registers 536, 537, and 541 may each be 32 bits wide and clocked responsive to DCR clock signal 516.

Outputs from registers 541, 536, and 537 are provided to multiplexer 509. Control select signals 538, namely, a MIIM write data read enable signal, an interrupt request status read enable signal, and an interrupt request enable read enable signal for respectively selecting an input from registers 541, 536, and 537, are provided to multiplexer 509 to provide as output read data host IF register signal 539.

Outputs from LSW data register 498 and MIIM write data register 541 are respectively provided to a logic high input port and a logic low input port of multiplexer 508. A MIIM write data select signal 540 is provided as a control select signal input to multiplexer 508 to provide DCR/host write data signal 542, which may be a 32-bit wide signal.

Accordingly, it should be appreciated that host read data or host address filter read data may be obtained from host interface 118 and converted by bridge 113 to DCR read data, namely, read data bus 531. Moreover, DCR write data may be provided to bridge 113 and converted to host write data, namely, DCR/host write data signal 542. Furthermore, DCR bridge 113 may be in a bypass mode, where DCR write data bus 514 is output or converted to DCR read data bus 531.

FIG.
4-5B is a table diagram depicting an exemplary embodiment of DCR address and bit assignments for DCR bridge 113. In this exemplary implementation, DCR bridge 113 uses four DCR registers 497 through 500 of FIG. 4-5A occupying four consecutive DCR addresses 523. Default values 521 and read or write capability 524 of DCR registers 497 through 500 are also listed. In an implementation, each of registers 497 through 500 is clocked responsive to DCR clock signal 516, and each of registers 497 through 500 has a 32-bit wide [0:31] output.

With simultaneous reference to FIGS. 4-5A and 4-5B, DCR bridge 113 is further described. With respect to bits [0:15] of ready status DCR register ("RDYstatus") 499, this register is a read-only register, though it is possible to write to this register for functional verification. With respect to bit [21] of DCR control register ("cntlReg") 500, in an exemplary implementation emac1Sel may be bit [10] of host bus 118 address bits, where a logic 0 is for EMAC0 and a logic 1 is for EMAC1.

DCR most-significant word data register ("dataRegMSW") 497 is used in address filter register reads where return data contains a threshold number of bits, such as 48 bits for example. An example usage is a read of a unicast address register or one of the four multicast addresses in CAM. Again, CAM is not limited to memory, but may be registers such as MARs. In this exemplary implementation, dataRegMSW 497 receives the most significant read data bits [47:32] of host address filter read data [47:0] 434.

In this exemplary implementation, dataRegMSW 497 is also used in reading statistics registers because the statistics registers are 64 bits wide. The most significant word of a statistics register (e.g., bits [63:32]) may be stored in dataRegMSW 497.
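How a 48-bit address filter read or a 64-bit statistics read is split across dataRegMSW 497 and dataRegLSW 498 can be sketched as follows. These helpers are illustrative assumptions, not the actual hardware datapath; only the bit boundaries come from the description above.

```python
def split_addr_filter_read(data_48):
    """Split 48-bit address filter read data [47:0] across the DCR data
    registers: bits [47:32] go to dataRegMSW, bits [31:0] to dataRegLSW."""
    msw = (data_48 >> 32) & 0xFFFF        # dataRegMSW, zero-extended to 32 bits
    lsw = data_48 & 0xFFFF_FFFF           # dataRegLSW
    return msw, lsw

def split_stats_read(data_64):
    """Split a 64-bit statistics register across the same two registers."""
    return (data_64 >> 32) & 0xFFFF_FFFF, data_64 & 0xFFFF_FFFF
```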
DataRegMSW 497 facilitates consistent software programming, namely, when PPC405 processor 103 issues a host register read command, host interface 112 deposits the read data to DCR data registers, and then PPC405 processor 103 may issue a DCR read command to dataRegMSW 497 to bring the read data into a general-purpose register (GPR) of processor 103.A DCR least significant word data register ("dataRegLSW") 498 contains the least significant word, such as for example 32 bits of read or write data. Write data goes through dataRegLSW 498, and in an exemplary implementation, dataRegLSW 498 is programmed with write data before cntlReg 500 is programmed with a write command.Processor 103 commands for host register accesses may be written to cntlReg 500. Responsive to cntlReg 500 being programmed, host interface 112 may start to take action for a host register transaction. Hence, for a host register write, the sequence of programming in an implementation may be to put write data into dataRegLSW 498 first before programming cntlReg 500.RDYstatus register 499 contains EMAC host register read or write transaction status. Processor 103 may poll RDYstatus register 499 to determine whether an EMAC host register read or write is complete before it issues another EMAC host register access command, as DCR bridge 113 in this exemplary implementation is configured not to accept another DCR command from PPC405 processor 103 until an EMAC host register read or write that is in progress completes. In the instance of MIIM host register read or write, it may take multiple HOST_CLK signal 440 cycles for the EMAC MII data input/output ("MDIO") interface to serially shift in or out the read or write data. Furthermore, the MDIO clock ("MDC") frequency may be a fraction of HOST_CLK signal 440 frequency. 
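The programming sequence just described — stage write data in dataRegLSW 498, program cntlReg 500 to start the transaction, then poll RDYstatus register 499 before issuing another command — can be sketched behaviorally as follows. The class, method names, and fixed-latency completion model are illustrative assumptions, not the actual hardware or a real driver API.

```python
class DcrBridgeModel:
    """Toy model of DCR bridge 113: a transaction completes a fixed number
    of polls after the command is written to cntlReg."""
    def __init__(self, busy_polls=3):
        self.data_reg_lsw = 0
        self.cntl_reg = 0
        self._remaining = 0
        self.busy_polls = busy_polls

    def write_data_lsw(self, value):
        self.data_reg_lsw = value            # step 1: stage write data

    def write_cntl(self, command):
        self.cntl_reg = command              # step 2: start the transaction
        self._remaining = self.busy_polls

    def read_rdy_status(self):
        if self._remaining:
            self._remaining -= 1
            return 0                         # transaction still in progress
        return 1                             # done

def host_register_write(bridge, data, command):
    """Write sequence: dataRegLSW first, then cntlReg, then poll RDYstatus."""
    bridge.write_data_lsw(data)
    bridge.write_cntl(command)
    polls = 0
    while not bridge.read_rdy_status():      # step 3: poll before next command
        polls += 1
    return polls
```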
MDC frequency may be less than approximately 2.5 MHz.

PPC405 processor 103 is configured to time out and simply execute another instruction if a DCR device does not assert a DCR acknowledge within 64 dcrClk signal 516 clock cycles. Hence, PPC405 processor 103 may assume that a DCR instruction has executed even though the instruction is still in progress or waiting. This leads to an incorrect outcome when the result of the presumed-executed instruction is used.

In addition to DCR registers 497, 498, 499 and 500, host interface 112 may use memory-mapped registers to assist in EMAC host register read or write transfers, thereby avoiding the use of additional DCR registers.

Table 5 lists an exemplary embodiment of a memory map for host interface memory-mapped registers and EMAC embedded host registers. Groups of registers, addresses for each group, and a description for each address range are listed. The memory map of host registers applies when DCR bus 114 is used as a host bus for host register access.

TABLE 5
  Group           Address      Description
  EMAC0           0x000-0x044  statistics registers
                  0x045-0x1FF  reserved
                  0x200-0x37F  EMAC core host registers
                  0x380-0x390  address filter registers
  host interface  0x3A0-0x3FC  host interface memory mapped registers
  EMAC1           0x400-0x444  statistics registers
                  0x445-0x5FF  reserved
                  0x600-0x77F  EMAC core host registers
                  0x780-0x790  address filter registers

EMAC 110, EMAC 111 and host interface 112 are listed as the groups having memory-mapped or embedded registers. EMACs 110 and 111 include addresses for memory-mapped statistics registers, embedded EMAC core host registers and memory-mapped address filter registers. Host interface 112 includes memory-mapped registers.
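An address decode following the Table 5 memory map may be sketched as below; the range boundaries are taken directly from the table, while the function name and return convention are assumptions for illustration.

```python
def decode_host_address(addr):
    """Return (group, region) for an address in the Table 5 memory map,
    or None if the address falls outside every listed range."""
    regions = [
        (0x000, 0x044, "EMAC0", "statistics registers"),
        (0x045, 0x1FF, "EMAC0", "reserved"),
        (0x200, 0x37F, "EMAC0", "EMAC core host registers"),
        (0x380, 0x390, "EMAC0", "address filter registers"),
        (0x3A0, 0x3FC, "host interface", "host interface memory mapped registers"),
        (0x400, 0x444, "EMAC1", "statistics registers"),
        (0x445, 0x5FF, "EMAC1", "reserved"),
        (0x600, 0x77F, "EMAC1", "EMAC core host registers"),
        (0x780, 0x790, "EMAC1", "address filter registers"),
    ]
    for lo, hi, group, region in regions:
        if lo <= addr <= hi:
            return group, region
    return None
```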
Notably, responsive to host bus 118 being used for host register access, memory-mapped host interface registers are not used because DCR bridge 113 is not used.

In this exemplary implementation, interrupt request status register 536, interrupt request enable register 537, and MIIM write data register 541 are all configured for 32-bit widths and are all clocked responsive to DCR clock signal 516. MIIM control register ("MIIMcntl") is a virtual register, which is configured to provide a decoded MIIM output address to the MDIO interface. Table 6 lists an exemplary embodiment of memory address assignments for host interface memory-mapped registers.

TABLE 6
  Memory Address  Host Interface Register Names  Description
  0x3A0           IRstatus                       Interrupt request status register
  0x3A4           IRenable                       Interrupt request enable
  0x3A8           -                              Reserved
  0x3AC           -                              Reserved
  0x3B0           MIIMwrData                     Holds MIIM write data
  0x3B4           MIIMcntl                       Address decode to output MIIM address to MDIO
  0x3BC-0x3FC     -                              Reserved

FIG. 4-5C is a table diagram listing an exemplary embodiment of definitions for memory-mapped registers. MIIMcntl register is not listed in FIG. 4-5C because it is not physically implemented; only its address is decoded to determine initiation of an MDIO register access. Each of registers 536, 537 and 541 has a read and write function. Bit assignments 505 and default values are listed in FIG. 4-5C. Host interface registers, such as IRstatus register 536 and IRenable register 537, are implemented so that a user may alternatively choose to use an interrupt as a means to inform processor 103 that a read or write 504 to an EMAC host register has completed.

When any bit 505 of IRstatus register 536 is set, DCR host completed interrupt request and DCR host done interrupt ("dcrhostdoneir") signal 407 (shown in FIG.
4-2) is asserted to raise an interrupt to processor 103, such as when an EMAC register access has completed. This allows processor 103 to process instructions, other than EMAC host read or write instructions, following the interrupt without having to spend time polling RDYstatus register 499 to find out when an EMAC host register read or write completes. This may be useful in a read or write to MIIM registers because MDC frequency is conventionally low compared to the system clock frequency of processor 103, and conventionally approximately a hundred processor instructions may be executed in the time that it takes a MIIM register read or write to complete.

MIIM write data ("MIIMwrData") register 541 is used to hold MIIM write data temporarily before it is output from EMAC core 123 for a MIIM register write. MIIMwrData register 541 allows DCR dataRegLSW 522 to be reused to reduce the number of DCR registers used and to facilitate software programming consistency.

Table 7 lists an exemplary embodiment of a memory map for address filter registers.

TABLE 7
  Memory   Address Filter     Description in
  Address  Register Name      Verilog Notation                   Default Value  Read/Write
  0x380    UnicastAddrW0      UnicastAddress[31:0]               0x0000_0000    R/W
  0x384    UnicastAddrW1      {16'h0000, UnicastAddress[47:32]}  0x0000_0000    R/W
  0x388    AddrTableConfigW0  CAMdata[31:0]                      0x0000_0000    R/W
  0x38C    AddrTableConfigW1  {8'h00, CAMrnw, 5'h00,             0x0000_0000    R/W
                              CAMaddress[1:0], CAMdata[47:32]}
  0x390    General Config     {Promiscuous Mode bit,             0x0000_0000    R/W
                              31'h0000_0000}

In an implementation, an address filter block contains a four-entry CAM/MAR for multicast address matching.
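The AddrTableConfigW1 word of Table 7, whose Verilog notation is {8'h00, CAMrnw, 5'h00, CAMaddress[1:0], CAMdata[47:32]}, can be packed as follows. The helper name is an assumption; the field positions follow directly from that concatenation (CAMrnw at bit [23], CAMaddress at bits [17:16], CAMdata[47:32] at bits [15:0]).

```python
def pack_addr_table_config_w1(cam_rnw, cam_address, cam_data_48):
    """Pack the AddrTableConfigW1 register word per Table 7:
    {8'h00, CAMrnw, 5'h00, CAMaddress[1:0], CAMdata[47:32]}."""
    assert 0 <= cam_address <= 3          # four-entry CAM
    return ((cam_rnw & 1) << 23) | ((cam_address & 3) << 16) \
         | ((cam_data_48 >> 32) & 0xFFFF)
```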
As described below in additional detail, host interface 112 does not directly read or write to the CAM or MARs. Instead, the CAM/MAR data, CAM/MAR address and read/write bit are written to address filter registers, namely, the read configuration address table and the write configuration address table, to read or write CAM/MAR entries.

In DCR bridge 113, DCR acknowledge generator ("dcrAckGen") block 551 generates DCR acknowledge ("dcrAck") 510 for a DCR access to host interface 112. DCR acknowledge 510 is generated by dcrAckGen 551 responsive to input signals 1506, namely, signal set (12) of Table 4.

FIG. 4-6 is a state diagram depicting an exemplary embodiment of a state machine 551S of dcrAckGen 551. State machine 551S is reset responsive to reset signal 474, which places state machine 551S in idle state 546. State machine 551S transitions from idle state 546 to write acknowledge state 545 responsive to DCR write enable acknowledge signal ("dcrWrEn_ack") of signals 1506 being asserted. Thereafter, state machine 551S transitions from write acknowledge state 545 back to idle state 546 at completion of an acknowledgment of a write to DCR registers.

Responsive to DCR read enable acknowledgment ("dcrRdEn_ack") signal of signals 1506 being asserted, state machine 551S transitions from idle state 546 to read acknowledge state zero 547. From read acknowledge state zero 547, state machine 551S transitions to read acknowledge state one 548 responsive to a next clock cycle of DCR clock signal 516. From read acknowledge state one 548, state machine 551S transitions to idle state 546 responsive to a next clock cycle of DCR clock signal 516.

State machine 551S stays in idle state 546 if neither DCR write enable acknowledgment signal nor DCR read enable acknowledgment signal is asserted. Output of state machine 551S, namely, dcrAck signal 510, is a logic 0 while in idle state 546 or read acknowledge state zero 547.
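The dcrAckGen state machine 551S of FIG. 4-6 can be sketched behaviorally as follows; this is a Python model for illustration, not the actual RTL, and the state and function names are assumptions.

```python
def ack_next_state(state, dcr_wr_en_ack=0, dcr_rd_en_ack=0):
    """One DCR-clock transition of state machine 551S: a write is
    acknowledged in one state, a read takes two (RDACK0 then RDACK1)."""
    if state == "IDLE":
        if dcr_wr_en_ack:
            return "WRACK"
        if dcr_rd_en_ack:
            return "RDACK0"
        return "IDLE"
    if state == "RDACK0":
        return "RDACK1"
    return "IDLE"  # WRACK and RDACK1 both return to idle

def dcr_ack(state):
    """dcrAck signal 510 is a logic 1 only in WRACK and RDACK1."""
    return 1 if state in ("WRACK", "RDACK1") else 0
```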
In read acknowledge state one 548 or write acknowledge state 545, dcrAck signal 510 is a logic 1.

FIG. 4-7 is a state diagram depicting an exemplary embodiment of a state machine 552S of DCR read bypass multiplexer enable ("dcrRdBypassMuxEn") generator 552. Input signals 1505 to dcrRdBypassMuxEn generator 552 are listed as signal set (15) in Table 4.

State machine 552S is reset responsive to reset signal 474, which puts state machine 552S in idle state 549. In idle state 549, state machine 552S outputs a logic zero for DCR read output enable signal 485. State machine 552S stays in idle state 549 while DCR read enable acknowledgment signal is not asserted. However, when state machine 552S receives an asserted DCR read enable acknowledgment signal, state machine 552S transitions to enable state 550. In enable state 550, state machine 552S outputs a logic one for DCR read output enable signal 485. State machine 552S stays in enable state 550 if DCR read enable inverted ("dcrRdEn_neg") signal is not asserted. If, however, DCR read enable inverted signal is asserted, state machine 552S transitions from enable state 550 to idle state 549.

Referring again to FIG. 4-5A, a control signal, namely, DCR read output enable ("dcrRdOutEn") signal 485, is generated by dcrRdBypassMuxEn generator 552 responsive to signals 1505, namely, signal set (15) of Table 4, for reading from DCR registers. When dcrRdOutEn signal 485 is not asserted, input to DCR bus 114 is bypassed to output of DCR bus 114 in accordance with a DCR specification for a PPC405. Control generator ("cntlGen") block 588 generates control signals 1507, namely, signal set (14) of Table 4, for reading and writing to DCR registers and host interface memory-mapped registers responsive to input signals 1504, namely, signal set (13) of Table 4.

FIG.
4-8 is a block diagram depicting exemplary embodiments of logic blocks of control generator block 588 for generating control signals for reads or writes from DCR bridge 113 to host bus 160 or 161. FIG. 4-9 is a block diagram depicting exemplary embodiments of logic blocks of control generator block 588 for generating control signals for reading or writing data from or to host bus 160 or 161 into DCR bridge 113. Simultaneous reference is made to FIGS. 4-5A, 4-8 and 4-9. Notably, due to differences in read or write timing, such as from or to a configuration register, MIIM register, statistics register, address filter configuration register or address filter CAM, separate controllers implemented with state machines may be used as indicated in this exemplary embodiment.

Control generator host interface logic block 421 includes a DCR address decoder and logic for qualifying read and write control signals. FIGS. 4-33A and 4-33B are a code listing depicting an exemplary embodiment of logic block 421, with logic equations in Verilog RTL. Logic block 421 provides output signals 1513, namely, signal set (4) of Table 1, in response to input signals 1512, namely, signal set (3) of Table 1. FIG. 4-34 is a code listing depicting an exemplary embodiment of main bus control ("busCntlMain") block 553, with logic equations in Verilog RTL. Main bus control block 553 provides output signals 1511, namely, signals from signal set (2) of Table 1, in response to input signals 1510, namely, signals from signal set (1) of Table 1.

Exemplary embodiments of state machines for configuration read/write bus controller ("configRWbusCntl") 554, MIIM read/write bus controller ("MIIMrwBusCntl") 555, statistics read bus controller ("StatsRbusCntl") 556, address filter read/write bus controller ("AFrwBusCntl") 557, and address filter content addressable memory read/write bus controller ("AFcamRWbusCntl") 558 are illustratively shown in FIGS.
4-10, 4-11, 4-12, 4-13, and 4-14, respectively.

Inputs to each of controllers 554 through 558 include host clock signal 440 and reset signal 474. Other inputs to controller 554 are configuration read ("configRd") signal 559 and configuration write ("configWr") signal 560. Outputs from controller 554 are opcode configuration ("Opcode_cfg[1:0]") signals 564, host request configuration ("Req_cfg") signal 565, MIIM select configuration ("MIIMsel_cfg") signal 566, and DCR address enable configuration ("dcrAddrEn_cfg") signal 567.

Other inputs to controller 555 include MIIM read ("MIIMrd") signal 576 and MIIM write ("MIIMwr") signal 577. Outputs from controller 555 include MIIM opcode ("Opcode_miim[1:0]") signals 580, host request MIIM ("Req_miim") signal 581, MIIM select MII ("MIIMsel_mii") signal 582, MIIM write data select ("MIIMwrDataSel") signal 583, and DCR address enable MIIM ("dcrAddrEn_miim") signal 589.

Another input to controller 556 is statistics read ("StatsRd") signal 561. Outputs from controller 556 include statistics opcode ("Opcode_Stats[1:0]") signals 568, request statistics ("Req_Stats") signal 569, MIIM select statistics ("MIIMsel_Stats") signal 570, and DCR address enable statistics ("dcrAddrEn_Stats") signal 571.

Other inputs to controller 557 include address filter read ("AddrFilRd") signal 578 and address filter write ("AddrFilWr") signal 579. Outputs from controller 557 include DCR address filter read ("dcr_AddrFilRd") signal 584, DCR address filter read select ("dcr_AddrFilRdSel") signal 585, DCR address filter write ("dcr_AddrFilWr") signal 586, and DCR address enable address filter ("dcr_AddrEn_AF") signal 587.

Other inputs to controller 558 include CAM read ("camRd") signal 562 and CAM write ("camWr") signal 563.
Outputs from controller 558 include DCR address filter CAM read ("dcr_AFcamRd") signal 572, DCR address filter CAM read select ("dcr_AFcamRdSel") signal 573, DCR address filter CAM write ("dcr_AFcamWr") signal 574, and DCR address enable address filter CAM ("dcr_AddrEn_AFcam") signal 575. Again, MAR may be substituted for CAM in these signal descriptions.

Read data received controller ("rdDrecvCntl") 591 has a state machine that starts the reading process for each read type. Configuration read/write controller ("configRWcntl") 592, statistics read controller ("StatsRcntl") 594, MIIM read/write controller ("MIIMrwCntl") 593, address filter read/write controller ("AddrFilRWcntl") 595, and address filter CAM read/write controller ("AFcamRWcntl") 596 each include a state machine for its type of read or write. Exemplary embodiments of state machines for controllers 592, 594, 593, 595, and 596 are illustratively shown in FIGS. 4-16, 4-17, 4-18, 4-19, and 4-20, respectively.

Outputs 1515, namely, signals from signal set (2) of Table 1, from read data received controller 591 are provided responsive to inputs 1514, namely, signals from signal set (1) of Table 1, to read data received controller 591.

Inputs to each of controllers 592 through 596 include host clock ("hostClk") signal 440 and reset ("Reset") signal 474. Other inputs to controller 592 include configuration read receive ("configRdR") signal 597 and configuration write receive ("configWrR") signal 598.
Outputs from controller 592 include data register most significant word write enable configuration ("dRegMSWwe_cfg") signal 602, data register least significant word write enable configuration ("dRegLSWwe_cfg") signal 603, configuration read done ("configRdDone") signal 604, and configuration write done ("configWrDone") signal 605.Other inputs to controller 593 include MIIM read receive ("MIIMrdR") signal 613, MIIM write receive ("MIIMwrR") signal 590 and MIIM ready ("MIIM_rdy") signal 614. Outputs from controller 593 include data register most significant word write enable MIIM ("dRegMSWwe_miim") signal 617, data register least significant word write enable MIIM ("dRegLSWwe_miim") signal 618, MIIM read done ("MIIMrdDone") signal 619, and MIIM write done ("MIIMwrDone") signal 620. Another input to controller 594 is statistics read receive ("StatsRdR") signal 599. Outputs from controller 594 are data register most significant word write enable statistics ("dRegMSWwe_Stats") signal 606, data register least significant word write enable statistics ("dRegLSWwe_Stats") signal 607, and statistics read done ("StatsRdDone") signal 608.Other inputs to controller 595 include address filter read receive ("AddrFilRdR") signal 615 and address filter write receive ("AddrFilWrR") signal 616. Outputs from controller 595 include data register most significant word write enable address filter ("dRegMSWwe_AF") signal 621, data register least significant word write enable address filter ("dRegLSWwe_AF") signal 622, address filter read done ("AddrFilRdDone") signal 623, and address filter write done ("AddrFilWrDone") signal 624.Other inputs to controller 596 include CAM read receive ("camRdR") signal 600 and CAM write receive ("camWrR") signal 601. 
Outputs from controller 596 include data register most significant word write enable address filter CAM ("dRegMSWwe_AFcam") signal 609, data register least significant word write enable address filter CAM ("dRegLSWwe_AFcam") signal 610, address filter CAM read done ("AFcamRdDone") signal 611, and address filter CAM write done ("AFcamWrDone") signal 612.

Returning to FIG. 4-5A, sample cycle generator block 488 generates a sample cycle signal 489 in response to input signals 1503, namely, signal set (11) of Table 4. Sample cycle signal 489 notifies DCR bridge 113 as to when to sample read data from the host clock signal 440 domain.

FIG. 4-10 is a state diagram depicting an exemplary embodiment of a state machine 554S of configuration read/write bus controller 554. State machine 554S is reset responsive to reset signal 474, which places state machine 554S in idle state 630. State machine 554S transitions from idle state 630 to configuration read ("ConfigRead") state 631 responsive to configuration read signal 559 being asserted. Thereafter, state machine 554S transitions from configuration read state 631 back to idle state 630 at completion of a read of EMAC configuration registers.

Responsive to configuration write signal 560 being asserted, state machine 554S transitions from idle state 630 to configuration write ("ConfigWrite") state 632. From configuration write state 632, state machine 554S transitions back to idle state 630 at completion of a write to DCR registers.

State machine 554S stays in idle state 630 if neither configuration read signal 559 nor configuration write signal 560 is asserted. All outputs of state machine 554S, such as opcode configuration signals 564, host request configuration signal 565, MIIM select configuration signal 566, and DCR address enable configuration signal 567, are logic 0 in idle state 630.
Outputs opcode configuration signals 564 and DCR address enable configuration signal 567 are logic {1,0} and logic 1, respectively, in configuration read state 631, and are respectively logic {0,0} and logic 1 in configuration write state 632. Host request configuration signal 565 and MIIM select configuration signal 566 outputs are both logic 0 in configuration read state 631 and in configuration write state 632.

Opcode configuration signal 564 is a 2-bit wide signal; host request configuration signal 565, MIIM select configuration signal 566, and DCR address enable configuration signal 567 are 1-bit wide signals. States, namely, idle state 630, configuration read state 631, and configuration write state 632 of state machine 554S, for signal outputs 564 through 567 of configuration read/write bus controller 554 are set forth below in Table 8. Table 8 lists state machine 554S status of output signals for each of the states in FIG. 4-10.

TABLE 8
             Opcode_cfg[1:0]   Req_cfg   MIIMsel_cfg   dcrAddrEn_cfg
IDLE         00                0         0             0
ConfigRead   10                0         0             1
ConfigWrite  00                0         0             1

FIG. 4-11 is a state diagram depicting an exemplary embodiment of a state machine 555S of MIIM read/write bus controller 555. State machine 555S is reset responsive to reset signal 474, which places state machine 555S in idle state 633. State machine 555S transitions from idle state 633 to MIIM read 1 state 634 responsive to MIIM read signal 576 being asserted. On a next clock cycle of host clock signal 440, state machine 555S transitions from MIIM read 1 state 634 to MIIM read 2 state 635 for a completion of a read to MIIM registers. State machine 555S transitions from MIIM read 2 state 635 back to idle state 633 responsive to MIIM ready signal 614 being asserted, namely, indicating completion of this read.
State machine 555S stays in MIIM read 2 state 635 if MIIM ready signal 614 is not asserted.

Responsive to MIIM write signal 577 being asserted, state machine 555S transitions from idle state 633 to MIIM write 1 state 636. From MIIM write 1 state 636, state machine 555S transitions to MIIM write 2 state 637 on a next clock cycle of host clock signal 440. State machine 555S transitions from MIIM write 2 state 637 back to idle state 633 responsive to MIIM ready signal 614 being asserted, namely, indicating completion of this write to MIIM registers. State machine 555S stays in MIIM write 2 state 637 if MIIM ready signal 614 is not asserted.

State machine 555S stays in idle state 633 if neither MIIM read signal 576 nor MIIM write signal 577 is asserted. Outputs of state machine 555S, such as MIIM opcode signals 580, host request MIIM signal 581, MIIM select MII signal 582, MIIM write data select signal 583, and DCR address enable MIIM signal 589, are logic 0 in idle state 633. In MIIM read 1 state 634, outputs MIIM opcode signals 580 are logic {1,0}; host request MIIM signal 581, MIIM select MII signal 582, and DCR address enable MIIM signal 589 are logic 1; and output MIIM write data select signal 583 is a logic 0.

In MIIM read 2 state 635, output MIIM select MII signal 582 is a logic 1, and outputs MIIM opcode signals 580, host request MIIM signal 581, MIIM write data select signal 583, and DCR address enable MIIM signal 589 are all logic 0. In MIIM write 1 state 636, outputs 580 are {0,1} and outputs 581 through 583 and 589 are logic 1.
In MIIM write 2 state 637, output MIIM select MII signal 582 is a logic 1, and outputs MIIM opcode signals 580, host request MIIM signal 581, MIIM write data select signal 583, and DCR address enable MIIM signal 589 are all logic 0.

Output MIIM opcode signal 580 is a 2-bit wide signal; outputs host request MIIM signal 581, MIIM select MII signal 582, MIIM write data select signal 583, and DCR address enable MIIM signal 589 are all 1-bit wide signals. States, namely, idle state 633, MIIM read 1 state 634, MIIM read 2 state 635, MIIM write 1 state 636, and MIIM write 2 state 637, of state machine 555S for signal outputs 580 through 583 and 589 of MIIM read/write bus controller 555 are set forth below in Table 9. Table 9 lists state machine 555S status of output signals for each of the states in FIG. 4-11.

TABLE 9
            Opcode_miim[1:0]   Req_miim   MIIMsel_miim   MIIMwrDataSel   dcrAddrEn_miim
IDLE        00                 0          0              0               0
MIIMread1   10                 1          1              0               1
MIIMread2   00                 0          1              0               0
MIIMwrite1  01                 1          1              1               1
MIIMwrite2  00                 0          1              0               0

FIG. 4-12 is a state diagram depicting an exemplary embodiment of a state machine 556S of statistics read bus controller 556. State machine 556S is reset responsive to reset signal 474, which places state machine 556S in idle state 638. State machine 556S transitions from idle state 638 to statistics read ("StatsRead") state 639 responsive to statistics read signal 561 being asserted. State machine 556S then transitions from statistics read state 639 back to idle state 638 at a completion of a read from external FPGA-based statistics registers.

State machine 556S stays in idle state 638 if statistics read signal 561 is not asserted.
Outputs of state machine 556S, such as statistics opcode signals 568, request statistics signal 569, MIIM select statistics signal 570, and DCR address enable statistics signal 571, are logic 0 in idle state 638. Outputs request statistics signal 569 and DCR address enable statistics signal 571 are both logic 1, and outputs statistics opcode signals 568 and MIIM select statistics signal 570 are both logic 0, in statistics read state 639.

Output statistics opcode signal 568 is a 2-bit wide signal; outputs request statistics signal 569, MIIM select statistics signal 570, and DCR address enable statistics signal 571 are 1-bit wide signals. States, namely, idle state 638 and statistics read state 639 of state machine 556S, for signal outputs 568 through 571 of statistics read bus controller 556 are set forth below in Table 10. Table 10 lists state machine 556S status of output signals for each of the states in FIG. 4-12.

TABLE 10
           Opcode_Stats[1:0]   Req_Stats   MIIMsel_Stats   dcrAddrEn_Stats
IDLE       00                  0           0               0
StatsRead  00                  1           0               1

FIG. 4-13 is a state diagram depicting an exemplary embodiment of a state machine 557S of address filter read/write bus controller 557. State machine 557S is reset responsive to reset signal 474, which places state machine 557S in idle state 640. State machine 557S transitions from idle state 640 to address filter read 1 ("AFread1") state 641 responsive to address filter read signal 578 being asserted. On a next clock cycle of host clock signal 440, state machine 557S transitions from address filter read 1 state 641 to address filter read 2 ("AFread2") state 642 for completion of a read to address filter registers.
State machine 557S transitions from address filter read 2 state 642 back to idle state 640 at a completion of this read to address filter registers.

Responsive to address filter write signal 579 being asserted, state machine 557S transitions from idle state 640 to address filter write ("AFwrite") state 643. State machine 557S then transitions from address filter write state 643 back to idle state 640 at a completion of a write to address filter registers.

State machine 557S stays in idle state 640 if neither address filter read signal 578 nor address filter write signal 579 is asserted. Outputs of state machine 557S, such as DCR address filter read signal 584, DCR address filter read select signal 585, DCR address filter write signal 586, and DCR address enable address filter signal 587, are logic 0 in idle state 640.

In address filter read 1 state 641, DCR address filter read signal 584 and DCR address enable address filter signal 587 are both logic 1, and DCR address filter read select signal 585 and DCR address filter write signal 586 are both logic 0. In address filter read 2 state 642, output DCR address filter read select signal 585 is a logic 1, and DCR address filter read signal 584, DCR address filter write signal 586, and DCR address enable address filter signal 587 are all logic 0. In address filter write state 643, DCR address filter read signal 584 and DCR address filter read select signal 585 are both logic 0, and DCR address filter write signal 586 and DCR address enable address filter signal 587 are both logic 1.

Outputs of state machine 557S, namely, DCR address filter read signal 584, DCR address filter read select signal 585, DCR address filter write signal 586, and DCR address enable address filter signal 587, are 1-bit wide signals.
States, namely, idle state 640, address filter read 1 state 641, address filter read 2 state 642, and address filter write state 643, of state machine 557S for signal outputs 584 through 587 of address filter read/write bus controller 557 are set forth below in Table 11. Table 11 lists state machine 557S status of output signals for each of the states in FIG. 4-13.

TABLE 11
         dcr_AddrFilRd   dcr_AddrFilRdSel   dcr_AddrFilWr   dcrAddrEn_AF
IDLE     0               0                  0               0
AFread1  1               0                  0               1
AFread2  0               1                  0               0
AFwrite  0               0                  1               1

FIG. 4-14 is a state diagram depicting an exemplary embodiment of a state machine 558S of address filter content addressable memory read/write bus controller 558. State machine 558S is reset responsive to reset signal 474, which places state machine 558S in idle state 644. State machine 558S transitions from idle state 644 to address filter content addressable memory read 1 ("AFcamRd1") state 645 responsive to CAM read signal 562 being asserted. On a next clock cycle of host clock signal 440, state machine 558S transitions from AFcamRd1 state 645 to AFcam read 2 ("AFcamRd2") state 646 for completion of a read to address filter CAM registers. State machine 558S transitions from AFcamRd2 state 646 back to idle state 644 at a completion of this read to address filter CAM registers.

Responsive to CAM write signal 563 being asserted, state machine 558S transitions from idle state 644 to AFcam write ("AFcamWr") state 647. State machine 558S then transitions from AFcamWr state 647 back to idle state 644 at a completion of a write to address filter CAM registers.

State machine 558S stays in idle state 644 if neither CAM read signal 562 nor CAM write signal 563 is asserted.
Outputs of state machine 558S, such as DCR address filter CAM read signal 572, DCR address filter CAM read select signal 573, DCR address filter CAM write signal 574, and DCR address enable address filter CAM signal 575, are logic 0 in idle state 644.

In AFcamRd1 state 645, DCR address filter CAM read signal 572 and DCR address enable address filter CAM signal 575 are both logic 1, and DCR address filter CAM read select signal 573 and DCR address filter CAM write signal 574 are both logic 0. In AFcamRd2 state 646, DCR address filter CAM read select signal 573 is a logic 1, and DCR address filter CAM read signal 572, DCR address filter CAM write signal 574, and DCR address enable address filter CAM signal 575 are all logic 0. In AFcamWr state 647, DCR address filter CAM read signal 572 and DCR address filter CAM read select signal 573 are both logic 0, and DCR address filter CAM write signal 574 and DCR address enable address filter CAM signal 575 are both logic 1.

Outputs of state machine 558S, namely, DCR address filter CAM read signal 572, DCR address filter CAM read select signal 573, DCR address filter CAM write signal 574, and DCR address enable address filter CAM signal 575, are 1-bit wide signals. States, namely, idle state 644, AFcamRd1 state 645, AFcamRd2 state 646, and AFcamWr state 647, of state machine 558S for signal outputs 572 through 575 of address filter content addressable memory read/write bus controller 558 are set forth below in Table 12. Table 12 lists state machine 558S status of output signals for each of the states in FIG. 4-14.

TABLE 12
          dcr_AFcamRd   dcr_AFcamRdSel   dcr_AFcamWr   dcrAddrEn_AFcam
IDLE      0             0                0             0
AFcamRd1  1             0                0             1
AFcamRd2  0             1                0             0
AFcamWr   0             0                1             1

FIG. 4-15 is a state diagram depicting an exemplary embodiment of a state machine 591S of read data received controller 591.
State machine 591S is reset responsive to reset signal 474, which places state machine 591S in idle state 648.

State machine 591S stays in idle state 648 if neither a host register read ("hostRegRd") signal, a host register write ("hostRegWr") signal, a content addressable memory read ("camRd") signal, nor a content addressable memory write ("camWr") signal of signals 1514 is asserted.

State machine 591S transitions from idle state 648 to start read data received ("startRdDrecv") state 649 responsive to a host register read signal of signals 1514 being asserted. State machine 591S then transitions from start read data received state 649 back to idle state 648 at a completion of initialization to receive read data from registers.

State machine 591S transitions from idle state 648 to start write done ("startWrDone") state 650 responsive to either a host register write signal or a content addressable memory write signal of signals 1514 being asserted. State machine 591S then transitions from start write done state 650 back to idle state 648 at a completion of a write to host registers.

State machine 591S transitions from idle state 648 to start content addressable memory read data received ("startCAMrdDrecv") state 651 responsive to a content addressable memory read signal of signals 1514 being asserted. State machine 591S then transitions from start content addressable memory read data received state 651 back to idle state 648 at a completion of initialization to receive read data from CAM registers.

Table 13 lists state machine 591S status of output signals for each of the states in FIG.
4-15.

TABLE 13
State/Output   Idle 648   startRdDrecv 649   startWrDone 650   startCAMRdDrecv 651
MIIMrdR        0          MIIMrdReg          0                 0
StatsRdR       0          StatsRdReg         0                 0
configRdR      0          configRdReg        0                 0
AddrFilRdR     0          AddrFilRdReg       0                 0
MIIMwrR        0          0                  MIIMwrReg         0
configWrR      0          0                  configWrReg       0
AddrFilWrR     0          0                  AddrFilWrReg      0
camRdR         0          0                  0                 camRdReg
camWrR         0          0                  camWrReg          0

In idle state 648, all outputs of state machine 591S are logic 0. In start read data received state 649, the MIIMwrR, configWrR, AddrFilWrR, camRdR, and camWrR output signals of state machine 591S are all logic 0, and the MIIMrdR, StatsRdR, configRdR, and AddrFilRdR output signals of state machine 591S take the status or content of their associated registers, namely, MIIMrdReg, StatsRdReg, configRdReg, and AddrFilRdReg, respectively.

In start write done state 650, the MIIMwrR, configWrR, AddrFilWrR, and camWrR output signals of state machine 591S take the status or content of their associated registers, namely, MIIMwrReg, configWrReg, AddrFilWrReg, and camWrReg, respectively, and the MIIMrdR, StatsRdR, configRdR, AddrFilRdR, and camRdR output signals of state machine 591S are all logic 0. In start CAM read data received state 651, all outputs of state machine 591S are logic 0, except for the camRdR output, which takes the status or content of its respective register, namely, camRdReg.

FIG. 4-16 is a state diagram depicting an exemplary embodiment of a state machine 592S of configuration read/write controller 592. State machine 592S is reset responsive to reset signal 474, which places state machine 592S in idle state 652. State machine 592S transitions from idle state 652 to configuration read 1 ("ConfigRd1") state 653 responsive to configuration read receive signal 597 being asserted.
State machine 592S then transitions from configuration read 1 state 653 back to idle state 652 at a completion of this read to host configuration registers.

Responsive to configuration write receive signal 598 being asserted, state machine 592S transitions from idle state 652 to configuration write 1 ("ConfigWr1") state 654. From configuration write 1 state 654, state machine 592S transitions back to idle state 652 at a completion of this write to host configuration registers.

State machine 592S stays in idle state 652 if neither configuration read receive signal 597 nor configuration write receive signal 598 is asserted. Outputs of state machine 592S, such as data register most significant word write enable configuration signal 602, data register least significant word write enable configuration signal 603, configuration read done signal 604, and configuration write done signal 605, are logic 0 in idle state 652.

Data register most significant word write enable configuration signal 602 is a logic 0 in both configuration read 1 state 653 and configuration write 1 state 654. In configuration read 1 state 653, data register least significant word write enable configuration signal 603 and configuration read done signal 604 are both logic 1, and output configuration write done signal 605 is a logic 0. In configuration write 1 state 654, data register most significant word write enable configuration signal 602, data register least significant word write enable configuration signal 603, and configuration read done signal 604 are all logic 0, and output configuration write done signal 605 is a logic 1.

Output signals of state machine 592S, namely, outputs 602 through 605, are 1-bit wide signals. States, namely, idle state 652, configuration read 1 state 653, and configuration write 1 state 654, are set forth below in Table 14. Table 14 lists state machine 592S status of output signals for each of the states in FIG.
4-16.

TABLE 14
           dRegMSWwe_cfg   dRegLSWwe_cfg   configRdDone   configWrDone
IDLE       0               0               0              0
ConfigRd1  0               1               1              0
ConfigWr1  0               0               0              1

FIG. 4-17 is a state diagram depicting an exemplary embodiment of a state machine 594S of statistics read controller 594. State machine 594S is reset responsive to reset signal 474, which places state machine 594S in idle state 655. State machine 594S stays in idle state 655 if statistics read receive signal 599 is not asserted.

State machine 594S transitions from idle state 655 to statistics read 1 ("Sr1") state 656 responsive to statistics read receive signal 599 being asserted. For each clock cycle of host clock signal 440 after state machine 594S is in statistics read 1 state 656, state machine 594S transitions to a next state. For example, from statistics read 1 state 656, state machine 594S transitions to statistics read 2 ("Sr2") state 657. From statistics read 2 state 657, state machine 594S transitions to statistics read 3 ("Sr3") state 658. From statistics read 3 state 658, state machine 594S transitions to statistics read 4 ("Sr4") state 659. From statistics read 4 state 659, state machine 594S transitions to statistics read 5 ("Sr5") state 660. From statistics read 5 state 660, state machine 594S transitions to statistics read 6 ("Sr6") state 661. From statistics read 6 state 661, state machine 594S transitions to statistics read 7 ("Sr7") state 662 for completion of a read to statistics registers. State machine 594S transitions from statistics read 7 state 662 back to idle state 655 at a completion of this read to statistics registers.

All outputs of state machine 594S, such as data register most significant word write enable statistics signal 606, data register least significant word write enable statistics signal 607, and statistics read done signal 608, are logic 0 in idle state 655.
Outputs 606 through 608 of state machine 594S are all logic 0 in statistics read 1 state 656 through statistics read 5 state 660. Data register most significant word write enable statistics signal 606 and statistics read done signal 608 outputs are logic 0, and data register least significant word write enable statistics signal 607 output is a logic 1, in statistics read 6 state 661. Data register most significant word write enable statistics signal 606 and statistics read done signal 608 outputs are logic 1, and data register least significant word write enable statistics signal 607 output is a logic 0, in statistics read 7 state 662.

Outputs 606 through 608 of state machine 594S are all 1-bit wide signals. States, namely, states 656 through 662, of state machine 594S for signal outputs 606 through 608 of statistics read controller 594 are set forth below in Table 15. Table 15 lists state machine 594S status of output signals for each of the states in FIG. 4-17.

TABLE 15
      dRegMSWwe_Stats   dRegLSWwe_Stats   StatsRdDone
IDLE  0                 0                 0
Sr1   0                 0                 0
Sr2   0                 0                 0
Sr3   0                 0                 0
Sr4   0                 0                 0
Sr5   0                 0                 0
Sr6   0                 1                 0
Sr7   1                 0                 1

FIG. 4-18 is a state diagram depicting an exemplary embodiment of a state machine 593S of MIIM read/write controller 593. State machine 593S is reset responsive to reset signal 474, which places state machine 593S in idle state 663. State machine 593S stays in idle state 663 if neither MIIM read receive signal 613 nor MIIM write receive signal 590 is asserted.

State machine 593S transitions from idle state 663 to MIIM read 1 ("MIIMr1") state 664 responsive to MIIM read receive signal 613 being asserted. State machine 593S stays in MIIM read 1 state 664 if MIIM ready signal 614 is not asserted.
State machine 593S transitions from MIIM read 1 state 664 to MIIM read 2 ("MIIMr2") state 665 responsive to MIIM ready signal 614 being asserted. State machine 593S transitions from MIIM read 2 state 665 back to idle state 663 for a completion of this read receive from MIIM registers.

State machine 593S transitions from idle state 663 to MIIM write 1 ("MIIMw1") state 667 responsive to MIIM write receive signal 590 being asserted. State machine 593S stays in MIIM write 1 state 667 if MIIM ready signal 614 is not asserted. State machine 593S transitions from MIIM write 1 state 667 to MIIM write 2 ("MIIMw2") state 668 responsive to MIIM ready signal 614 being asserted. State machine 593S transitions from MIIM write 2 state 668 back to idle state 663 for a completion of this write to MIIM registers.

Outputs of state machine 593S, such as data register most significant word write enable MIIM signal 617, data register least significant word write enable MIIM signal 618, MIIM read done signal 619, and MIIM write done signal 620, are logic 0 in idle state 663, in MIIMr1 state 664, and in MIIMw1 state 667.

Data register least significant word write enable MIIM signal 618 and MIIM read done signal 619 outputs are logic 1, and data register most significant word write enable MIIM signal 617 and MIIM write done signal 620 outputs are logic 0, in MIIM read 2 state 665. Data register most significant word write enable MIIM signal 617, data register least significant word write enable MIIM signal 618, and MIIM read done signal 619 outputs are logic 0, and MIIM write done signal 620 output is logic 1, in MIIM write 2 state 668.

Outputs 617 through 620 of state machine 593S are 1-bit wide signals. States of state machine 593S for signal outputs 617 through 620 of MIIM read/write controller 593 are set forth below in Table 15. Table 15 lists state machine 593S status of output signals for each of the states in FIG.
4-18.

TABLE 15
        dRegMSWwe_miim   dRegLSWwe_miim   MIIMrdDone   MIIMwrDone
IDLE    0                0                0            0
MIIMr1  0                0                0            0
MIIMr2  0                1                1            0
MIIMw1  0                0                0            0
MIIMw2  0                0                0            1

FIG. 4-19 is a state diagram depicting an exemplary embodiment of a state machine 595S of address filter read/write controller 595. State machine 595S is reset responsive to reset signal 474, which places state machine 595S in idle state 669. State machine 595S transitions from idle state 669 to address filter read 1 ("AddrFilRd1") state 670 responsive to address filter read receive signal 615 being asserted. State machine 595S then transitions from address filter read 1 state 670 back to idle state 669 at a completion of a read to address filter registers.

Responsive to address filter write receive signal 616 being asserted, state machine 595S transitions from idle state 669 to address filter write 1 ("AddrFilWr1") state 671. From address filter write 1 state 671, state machine 595S transitions back to idle state 669 at a completion of a write to address filter registers.

State machine 595S stays in idle state 669 if neither address filter read receive signal 615 nor address filter write receive signal 616 is asserted. Outputs of state machine 595S, such as data register most significant word write enable address filter signal 621, data register least significant word write enable address filter signal 622, address filter read done signal 623, and address filter write done signal 624, are logic 0 in idle state 669.

Data register most significant word write enable address filter signal 621 and address filter write done signal 624 outputs are both logic 0, and data register least significant word write enable address filter signal 622 and address filter read done signal 623 outputs are both logic 1, in AddrFilRd1 state 670.
Data register most significant word write enable address filter signal 621, data register least significant word write enable address filter signal 622, and address filter read done signal 623 outputs are all logic 0, and address filter write done signal 624 output is logic 1, in AddrFilWr1 state 671.

Outputs of state machine 595S, namely, outputs 621 through 624, are 1-bit wide signals. States, namely, idle state 669, address filter read 1 state 670, and address filter write 1 state 671, are set forth below in Table 16. Table 16 lists state machine 595S status of output signals for each of the states in FIG. 4-19.

TABLE 16
            dRegMSWwe_AF   dRegLSWwe_AF   AddrFilRdDone   AddrFilWrDone
IDLE        0              0              0               0
AddrFilRd1  0              1              1               0
AddrFilWr1  0              0              0               1

FIG. 4-20 is a state diagram depicting an exemplary embodiment of a state machine 596S of address filter CAM read/write controller 596. State machine 596S is reset responsive to reset signal 474, which places state machine 596S in idle state 672. State machine 596S transitions from idle state 672 to address filter content addressable memory read 1 ("AFcamRd1") state 673 responsive to CAM read receive signal 600 being asserted. State machine 596S transitions from AFcamRd1 state 673 back to idle state 672 at a completion of this read to CAM registers.

State machine 596S transitions from idle state 672 to address filter content addressable memory write 1 ("AFcamWr1") state 674 responsive to CAM write receive signal 601 being asserted. On the next clock cycle of host clock signal 440, state machine 596S transitions from AFcamWr1 state 674 to address filter content addressable memory write 2 ("AFcamWr2") state 675 for a write to CAM registers.
State machine 596S transitions from AFcamWr2 state 675 back to idle state 672 at a completion of this write to CAM registers.

State machine 596S stays in idle state 672 if neither CAM read receive signal 600 nor CAM write receive signal 601 is asserted. Outputs of state machine 596S, such as data register most significant word write enable address filter CAM signal 609, data register least significant word write enable address filter CAM signal 610, address filter CAM read done signal 611, and address filter CAM write done signal 612, are logic 0 in idle state 672 and in AFcamWr1 state 674.

Data register most significant word write enable address filter CAM signal 609, data register least significant word write enable address filter CAM signal 610, and address filter CAM read done signal 611 outputs are all logic 1, and address filter CAM write done signal 612 output is a logic 0, in AFcamRd1 state 673. Data register most significant word write enable address filter CAM signal 609, data register least significant word write enable address filter CAM signal 610, and address filter CAM read done signal 611 outputs are all logic 0, and output address filter CAM write done signal 612 is a logic 1, in AFcamWr2 state 675.

Outputs of state machine 596S, namely, outputs 609 through 612, are 1-bit wide signals. States, namely, states 673 through 675, of state machine 596S for signal outputs 609 through 612 of address filter CAM read/write controller 596 are set forth below in Table 17. Table 17 lists state machine 596S status of output signals for each of the states in FIG. 4-20.

TABLE 17
          dRegMSWwe_AFcam   dRegLSWwe_AFcam   AFcamRdDone   AFcamWrDone
IDLE      0                 0                 0             0
AFcamRd1  1                 1                 1             0
AFcamWr1  0                 0                 0             0
AFcamWr2  0                 0                 0             1

FIGS. 4-21A through 4-21C are timing diagrams for respective exemplary instances of generation of a sample cycle pulse 489. In FIG.
4-21A, host clock signal 440 has a period which is three times longer than the period of DCR clock signal 516. A sample cycle pulse 544A is generated responsive to each falling edge of host clock signal 440, or generally at one-half the period of host clock signal 440.

In FIG. 4-21B, host clock signal 440 has a period which is four times longer than the period of DCR clock signal 516. A sample cycle pulse 544B is generated responsive to the first falling edge of DCR clock signal 516 immediately following a falling edge of host clock signal 440.

In FIG. 4-21C, host clock signal 440 has a period which is five times longer than the period of DCR clock signal 516. A sample cycle pulse 544C is generated responsive to the first falling edge of DCR clock signal 516 immediately after a first rising edge of DCR clock signal 516 that immediately follows a falling edge of host clock signal 440.

Accordingly, it should be appreciated that by bridging DCR registers with a finite state machine, which may be broken up into several finite state machines, a few DCR registers may be mapped to a significantly larger address space, such as that of control registers. Moreover, the DCR bridge emulates a set of signals, namely, platform independent host interface signals, for access to such control registers. In other words, DCR bridge 113 maps a small register address space, such as four DCR registers, to a significantly larger register address space, such as control and status registers of EMAC 110, and maps the significantly larger address space back onto the small register address space.

Register Access

EMAC core host registers, address filter registers, statistics registers, and MIIM registers may be accessed.
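The mapping just described — a handful of DCR registers fronting a much larger host register space — can be illustrated with a small behavioral model. The Python sketch below is a hypothetical stand-in for DCR bridge 113, not the hardware: the register names (cntlReg, dataRegMSW, dataRegLSW, RDYstatus) follow the description, but the read/write bit encoding, the host addresses, and the instantaneous completion are assumptions.

```python
# Behavioral sketch (hypothetical) of a small DCR register set fronting a
# larger host register space. A command written to the control register
# selects a host register; data passes through dataRegMSW/dataRegLSW.

class DcrBridgeModel:
    READ = 0   # assumed encoding of the read/write bit in cntlReg
    WRITE = 1

    def __init__(self):
        self.host_regs = {}   # stands in for the larger host register space
        self.dataRegMSW = 0   # most significant word data register
        self.dataRegLSW = 0   # least significant word data register
        self.rdy = False      # models the RDYstatus register

    def dcr_write_cntlReg(self, rw_bit, host_addr):
        """A DCR write to the control register triggers the host access."""
        self.rdy = False
        if rw_bit == self.READ:
            # Read: split the (up to 64-bit) host register value across
            # the two 32-bit data registers.
            value = self.host_regs.get(host_addr, 0)
            self.dataRegMSW = (value >> 32) & 0xFFFFFFFF
            self.dataRegLSW = value & 0xFFFFFFFF
        else:
            # Write: the data previously deposited in dataRegLSW travels
            # out to the addressed host register.
            self.host_regs[host_addr] = self.dataRegLSW
        # In hardware, completion takes several host clock cycles; the
        # model completes immediately and raises the ready flag.
        self.rdy = True
```

A read of a wide statistics-style register then lands in the two data registers, while a write travels the opposite direction through dataRegLSW — the same four registers serving the whole host address space.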
The description that follows is for EMAC0 host registers; however, the same description applies to EMAC1 host registers.

As mentioned above, an access to an EMAC host register may take several dcrClk signal 516 cycles, because a system clock of processor 103 may run at a higher frequency than host clock signal 440. As a result, polling or an interrupt may be used to ensure that an EMAC register access is completed before processor 103 issues another host interface access.

Some of the EMAC0 core host registers are Receive Configuration Word 0, Receive Configuration Word 1, Transmit Configuration, Flow Control Configuration, and Management Configuration. From an exemplary flow for one of these registers, flows for all read/write accesses to EMAC0 core host registers will be understood. Additionally, continuing the example of PPC405 for processor 103, it will be assumed that access is to Device Control Registers ("DCRs").

FIG. 4-22 is a flow diagram depicting an exemplary embodiment of a receive configuration word register read access flow 700. At 701, a read command and read address for an EMAC0 core host register are set up. This may be for a DCR control register, such as control register 500 of FIG. 4-5A. A read bit may be registered, as well as the read address, for this set up. At 702, a DCR write to the DCR control register is done to instruct host interface 112 to execute the read command. In other words, the write to the DCR control register is done to read the EMAC0 core host register.

At 703, polling or waiting for an interrupt is done by processor 103 for confirmation of completion of the read. At 704, host interface 112 deposits the data read in a data register. In an implementation, the least significant word is read first, and thus read data is deposited into DCR dataRegLSW 498. A DCR read may be done from dataRegLSW 498 to retrieve the read data deposited.

FIG.
4-23 is a flow diagram depicting an exemplary embodiment of a receive configuration word register write access flow 710. At 711, write data to the EMAC core host register is set up. The write data may be set up for dataRegLSW 498.

At 712, the write command address for the EMAC core host register is set up. The write bit and write command address may be registered in a Register File register for this set up. At 713, write data is put in a data register, such as DCR dataRegLSW 498. At 714, a write command is issued from processor 103 to host interface 112 to do a DCR write to control register 500 to instruct host interface 112 to write the data in dataRegLSW 498 into an EMAC core host register. Note that the write data may be written into dataRegLSW 498 before writing a command to control register 500. At 715, processor 103 polls or waits for an interrupt for another host interface instruction.

Statistics registers may be implemented in the FPGA fabric 101. However, host interface 112 has logic, as described above, used to read the statistics registers. The statistics registers may be read only.

FIG. 4-24 is a flow diagram depicting an exemplary embodiment of a multicast frames received okay register read flow ("statistics register read flow") 720. To read a statistics register, at 721 a read command and a statistics register address are set up. This set up may be done for control register 500 by registering a read bit and a statistics register address.

At 722, a DCR write to cntlReg 500 is done to instruct host interface 112 to start a statistics register read. This may be done by issuing a read command from processor 103 to host interface 112.

At 723, polling or waiting for an interrupt may be done by processor 103 to determine if a read has been completed in response to the read command issued.
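The polling-based host register read flow of FIG. 4-22 can be sketched in C as follows. The register layout, READY bit, and address-field width are illustrative assumptions, not the actual hardware definitions:

```c
#include <stdint.h>

/* Sketch of the EMAC0 host register read flow (FIG. 4-22), assuming a
 * memory-mapped view of the four DCR registers. */
typedef struct {
    volatile uint32_t dataRegMSW;
    volatile uint32_t dataRegLSW;
    volatile uint32_t rdyStatus;
    volatile uint32_t cntlReg;
} dcr_regs_t;

#define CNTL_READ_BIT (1u << 15)   /* assumed read command bit      */
#define RDY_DONE      (1u << 0)    /* assumed access-complete flag  */

uint32_t emac_host_reg_read(dcr_regs_t *dcr, uint32_t host_addr)
{
    /* 701/702: set up the read command and address, and write them to
     * cntlReg, instructing the host interface to execute the read. */
    dcr->cntlReg = CNTL_READ_BIT | (host_addr & 0x3FFu);

    /* 703: poll (an interrupt could be used instead) until the access,
     * which may take several dcrClk cycles, completes. */
    while ((dcr->rdyStatus & RDY_DONE) == 0)
        ;

    /* 704: the host interface has deposited the read data in dataRegLSW. */
    return dcr->dataRegLSW;
}
```

The write flow of FIG. 4-23 is the mirror image: deposit the data in dataRegLSW first, then write the command to cntlReg and poll before issuing another access.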
Host interface 112 gets the data read from the statistics register addressed and puts such data in data registers, such as in dataRegMSW 497 and dataRegLSW 498, in executing a read command. At 724, DCR bridge 113 reads from dataRegMSW 497 to obtain the most significant word of the read data, and at 725, DCR bridge 113 reads from dataRegLSW 498 to obtain the least significant word of the read data.

FIG. 4-25 is a flow diagram depicting an exemplary embodiment of a MIIM register read flow 730. At 731, a physical layer address and a register address are set up and written into a data register, such as into DCR dataRegLSW 498. At 732, a host register read enable bit is set, such as to a logic 0, and a MIIM control address is set up.

At 733, a DCR write to control register 500 is done, thereby initiating a read of the MIIM register addressed. At 734, processor 103 polls the DCR RDYstatus register 499 or waits for an interrupt to determine whether the read has completed. When the read completes, host interface 112 deposits the read data in dataRegLSW 498. Processor 103 may then do a DCR read on dataRegLSW 498 to get the MIIM register read data.

FIG. 4-26 is a flow diagram depicting an exemplary embodiment of a MIIM register write flow 740. At 741, MIIM write data is set up.

At 743, a Register File register is set up for writing thereto. A host write enable bit is set, such as to logic 1, and an address for MIIM write data register 541 is set.

At 744, MIIM data is written to a DCR data register, such as dataRegLSW 498. At 745, the data from dataRegLSW 498 is transferred to host memory-mapped MIIMwrData register 541 by doing a DCR write to cntlReg 500 with the host write enable bit and MIIMwrData register address set.

At 746, a physical layer device address and a register address are set up. At 747, the physical layer device address and the register address are written into DCR dataRegLSW 498.
At 748, a write to DCR cntlReg 500 is done with the host write enable bit set and the MIIM control register address set to start a host interface write to MIIMwrData register 541. At 749, processor 103 polls the DCR RDYstatus register 499 or waits for an interrupt for another host interface instruction.

Reads and writes to address filter registers for a unicast address register and general configuration registers follow the same steps as reads and writes to EMAC core host registers, and thus such reads and writes are not repeated. However, reads and writes to the address filter CAM are slightly different.

FIG. 4-27 is a flow diagram depicting an exemplary embodiment of a host interface CAM entry read flow 760. At 761, a CAM read/write bit is set, such as to a logic 1; a CAM address is set; and a CAM data field is cleared, such as to "0". At 762, the CAM read/write and address bits are registered by a DCR write to dataRegLSW 498. At 763, a host register write enable bit is set, and a read configuration address table address is set.

At 764, a DCR write to cntlReg 500 is done. This write initiates a read of a host interface CAM entry, namely, a read of a register associated with the read configuration address table address. Responsive to the read initiated, host interface 112 deposits upper bits of the CAM entry read data, such as for CAM entry 1, in DCR dataRegMSW 497 and deposits lower bits of the CAM entry read data in DCR dataRegLSW 498. At 765, processor 103 polls the DCR RDYstatus register 499 or waits for an interrupt for completion of the read of the host interface CAM entry.

To obtain the deposited read data, at 766 processor 103 issues a DCR read of dataRegMSW 497 and dataRegLSW 498 to get the CAM entry data. This may be done in two steps, where for example upper bits from dataRegMSW 497 are obtained first, and then lower bits from dataRegLSW 498 are obtained. Again, though the term CAM is used, it should be appreciated that it may be replaced with MAR throughout herein.

FIG.
4-28 is a flow diagram depicting an exemplary embodiment of a host interface CAM entry write flow 770. At 771, CAM data is set up and obtained. At 772, a DCR write to dataRegLSW 498 with the CAM data is done. At 773, a write to a write configuration address table is set up.

At 774, a host register write enable bit is set and an address to the write configuration address table is set up. At 775, a DCR write to cntlReg 500 is done with the EMAC host register write enable bit set, such as to a logic 1, and an address field set to the write configuration address table address. This commands host interface 112 to write CAM data from DCR dataRegLSW 498 into an address filter register, namely, the write configuration address table register associated with the write configuration address table address.

At 776, a CAM read/write bit is cleared, such as set to logic 0, and a CAM address field is set, such as to logic 1 for CAM entry 1. At 777, the CAM data remaining is set up. At 778, a DCR write is done to place the CAM write enable, CAM address, and CAM data remaining into dataRegLSW 498.

At 779, a write to the read configuration address table is set up. At 780, the host register write enable bit is set and a read configuration address table address is set up. At 781, a DCR write to cntlReg 500 is done with the EMAC host register write enable bit set, such as to logic 1, and an address field set to the read configuration address table address. This DCR write to cntlReg 500 commands host interface 112 to put the write data in dataRegLSW 498 into the CAM entry, such as CAM entry 1, which in this example may be a register associated with the read configuration address table address.

FIG. 4-29 is a block diagram depicting an exemplary embodiment of host interface CAM entry read flow 760. Host interface 112 includes DCR registers 751. Notably, in this exemplary implementation four DCR registers 497 through 500, as previously described, are used.
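The two-phase CAM entry write flow of FIG. 4-28 reduces, from the processor's point of view, to a sequence of DCR writes. In the sketch below, the DCR addresses, table addresses, and bit positions are assumptions chosen for illustration, and the DCR write primitive is a stand-in that merely records the sequence:

```c
#include <stdint.h>

enum { DATA_LSW = 1, CNTL = 3 };               /* assumed DCR addresses      */
#define WR_EN        (1u << 15)                /* host reg write enable bit  */
#define WR_CFG_TABLE 0x388u                    /* write config address table */
#define RD_CFG_TABLE 0x384u                    /* read config address table  */
#define CAM_WR       0u                        /* CAM read/write bit cleared */
#define CAM_ADDR(e)  ((uint32_t)(e) << 16)     /* CAM entry select field     */

/* Minimal stand-in for the platform DCR write primitive: records the
 * (address, value) sequence so the flow can be inspected. */
static uint32_t dcr_log[8][2];
static int dcr_log_n;
static void dcr_write(int dcr_addr, uint32_t value)
{
    dcr_log[dcr_log_n][0] = (uint32_t)dcr_addr;
    dcr_log[dcr_log_n][1] = value;
    dcr_log_n++;
}

void cam_entry_write(int entry, uint32_t mac_lo, uint32_t mac_hi)
{
    /* 771-775: lower CAM data goes through the write config address table. */
    dcr_write(DATA_LSW, mac_lo);
    dcr_write(CNTL, WR_EN | WR_CFG_TABLE);

    /* 776-781: remaining data, the cleared CAM read/write bit, and the CAM
     * entry address go through the read config address table, which commits
     * the full entry into the CAM. */
    dcr_write(DATA_LSW, CAM_WR | CAM_ADDR(entry) | (mac_hi & 0xFFFFu));
    dcr_write(CNTL, WR_EN | RD_CFG_TABLE);
}
```

The point of the sketch is the ordering: the data register must be loaded before each command write to cntlReg, and the two table addresses steer the two halves of the entry.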
A user's software program causes information to be loaded into dataRegLSW 498 as generally indicated by arrow 726. A user's program causes information to be loaded into cntlReg 500 as generally indicated by arrow 727. Once information is written into DCR registers 498 and 500, as previously described, hardware writes content of dataRegLSW 498 into register 718 of host interface registers 705. As previously mentioned, register 718 is a register associated with an address table configuration entry. Responsive to a CAM read/write bit and CAM address bits written into register 718, CAM 706 deposits CAM data 716 into dataRegMSW 497 and deposits CAM data 717 into dataRegLSW 498, as respectively generally indicated with arrows 728 and 729.

FIG. 4-30 is a block diagram depicting an exemplary embodiment of host interface CAM entry write flow 770. Information is written into dataRegLSW 498 by a user's software program, as generally indicated with arrow 737. Information is written into cntlReg 500 by a user's software program, as generally indicated with arrow 738. Contents of dataRegLSW 498 are written into register 750 of host interface registers 705, as generally indicated with arrow 739. Register 750 is associated with a configuration address table, as previously described.

Information is again written into dataRegLSW 498 by a user's software program, as generally indicated with arrow 737, and information is again written into cntlReg 500 by a user's software program, as generally indicated with arrow 738. Hardware writes dataRegLSW 498 content into register 718 of host interface registers 705, as generally indicated with arrow 767. Register 718 is associated with a configuration address table, as previously described.
Responsive to a CAM read/write bit and CAM address bits written into register 718, hardware writes content from register 718 into location 716 of CAM 706, as generally indicated with arrow 768, and writes content from register 750 into location 717 of CAM 706, as generally indicated with arrow 769.

In both FIGS. 4-29 and 4-30, some numerical examples have been provided for purposes of clarity by way of example. For example, CAM 706 is illustratively shown as a 48 bit wide memory that is four entries deep; address locations for five host interface registers 705 are illustratively shown; and four DCR registers 751 having two-bit addresses are illustratively shown. However, it should be understood that other bit values, addresses, and numbers of registers/memory sizes may be used. Furthermore, though a CAM 706 is described, it should be understood that CAM functionality may be provided with circuits other than memory, such as registers and comparators.

FIG. 4-31 is a high-level block diagram depicting an exemplary embodiment of host interface 112 coupled to a physical layer device 119D. Physical layer device 119D includes MIIM registers 754. Data and control information are provided to data least significant word register 498 and control register 500 of DCR registers 751, as generally respectively indicated by arrows 755 and 756. This data and control information may be provided by a user software program. This may be done one time for initialization of host interface 112.

After data is provided to data register least significant word 498 as generally indicated by arrow 755 and control information is provided to control register 500 as generally indicated by arrow 756, data may be transferred from data least significant word register 498 to management configuration register 759 of host interface registers 705, as generally indicated by arrow 757.
Host interface 112 writes data from data register least significant word 498 into management configuration register 759.

Data and control information are again provided to data least significant word register 498 and control register 500 of DCR registers 751, as generally respectively indicated by arrows 755 and 756. Again, this may be done by a user software program, though not for initialization this time.

After control information is passed to control register 500 in this second instance, host interface 112 captures MIIM read data in data register least significant word 498 responsive to physical layer device 119D asserting host MIIM ready signal 408. This capturing of data is generally indicated by arrow 758, where data from a register of MIIM registers 754 is transferred to data least significant word register 498. Notably, a physical layer device may be located internal or external to a programmable logic device in which an EMAC, such as EMAC 110, is located.

FIG. 4-32 is a high-level block diagram depicting an exemplary embodiment of interfacing between host interface 112 and physical layer device 119D for a write to an EMAC 110 or 111. Data and control information are provided to data least significant word register 498 and control register 500 as respectively indicated by arrows 792 and 793.
This may be done once for initialization of host interface 112, and may be done by a user software program.

After control information is written to control register 500, as generally indicated by arrow 793, host interface 112 writes data from data least significant word register 498 to management configuration register 759 as generally indicated by arrow 791.

On a second iteration of writing data to data register least significant word 498 and control information to control register 500 by user software, host interface 112 initiates a write of data in data least significant word register 498 to MIIM write data register 752, as generally indicated with arrow 795.

On a third iteration of writing data to data register least significant word 498 and control information to control register 500 by user software, host interface 112 initiates a write of data in MIIM write data register 752 to a register of MIIM registers 754, as generally indicated with arrow 794. This write may be provided to physical layer device 119D via an MDIO interface.

Client Interface

Returning to FIG. 1, by providing an embedded EMAC, clock frequency may be increased to higher than approximately 125 MHz with an implementation employing standard cells in processor block 102. In an implementation, EMAC 110 may be clocked at approximately twice the clock frequency ("overclocking") of supporting logic in FPGA 100, so that EMAC 110 is capable of approximately doubling its output data rate to enhance data throughput.

In an implementation, supporting logic in FPGA 100 is run at approximately 125 MHz because that is the clock frequency FPGA fabric 101 supports, and thus existing supporting logic does not have to be extensively redesigned for migration to an FPGA 100 having one or more embedded EMACs.
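The width-versus-clock tradeoff described above can be checked with simple arithmetic: halving the client clock while doubling the datapath width leaves the raw throughput unchanged. The frequencies used below are the approximate figures from the text:

```c
#include <stdint.h>

/* Raw datapath throughput in bits per second for a given clock and width. */
uint64_t throughput_bps(uint64_t clock_hz, unsigned width_bits)
{
    return clock_hz * width_bits;
}
```

So an EMAC datapath overclocked to roughly 250 MHz at 8 bits matches a fabric-side interface running at roughly 125 MHz at 16 bits, which is why the client interface width is doubled when the fabric runs at half the EMAC clock.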
For purposes of clarity by way of example and not limitation, it will be assumed that supporting logic in FPGA 100 is operated at half the clock frequency of embedded EMAC 110, other than at the boundaries between EMAC 110 and FPGA fabric 101. Accordingly, the width of client interfaces 127 and 128 is doubled, for example from an 8 bit width to a 16 bit width, to compensate for the slower clock frequency of FPGA fabric 101 and thus maintain the enhanced data throughput of EMAC 110.

To maintain backward compatibility, datapath widths of client interfaces 127 and 128 may be selectable, such as for example to be 8 bits, when both EMAC 110 and a user design instantiated in FPGA fabric 101 ("the client") are running at the same clock frequency, such as for example approximately 125 MHz. Selection of datapath width of client interfaces 127 and 128 may be independently controlled by input pins to processor block 102 for each receive and transmit direction, because transmit and receive may run independently from one another and thus may operate at different frequencies.

Mode select signals may be provided to FPGA fabric 101 via input tie-off pins of processor block 102. Input tie-off pins could be tied to a particular value when an FPGA is configured, or could be varied if they are controlled by FPGA logic.

Client Interface-Transmit-Side

FIG. 5A is a high-level block diagram depicting an exemplary embodiment of a transmit-side ("Tx") client interface 810. Client interface 810 includes EMAC0 110. EMAC0 110 includes EMAC core 123 and Tx datapath 127D, namely Tx client interface 127 of FIG. 1A. EMAC core 123 includes Tx engine 820. A Tx clock signal 821 and a divided version thereof, namely Tx divided clock signal 822, are provided from EMAC core 123 to transmit client interface 127 (e.g., Tx DP 127D).
Other signals provided from EMAC core 123 to transmit client interface 127 include a select mode signal 825, which may be for selecting a 16 bit mode, for example.

It should be understood that configurable logic of FPGA 100 may operate, even maximally, at a frequency which is substantially less than that achievable by an EMAC 110 embedded in FPGA 100. For example, configurable logic of FPGA 100 may have a maximum frequency of operation of approximately 125 MHz, and embedded EMAC 110 may have a maximum frequency of operation of approximately twice or more that of FPGA 100. Thus, by having a wider data width for configurable logic, the frequency of operation of embedded EMAC 110 may be greater than that of configurable logic. However, embedded EMAC 110 may also be used for communication to networks, backplanes, or other media outside of FPGA 100. Embedded EMAC 110 may be capable of data rates greater than 1.25 Gigabits per second, which is greater than the current Ethernet standard, namely, approximately 1.0 Gigabits per second. Accordingly, for example, FPGA 100 may be coupled to another medium, such as a backplane, to operate at non-standard data rates, such as in excess of 1.25 Gigabits per second. Thus, the transmit and receive client interfaces described herein should not be considered as only being coupled for communication with configurable logic of FPGA 100, but may be used for communication external to FPGA 100, including communication at non-standard data rates.

From Tx engine 820, transmit collision signal 829 and transmit retransmit signal 830 are provided to Tx client interface 127 (e.g., Tx DP 127D). Transmit client interface 127 is configured to provide transmit collision signal 837 and transmit retransmit signal 838 responsive to transmit collision signal 829 and transmit retransmit signal 830, respectively.
A transmit collision signal indicates a collision on a medium, and a retransmit signal indicates a frame is to be retransmitted because transmission of the frame was aborted due to the collision.

Transmit client interface 127 is configured to provide transmit acknowledge output signal 832 responsive to transmit acknowledge signals 823 and 824. A transmit acknowledge signal is a handshake signal, which for example may be asserted after an EMAC accepts a first byte of data of a transmitted frame. Data to be transmitted, such as from FPGA fabric 101, may be provided to transmit client interface 127 via transmit data input signal 833, transmit data valid most significant word input signal 834, and transmit data valid input signal 835. Additionally, transmit underrun signal 836 may be asserted by a client to force an EMAC to insert an error code to corrupt the then current frame and then fall back to an idle transmission state. For example, an aborted transfer can occur if a first-in, first-out buffer stack ("FIFO") coupled to a client interface empties before a frame is completely transmitted.

Transmit client interface 127 is configured to provide transmit underrun signal 828 to transmit engine 820 responsive to transmit underrun signal 836. Transmit client interface 127 is configured to provide transmit data valid signal 827 to transmit engine 820 responsive to transmit data valid signals 834 and 835. Transmit client interface 127 is configured to provide transmit data signal 826 to transmit engine 820 responsive to transmit data input signal 833.
Tx client interface 127 generates TX_DATA_VALID signal 827 to indicate to EMAC core 123 that data input to Tx engine 820 is valid.

In an implementation, transmit data input signal 833 may actually be 16 signals, namely a 16 bit wide input, where transmit client interface 127 is configured to relay such data to transmit engine 820 via transmit data signal 826 at a fraction of such input data width; for example, transmit data signal 826 may be an 8 bit wide signal. Accordingly, it should be appreciated that FPGA fabric 101 may operate at a slower frequency than EMAC 110, thereby allowing EMAC 110 to have a higher data throughput though processing data in a width that is less than the input data width.

A transmit inter-frame gap ("IFG") delay signal 816 may be provided from FPGA fabric 101 to transmit engine 820 for adjustment of delay between frames. Such a signal may be a plurality of signals for a particular bit width, such as for example an 8 bit width.

Transmit engine 820 may be configured for a Media Independent Interface ("MII"), and in particular a Gigabit MII ("GMII"). For purposes of clarity by way of example and not limitation, it will be assumed that Tx engine 820 is configured for a GMII. A Gigabit transmit clock signal 811 is provided to transmit engine 820 from a user. Responsive to clock signal 811, transmit engine 820 is configured to provide GMII transmit clock signal 812. Transmit engine 820 is further configured to provide GMII transmit enable signal 813, GMII transmit data signal 814, and GMII transmit error signal 815.
GMII transmit data signal 814 is responsive to transmit data signal 826, and in an implementation may have the same bit width, such as 8 signals for an 8 bit wide output.

Tx client interface 810 may convert txFirstByte signal 839 and txUnderrun signal 836 from the TX_DIV2_CLK 822 clock domain to the TX_CLK 821 clock domain, and convert TX_COLLISION signal 829 and TX_RETRANSMIT signal 830 from the TX_CLK 821 clock domain to the TX_DIV2_CLK 822 clock domain, when the EMAC is operating in 16-bit mode. Tx first byte signal 839 may be asserted when a first byte of a frame is transmitted via Tx client interface 127.

FIG. 5C is a schematic diagram depicting an exemplary embodiment of a transmit-side client interface 127 (e.g., Tx DP 127D). Transmit divide by two clock signal 822 is provided as a clock signal input to registers 882, 883, and 893. Transmit clock signal 821 is provided as a clock input to registers 884, 885, 894, and 895. An inverted version of transmit clock signal 821, namely transmit inverted clock signal 861, is provided as a clock input to register 881.

Data input to register 881 is transmit divide by two clock signal 822. Data input to register 882 is transmit data valid most significant word input signal 834. Data input to register 883 is transmit data valid input signal 835. Data input to registers 893 and 895 is transmit data input signal 833, which for example may be a 16 bit wide input. Output of register 881 is transmit divide by two clock registered signal 864, which is provided as an input to AND gate 886 and AND gate 891, and is provided as a control select input to multiplexer 889. Output of register 882 is data valid most significant word registered signal 862, which is provided as an input to datapath multiplexer controller 896 and to a logic high input of multiplexer 889.
Output of register 883 is data valid registered signal 863, which is provided as an input to datapath multiplexer controller 896 and to AND gate 891.

Output of AND gate 891 is a control select signal input to multiplexer 892. Output of register 893 is data registered signal 865, which may be a 16 bit wide output. Data registered signal 865 is provided to a logic high input of multiplexer 892 and to multiplexer 897. Data registered signal 865 may be divided for inputting to multiplexer 897, in an implementation, directed to specific designated binary input ports. Output of data register 895 is provided to a logic low level input of multiplexer 898 and is transmit data 8 bit mode registered signal 869. Output of multiplexer 892 is provided to register 894.

Output of register 894 is fed back to a logic low state input of multiplexer 892 and is data registered two signal 868, which may be a 16 bit wide data signal in an implementation. Data registered two signal 868 is provided as a data input to multiplexer 897. In an implementation, data registered two signal 868 may be divided in half, with one half going to one binary logic designation of multiplexer 897 and the other half going to a different binary designation input of multiplexer 897. Outputs from datapath multiplexer controller 896 are select signals S0, S1, S2, and S3, respectively referenced as signals 872 through 875, which are provided as inputs to multiplexer 897. Thus, for example, signal S3 may be used for selecting data bits [15:8] input to port 0001 of multiplexer 897. Continuing this exemplary implementation, each select signal 872 through 875 would be for selecting a different portion of either data registered signal 865 or 868 for output from multiplexer 897.

Output from multiplexer 897 is transmit data 16 bit mode to 8 bit mode signal 876.
Accordingly, in an implementation, transmit data 16 bit mode to 8 bit mode signal 876 would be an 8 bit wide signal, which may be provided to a logic high input of multiplexer 898. Select 16 bit mode signal 825 may be provided as a control select signal to multiplexer 898. The other input to multiplexer 898, namely transmit data 8 bit mode registered signal 869, is provided to a logic low input of multiplexer 898.

Either of inputs 876 or 869 may be selected responsive to select mode signal 825 to provide transmit data signal 826, which in an implementation would be an 8 bit wide data output. Another input to AND gate 886 is transmit acknowledge signal 824, and output from AND gate 886 is acknowledge at divide by two clock high signal 866, which is provided as a control select input to multiplexer 887. Another input to multiplexer 887 may be tied to a logic high state. Output of multiplexer 887 is provided as an input, such as a logic zero input, to multiplexer 888. Another input to multiplexer 888 may be tied to a logic low state. Output of multiplexer 888 may be provided as a data input to register 884. Output of multiplexer 889 is provided to data register 885, the output of which is data valid most significant word registered two signal 867, which is provided as an input to datapath multiplexer controller 896 and fed back as a data input to multiplexer 889.

Data valid inputs TX_DV_MSW_IN 834 and TX_DV_IN 835 and data input TX_DATA_IN[15:0] 833 from FPGA fabric 101 may be registered immediately responsive to TX_DIV2_CLK 822 in Tx client interface 810 to facilitate timing in a design instantiated in FPGA fabric 101.
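The 16-bit to 8-bit conversion steered by the datapath multiplexer controller can be sketched as one-hot byte-lane selection over the two 16-bit data registers. The exact mapping of each select signal to a byte lane is an assumption for illustration only:

```c
#include <stdint.h>

/* Sketch of the byte selection performed by multiplexer 897: one-hot
 * selects S0..S3 pick either half of the two 16-bit registered data
 * signals for the 8-bit TX_DATA output. */
typedef struct {
    uint16_t dataReg;    /* data registered signal 865     */
    uint16_t dataReg2;   /* data registered two signal 868 */
} tx_dp_t;

uint8_t tx_data_mux(const tx_dp_t *dp, unsigned sel /* one-hot {S3..S0} */)
{
    switch (sel) {
    case 0x1: return (uint8_t)(dp->dataReg  & 0xFF); /* S0: 865[7:0]   */
    case 0x2: return (uint8_t)(dp->dataReg  >> 8);   /* S1: 865[15:8]  */
    case 0x4: return (uint8_t)(dp->dataReg2 & 0xFF); /* S2: 868[7:0]   */
    case 0x8: return (uint8_t)(dp->dataReg2 >> 8);   /* S3: 868[15:8]  */
    default:  return 0;  /* IDLE: all selects deasserted */
    }
}
```

Each select signal thus routes a different byte of signal 865 or 868 to the 8-bit output, matching the per-state one-hot outputs described for state machine 900 below.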
Sel16bMode signal 825 is for selecting a data width mode, for example whether a Tx client interface is operating in a 16-bit mode or an 8-bit mode.

A datapath multiplexer controller ("dpMuxCntl") 896 generates datapath control signals S0, S1, S2, and S3 (872 through 875, respectively) to appropriately convert, for example, a 16-bit wide data input to an 8-bit wide data output, such as TX_DATA[7:0] 826, to Tx engine 820 of EMAC core 123.

Tx client interface 810 may have to handle a number of instances of input data, for example four. In an instance, TX_ACK 824 is asserted while TX_DIV2_CLK 822 is at a logic high level, and transmit data is an even number of bytes. In another instance, TX_ACK 824 is asserted while TX_DIV2_CLK 822 is at a logic high level, and transmit data is an odd number of bytes. In yet another instance, TX_ACK 824 is asserted while TX_DIV2_CLK 822 is at a logic low level, and transmit data is an even number of bytes. And in still yet another instance, TX_ACK 824 is asserted while TX_DIV2_CLK 822 is at a logic low level, and transmit data is an odd number of bytes.

An acknowledge ("ACKatDiv2CkHiReg") signal 871 is asserted when TX_DIV2_CLK signal 822 is registered at a logic high level. Acknowledge signal 871 may be used to determine when TX_ACK 824 is asserted with respect to the phase of TX_DIV2_CLK 822.

FIG. 5D is a state diagram depicting an exemplary embodiment of a state machine 900 for dpMuxCntl block 896. State machine 900 is reset responsive to reset signal 874, which places state machine 900 in an idle state 907. State machine 900 stays in idle state 907 until a data valid register ("DVldReg") signal 863 is asserted.

State machine 900 transitions from idle state 907 to an odd octet transmission state A1 901 responsive to signal DVldReg 863 being asserted. State machine 900 stays in state A1 901 until Tx_Ack signal 824 is asserted.
If only one data octet is being transmitted, namely, if a data valid most significant word register ("DVldMSWreg") signal 862 is not asserted when Tx_Ack signal 824 is asserted, state machine 900 transitions from state A1 901 back to idle state 907. If two or more data octets are transmitted, DVldMSWreg signal 862 may be maintained in an asserted state when Tx_Ack signal 824 is asserted, causing state machine 900 to transition from state A1 901 to an even octet transmission state A2 902. State machine 900 transitions from state A2 902 back to idle state 907 if DVldReg signal 863 is deasserted.

If ACKatDiv2CkHiReg signal 871 is asserted while DVldReg signal 863 is being asserted, state machine 900 transitions from state A2 902 to an odd octet transmission state A3 903. Notably, Tx_Ack signal 824 is asserted while Tx_Div2_Clk signal 822 is in a logic high state for this transition. If DVldMSWreg signal 862 is deasserted at this juncture, meaning that the current transmission is done, state machine 900 transitions from state A3 903 back to idle state 907.

If DVldMSWreg signal 862 is still asserted, provided an even number of data octets is being transmitted, state machine 900 transitions from state A3 903 to an even octet transmission state A4 904. If DVldMSWreg signal 862 is then deasserted, state machine 900 transitions from state A4 904 back to odd octet transmission state A3 903.

If both DVldReg signal 863 and ACKatDiv2CkHiReg signal 871 are not asserted while state machine 900 is in state A2 902, and Tx_Ack signal 824 is asserted while Tx_Div2_Clk signal 822 is in a logic low state, state machine 900 transitions from state A2 902 to an odd octet transmission state A5 905. If DVldMSWreg signal 862 is deasserted, provided this transmission of an odd number of data octets is done, state machine 900 transitions from state A5 905 back to idle state 907.
If DVldMSWreg signal 862 is asserted for transmission of an even number of data octets, state machine 900 transitions from state A5 905 to an even octet transmission state A6 906.

If DVldReg signal 863 is asserted for transmission of an odd number of data octets, state machine 900 transitions from state A6 906 back to state A5 905. If DVldReg signal 863 is deasserted, provided this transmission of an even number of data octets is done, state machine 900 transitions from state A6 906 back to idle state 907.

All four outputs of state machine 900, namely outputs S0 through S3, are logic 0 in idle state 907. Output S0 is a logic 1 and outputs S1 through S3 are all logic 0 in both states A1 901 and A5 905. Output S1 is a logic 1 and outputs S0, S2, and S3 are all logic 0 in both states A2 902 and A6 906. Output S2 is a logic 1 and outputs S0, S1, and S3 are all logic 0 in state A3 903. Output S3 is a logic 1 and outputs S0 through S2 are all logic 0 in state A4 904.

Outputs of state machine 900, namely, outputs S0 through S3, are 1-bit wide signals. States A1 901 through A6 906 of state machine 900 for signal outputs S0 through S3 of dpMuxCntl block 896 are set forth below in Table 18, which lists state machine 900 output signal status for each of the states in FIG. 5D.

TABLE 18
         S0    S1    S2    S3
IDLE      0     0     0     0
A1        1     0     0     0
A2        0     1     0     0
A3        0     0     1     0
A4        0     0     0     1
A5        1     0     0     0
A6        0     1     0     0

FIG. 5J-1 is a schematic diagram depicting an exemplary embodiment of a transmit data valid generator 1020. Transmit data valid generator 1020 receives data valid registered signal 863 at a logic high input of multiplexer 1021 and at data valid generator 1027. Output of multiplexer 1021 is provided to register 1022, which is clocked responsive to transmit clock signal 821.
Output of register 1022 is data valid registered two signal 1034, which is provided to an input of inverter 1024 and to a logic low input of multiplexer 1021. Multiplexer 1021 is provided a registered clock signal, namely transmit divide by two clock registered signal 864, as a control select signal input. Data valid registered signal 863 is provided to an input of AND gate 1025 along with the output of inverter 1024 to provide data valid start pulse signal 1033. Data valid start pulse signal 1033 is provided as an input to data valid generator 1027. Other inputs to data valid generator 1027 are transmit acknowledge signal 824, data valid most significant word registered signal 862, and acknowledge at a logic high state of divide by two clock signal 866.

Data valid generator 1027 is clocked responsive to an inputted transmit clock signal 821. Output of data valid generator 1027 is transmit data valid signal 1032 for a 16-bit data width mode. This transmit data valid for 16-bit data width mode signal 1032 is provided to a logic high input of multiplexer 1026. Transmit data valid input signal 835 is provided to register 1023, which is clocked responsive to transmit clock signal 821. Output of register 1023 is data valid 8-bit wide mode registered signal 1031. Data valid 8-bit wide mode registered signal 1031 is provided to a logic low input of multiplexer 1026. Select 16-bit mode signal 877 is provided as a control select input to multiplexer 1026 to select between the above-mentioned inputs to provide transmit data valid signal 827 as an output.

FIG. 5J-2 is a state diagram depicting an exemplary embodiment of a state machine 1040 for data valid generator 1027. State machine 1040 is reset responsive to reset signal 874, which places state machine 1040 in an idle state 1046.
State machine 1040 stays in idle state 1046 until a data valid start pulse ("DVldStart_p") signal 1033 is asserted. If DVldStart_p signal 1033 is asserted, state machine 1040 transitions from idle state 1046 to a first data octet state B1 1041. State machine 1040 stays in state B1 1041 until TX_ACK signal 824 is asserted. If TX_ACK signal 824 is asserted while DVldMSWreg signal 862 is not asserted for only one data octet being transmitted, state machine 1040 transitions from state B1 1041 back to idle state 1046.

If ACKatDiv2CkHi signal 866 is asserted, state machine 1040 transitions from state B1 1041 to state B2 1042. State machine 1040 stays in state B2 1042 if DVldReg signal 863 and DVldMSWreg signal 862 are both asserted for continued data input for the then current transmission. If DVldReg signal 863 is deasserted for an even number of data octets being transmitted, state machine 1040 transitions from state B2 1042 back to idle state 1046. If DVldMSWreg signal 862 is deasserted for an odd number of data octets being transmitted, state machine 1040 transitions from state B2 1042 to state B3 1043. From state B3 1043, state machine 1040 transitions back to idle state 1046 at a completion of the then current transmission.

If, while in state B1 1041, ACKatDiv2CkHi signal 866 is deasserted, state machine 1040 transitions from state B1 1041 to state B4 1044. State machine 1040 stays in state B4 1044 if DVldReg signal 863 and DVldMSWreg signal 862 are both asserted for continued data input for the then current transmission. If DVldReg signal 863 and DVldMSWreg signal 862 are both deasserted for an even number of data octets being transmitted, state machine 1040 transitions from state B4 1044 back to idle state 1046. If DVldReg signal 863 is asserted while DVldMSWreg signal 862 is deasserted for an odd number of data octets being transmitted, state machine 1040 transitions from state B4 1044 to state B5 1045.
From state B5 1045, state machine 1040 transitions back to idle state 1046 at a completion of the then current transmission.

The output of state machine 1040, which is output TX_DATA_VALID signal 827, is a logic 0 in idle state 1046 and in states B3 1043, B4 1044, and B5 1045. Output TX_DATA_VALID signal 827 is a logic 1 or a logic 0, namely the content of a data valid register ("DVldReg"), in state B1 1041 and in state B2 1042.

The output of state machine 1040, namely output TX_DATA_VALID signal 827, is a 1-bit wide signal. States, namely states B1 1041 through B5 1045, of state machine 1040 for output TX_DATA_VALID signal 827 are set forth below in Table 19. Table 19 lists state machine 1040 status for TX_DATA_VALID signal 827 for each of the states in FIG. 5J-2.

TABLE 19
        TX_DATA_VALID
IDLE    0
B1      DVldReg
B2      DVldReg
B3      0
B4      0
B5      0

FIGS. 5E, 5F, 5G and 5H are respective output timing diagrams of exemplary embodiments of either even or odd transmit data byte lengths for when transmit client interface 127 is in a 16-bit mode. In FIG. 5E, TX_ACK signal 824 is asserted when TX_DIV2_CLK signal 822 is generally at a logic high level and TX_DATA[7:0] signal 826 has an even number of bytes. Notably, TX_DV_IN signal 835 and TX_DV_MSW_IN signal 834 are generally asserted, raised to a logic high level, at 930 and de-asserted, lowered to a logic low level, at 950.

In FIG. 5F, TX_ACK signal 824 is asserted when TX_DIV2_CLK signal 822 is generally at a logic high level and TX_DATA[7:0] signal 826 has an odd number of bytes. Notably, TX_DV_IN signal 835 and TX_DV_MSW_IN signal 834 are generally asserted at 930, but TX_DV_MSW_IN signal 834 is de-asserted at 970 prior to TX_DV_IN signal 835, which is de-asserted at 950.

In FIG. 5G, TX_ACK signal 824 is asserted when TX_DIV2_CLK signal 822 is generally at a logic low level and TX_DATA[7:0] signal 826 has an even number of bytes.
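The output behavior of Table 19 above can be captured in a few lines: in states B1 and B2 the TX_DATA_VALID output simply follows the data valid register, and in the remaining states it is forced low. The following is a hedged behavioral model, not the RTL.

```python
# Behavioral sketch of TX_DATA_VALID generation per Table 19.
# 'dvld_reg' models the content of the data valid register ("DVldReg"),
# which the output passes through in states B1 and B2.
def tx_data_valid(state, dvld_reg):
    """Model the 1-bit TX_DATA_VALID output for a state and DVldReg value."""
    if state in ("B1", "B2"):
        return dvld_reg          # output follows the data valid register
    if state in ("IDLE", "B3", "B4", "B5"):
        return 0                 # output forced to logic 0
    raise ValueError("unknown state: %r" % state)
```

This mirrors why states B3, B4, and B5 exist: they hold the output deasserted while the state machine drains the tail of an odd-length or even-length transmission back to idle.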
Notably, TX_DV_IN signal 835 and TX_DV_MSW_IN signal 834 are generally asserted at 930 and de-asserted at 970.

In FIG. 5H, TX_ACK signal 824 is asserted when TX_DIV2_CLK signal 822 is generally at a logic low level and TX_DATA[7:0] signal 826 has an odd number of bytes. Notably, TX_DV_IN signal 835 and TX_DV_MSW_IN signal 834 are generally asserted at 930, but TX_DV_MSW_IN signal 834 is de-asserted at 990 prior to TX_DV_IN signal 835, which is de-asserted at 970.

FIG. 5I is an output timing diagram depicting an exemplary embodiment of a bypass mode for when transmit client interface 127 is in an 8-bit mode. In a bypass mode, TX_DV_MSW_IN signal 834 is maintained de-asserted, and TX_DV_IN signal 835 is asserted generally at 930 and de-asserted generally at 970. In this example, TX_DATA[7:0] signal 826 has an even number of bytes, though an odd number of bytes may be used in bypass mode.

Client Interface-Receive Side

FIG. 5B is a high-level block diagram depicting an exemplary embodiment of a receive-side ("Rx") client interface 840. Rx client interface 840 includes EMAC0 110. EMAC0 110 includes EMAC core 123 and Rx client datapath 128D, namely Rx client interface 128 of FIG. 1A. EMAC core 123 includes Rx engine 850. GMII receive clock signal 841 is provided to Rx engine 850 along with GMII Rx data valid signal 842, GMII Rx data signal 843, and GMII Rx error signal 844.

In an implementation, GMII Rx data signal 843 may be an 8-bit wide signal or signals to provide an 8-bit wide input. EMAC core 123 is configured to provide Rx clock signal 278 responsive to GMII Rx clock signal 841. EMAC core 123 provides Rx clock signal 278 to Rx client interface 128.
Additionally, EMAC core 123 may be configured to provide a divided version of Rx clock signal 278, such as Rx divided by two clock signal 845, to Rx client interface 128. Rx engine 850 is configured to provide Rx data signal 846 and Rx data valid signal 847 responsive to GMII Rx data signal 843 and GMII Rx data valid signal 842, respectively. Rx engine 850 is further configured to provide Rx good frame signal 848 and Rx bad frame signal 849 to Rx client interface 128. Rx data signal 846 and Rx data valid signal 847 are provided from Rx engine 850 to Rx client interface 128. EMAC core 123 is configured to provide a select mode signal 851 to Rx client interface 128. Select mode signal 851 may in an implementation be for selecting a 16-bit wide mode for data processing.

Rx client interface 128 (e.g., Rx DP 128D) is configured to provide data output signal 852 responsive to Rx data signal 846. In an implementation, GMII Rx data signal 843 and Rx data signal 846 may each be 8 bits wide, and data output signal 852 may be a 16-bit wide signal, namely 16 signals provided in parallel to provide the 16-bit wide output.

Rx client interface 128 is configured to provide data valid most significant word output signal 853 and data valid output signal 854 responsive to Rx data signal 846 and Rx data valid signal 847. Rx client interface 128 is configured to provide Rx good frame output signal 855 and Rx bad frame output signal 856 responsive to Rx good frame signal 848 and Rx bad frame signal 849, respectively. Rx client interface 128 converts RX_GOOD_FRAME and RX_BAD_FRAME signals 848 and 849 from an RX_CLK signal 841 domain to an RxDiv2Clk signal 845 domain. A good frame signal may be asserted after the last byte of data is received to indicate reception of a compliant frame.
A bad frame signal may be asserted after the last byte of data is received to indicate reception of a non-compliant frame.

Rx client interface 128 obtains RX_DATA_VALID 847 and RX_DATA[7:0] 846 from a physical layer interface, such as for an Ethernet. In an implementation, this data signaling may be at a frequency up to approximately 250 MHz when an overclocking or a 16-bit mode is used. By registering RX_DATA_VALID 847 and RX_DATA[7:0] 846 upon receipt in Rx client interface 128, design timing is simplified. In an implementation, Rx client interface 128 may assemble two data octets for output to FPGA fabric 101 in 16-bit increments so that FPGA fabric 101 can be run at half of the clock frequency of incoming data while maintaining data throughput.

FIG. 5K is a schematic diagram depicting an exemplary embodiment of an Rx client interface 128 (e.g., Rx DP 128D). Rx client interface 128 outputs two data valid signals, namely dataVldOut 854 and dataVldMSWout 853, to indicate validity of assembled data, such as the two data octets in the above example. Sel16bMode signal 851 indicates whether Rx client interface 128 is used in a particular mode, such as in a 16-bit or an 8-bit mode. Rx client interface 128 processes instances of input data where RX_DATA_VALID 847 is asserted, such as for a received frame having an even or odd number of data octets. Example embodiments of receive output timing are described below.

In FIG. 5K, receive clock signal 278 is provided as a clock input to multiplexer select register A 1051, receive data valid register A 1052, and receive data register A 1053. Data input to multiplexer select register A 1051 is multiplexer select register A input signals 1081, which are described below in additional detail. Receive data valid signal 847 is provided as data input to receive data valid register A 1052.
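The 8-to-16-bit assembly described above, where two received octets are paired into one fabric-side word and the two data valid outputs flag whether both octets carry data, can be sketched as follows. This is a hypothetical helper for illustration only: the byte ordering within the assembled word and the zero-fill of a trailing odd octet are assumptions, not details taken from FIG. 5K.

```python
def assemble_words(octets):
    """Pair consecutive received octets into 16-bit words.

    Returns a list of (word, msw_valid) tuples.  msw_valid is 1 when both
    octets of the word carry data; for a frame with an odd number of
    octets, the final word holds only the low octet and the upper byte is
    zero-filled (ordering and padding are illustrative assumptions).
    """
    words = []
    for i in range(0, len(octets), 2):
        pair = octets[i:i + 2]
        if len(pair) == 2:
            # second octet of the pair occupies the most significant byte
            words.append(((pair[1] << 8) | pair[0], 1))
        else:
            # trailing odd octet: most significant word lane not valid
            words.append((pair[0], 0))
    return words
```

Because one word is emitted per two input octets, downstream logic consuming these words can run at half the incoming octet rate without losing throughput, which is the motivation stated above for the RxDiv2Clk domain.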
Receive data signal 846 is provided as data input to register 1053. Receive clock signal 278 is provided as an input to inverter 1084, the output of which is receive inverted clock signal 1083. Receive inverted clock signal 1083 is provided as a clock input to multiplexer select register 1054, receive data valid register 1055, receive data register 1056, multiplexer select register 1059, data valid register 1060, data register 1061, multiplexer select register two 1062, data valid register two 1063, and data register two 1064. Output of multiplexer select register A 1051 is provided as data input to multiplexer select register 1054.

Output of receive data valid register A 1052 is provided to data input of receive data valid register 1055 and to a logic low input port of multiplexer 1073. Output of receive data register A 1053 is provided as data input to receive data register 1056. Notably, registers 1053 and 1056 may represent multiple registers for processing an eight-bit width of data. Moreover, output of receive data register A 1053 is provided to bus 1085. Another eight-bit wide input coupled to ground 1089 is provided to bus 1085 to create a sixteen-bit wide bus output coupled to a logic low input port of multiplexer 1075.

Output of multiplexer select register 1054 is provided as data input to multiplexer select register 1059. Output of receive data valid register 1055 is provided to an input port of AND gate 1057. Output of receive data register 1056 is provided to an input of AND gate 1058. Additionally, select sixteen-bit mode signal 851 is provided as an input to each of AND gates 1057 and 1058. Output of AND gate 1057 is data valid sixteen-bit mode signal 1087. Output of AND gate 1058 is data sixteen-bit mode signal 1092. Output of AND gate 1057 is provided as a data input to data valid register 1060 and to an input of AND gate 1066.
Output of AND gate 1058 is provided to data register 1061 and to bus 1088. Output of multiplexer select register 1059 is provided as data input to multiplexer select register two 1062. Output of data valid register 1060 is provided to data input of data valid register two 1063, to a logic low input port of multiplexer 1067, and to an input port of AND gate 1065. In addition to data valid sixteen-bit mode signal 1087 provided to an input port of AND gate 1066, multiplexer select register signal 1091 is provided as another input to AND gate 1066.

Output of data register 1061 is provided to data register two 1064 and to buses 1088 and 1086. Output of data register two 1064 is provided to bus 1086. Accordingly, data register output from register 1061, in combination with data sixteen-bit mode signal 1092, provides a sixteen-bit wide input bus to a logic low port of multiplexer 1069. Moreover, data register output from register 1064, in combination with the output from register 1061 provided to bus 1086, provides a sixteen-bit wide input bus to a logic high port of multiplexer 1069.

Multiplexer select register two 1062 output is provided as a control select signal input to multiplexers 1067, 1068, and 1069 to select between logic low and logic high input ports for output respectively from such multiplexers. Output of multiplexer select register two 1062 is multiplexer select register two signal 1090. Output from data valid register two 1063 is provided to a logic high input port of multiplexer 1067 and to an input of AND gate 1065. Output from AND gate 1065 is provided to a logic high input port of multiplexer 1068. Output of AND gate 1066 is provided to a logic low input port of multiplexer 1068.

Output of multiplexer 1067 is data sixteen-bit valid signal 1095 and is provided as data input to register 1070. Output from multiplexer 1068 is data sixteen-bit valid most significant word signal 1096 and is provided as data input to register 1071.
Output from multiplexer 1069 is data sixteen-bit signal 1097, which is provided as a data input to register 1072. Registers 1070, 1071, and 1072 are clocked responsive to receive divided by two clock signal 845. Output of register 1070 is data sixteen-bit valid register signal 1098, which is provided to a logic high input port of multiplexer 1073. Output of register 1071, namely sixteen-bit data valid MSW register signal 1099, is provided to a logic high input port of multiplexer 1074. A logic low input port of multiplexer 1074 is coupled to ground 1089. Output of register 1072 is data sixteen-bit register signal 1079, which is provided to a logic high input port of multiplexer 1075. Output from bus 1085 is provided to a logic low input port of multiplexer 1075. Multiplexers 1073, 1074, and 1075 are provided select sixteen-bit mode signal 851 as a control select input. Output of multiplexer 1073 is data valid output signal 854. Output of multiplexer 1074 is data valid most significant word output signal 853. Output of multiplexer 1075 is data output signal 852.

FIG. 5L is a schematic diagram depicting an exemplary embodiment of a circuit implementation of multiplexer select register A 1051. Receive divide by two clock signal 845 is provided as a data input to register 1101. Notably, registers as described herein may be implemented with flip-flops. Moreover, such flip-flops may have resets, which reset signals are not shown for purposes of clarity. Register 1101 is clocked responsive to receive inverted clock signal 1083.

Output of register 1101 is receive divide by two clock register signal 1111, which is input to AND gate 1102. Receive data valid signal 847 and inverted receive data valid register A signal 1115 are provided as inputs to AND gate 1103. Notably, inverted receive data valid register A signal 1115 may be an inverted version of the data output of receive data valid register A 1052. Output of AND gate 1103 is start pulse signal 1112, which is provided as an input to AND gate 1102.
Another input to AND gate 1102 is select sixteen-bit mode signal 851. Output of AND gate 1102 is provided as a control select input to multiplexer 1105. A logic high input port of multiplexer 1105 is coupled to a logic high bias voltage 1117. A logic low input port of multiplexer 1105 is coupled to receive a feedback output, namely multiplexer select register A output signal 1118. Output of multiplexer 1105 is provided to a logic low input port of multiplexer 1106. A logic high input port of multiplexer 1106 is coupled to a logic low bias, such as ground 1089.

An inverted receive data valid signal 1113 and receive data valid register A signal 1114, which may be data output of receive data valid register A 1052 of FIG. 5K, are input to AND gate 1104. Output of AND gate 1104 is end pulse signal 1116, which is provided as a control select input to multiplexer 1106. Output of multiplexer 1106 is provided as a data input to register 1107. Register 1107 is clocked responsive to receive clock signal 278. Output of register 1107 is multiplexer select register A signal 1118.

FIGS. 5M, 5N, 5O and 5P are respective output timing diagrams of exemplary embodiments of either even or odd receive data byte lengths for when receive client interface 128 is in a 16-bit mode. In FIG. 5M, RX_DATA_VALID signal 847 is generally asserted when RxDiv2Clk signal 845 is generally at a logic high level and RX_DATA[7:0] signal 846 has an even number of bytes. Notably, RX_DATA_VALID signal 847 is generally asserted at 1010 and maintained at the logic high level for reception of all data bytes of RX_DATA[7:0] signal 846, after which RX_DATA_VALID signal 847 is de-asserted after/during reception of the last data byte generally at 1120.

In FIG. 5N, RX_DATA_VALID signal 847 is generally asserted when RxDiv2Clk signal 845 is generally at a logic high level and RX_DATA[7:0] signal 846 has an odd number of bytes.
Notably, RX_DATA_VALID signal 847 is generally asserted at 1010 and maintained at the logic high level for reception of all data bytes of RX_DATA[7:0] signal 846, after which RX_DATA_VALID signal 847 is de-asserted after/during reception of the last data byte generally at 1140.

In FIG. 5O, RX_DATA_VALID signal 847 is asserted when RxDiv2Clk signal 845 is generally at a logic low level and RX_DATA[7:0] signal 846 has an even number of bytes. Notably, RX_DATA_VALID signal 847 is generally asserted at 1150 and maintained at the logic high level for reception of all data bytes of RX_DATA[7:0] signal 846, after which RX_DATA_VALID signal 847 is de-asserted after/during reception of the last data byte generally at 1170.

In FIG. 5P, RX_DATA_VALID signal 847 is asserted when RxDiv2Clk signal 845 is generally at a logic low level and RX_DATA[7:0] signal 846 has an odd number of bytes. Notably, RX_DATA_VALID signal 847 is generally asserted at 1150 and maintained at the logic high level for reception of all data bytes of RX_DATA[7:0] signal 846, after which RX_DATA_VALID signal 847 is de-asserted after/during reception of the last data byte generally at 1120.

FIG. 5Q is an output timing diagram depicting an exemplary embodiment of a bypass mode for when receive client interface 128 is in an 8-bit mode. In a bypass mode, RX_DATA_VALID signal 847 is generally asserted at 1150 and maintained at the logic high level for reception of all data bytes of RX_DATA[7:0] signal 846, after which RX_DATA_VALID signal 847 is de-asserted after/during reception of the last data byte generally at 1120. Both assertion and de-assertion of RX_DATA_VALID signal 847 generally occur while RxDiv2Clk signal 845 is either at a logic low level or at a logic high level. In this example, RX_DATA[7:0] signal 846 has an odd number of bytes, though an even number of bytes may be used in bypass mode.
Furthermore, notably, signals 853, 1090, 1095, 1096, and 1097 are not used in this bypass mode, i.e., maintained de-asserted. It should be understood that each transmit and receive data pathway may be configured for a bit width, such as for example 8 or 16 bits wide, with each such pathway being synchronous to a clock, respectively such as a TX_CLK or an RX_CLK, for independent full-duplex operation.

Physical Layer Interface

Returning to FIG. 1, EMAC 110 can be configured to interface to MII/GMII/MGT physical layer ("PHY") interfaces. Because EMAC 110 uses one and only one PHY interface 119 in operation at a time, all I/O pins for each of PHY interfaces 119 are not used simultaneously. At the same time, processor block 102 has a finite number of I/O pins available due to routing channel requirements in FPGA 100, and thus I/O pins are shared between processor 103 and other functional blocks in processor block 102.

Because processor block 102 has a limited number of I/O pins available at the ASIC processor block-FPGA fabric boundary, FPGA fabric 101 may use FPGA cells to interface to the ASIC/processor block 102 for routing connectivity to ASIC/processor block 102. These FPGA cells are for interfacing processor block 102 to FPGA fabric 101, namely connecting processor block I/O ports to FPGA fabric routing. FPGA termination cell width determines the number of I/O pins possible. As a result, for EMAC 110 to support PHY interfaces 119, PHY I/O pins are re-used ("pin muxing") for each PHY interface 119. Pin muxing reduces the number of I/O pins employed by each of EMAC 110 and EMAC 111 by 39 pins each.
With a total reduction of 78 I/O pins in PHY interfaces 119, along with output pin reductions in statistics interfaces 116, two EMACs 110 and 111 may be implemented in processor block 102 where before there would only be room for one EMAC.

An example implementation for output pin muxing for processor block 102 in a Verilog RTL logic equation listing is:

    EMAC_phyRgmii[7:0] = TIE_configVec[70] ? {RGMII_TXD_FALLING[3:0], RGMII_TXD_RISING[3:0]} : GMII_TXD[7:0];
    EMAC_phyTxEn = TIE_configVec[70] ? RGMII_TX_CTL_RISING : GMII_TX_EN;
    EMAC_phyTxEr = TIE_configVec[70] ? RGMII_TX_CTL_FALLING : GMII_TX_ER;
    EMAC_phyTxD[7:0] = (TIE_configVec[68] || TIE_configVec[69]) ? TXDATA[7:0] : EMAC_phyRgmii[7:0];

    TIE_configVec[68] = CORE_HAS_GPCS
    TIE_configVec[69] = CORE_HAS_SGMII
    TIE_configVec[70] = CORE_HAS_RGMII

An implementation of this example may result in a reduction of 18 output pins.

An example implementation for input pin muxing for processor block 102 in a Verilog RTL logic equation listing is:

    MII_RX_CLK = PHY_emacRxClk;
    GMII_RX_CLK = PHY_emacRxClk;
    GMII_COL = PHY_emacCol;
    TXRUNDISP = PHY_emacCol;
    GMII_RXD[7:0] = PHY_emacRxD[7:0];
    RGMII_RXD_FALLING[3:0] = PHY_emacRxD[7:4];
    RGMII_RXD_RISING[3:0] = PHY_emacRxD[3:0];
    RXDATA[7:0] = PHY_emacRxD[7:0];
    GMII_RX_DV = PHY_emacRxDV;
    RGMII_RX_CTL_RISING = PHY_emacRxDV;
    RXREALIGN = PHY_emacRxDV;
    GMII_RX_ER = PHY_emacRxEr;
    RGMII_RX_CTL_FALLING = PHY_emacRxEr;

An implementation of this example may result in a reduction of 21 input pins.

Statistics Interface

FIG. 7A is a high-level block diagram depicting an exemplary embodiment of a transmit-side ("Tx") statistics interface 1240, which forms a portion of statistics interface 116 of FIG. 1. From transmit engine 820, transmit statistics vector signal 1241 is provided to transmit statistics multiplexer 125. For purposes of clarity by way of example, it will be assumed that transmit statistics vector signal 1241 is a thirty-two-bit wide signal, though other bit widths may be used.
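The output pin muxing listing above can also be modeled behaviorally. The sketch below mirrors the EMAC_phyTxD selection path; the OR of the two configuration bits is inferred from the listing (where an operator appears to have been dropped), so treat it as an assumption rather than the definitive RTL behavior.

```python
def mux_phy_txd(tie_cfg, gmii_txd, rgmii_txd_rising, rgmii_txd_falling, txdata):
    """Behavioral model of the EMAC_phyTxD output pin mux.

    tie_cfg is a dict of TIE_configVec bits: 68 (CORE_HAS_GPCS),
    69 (CORE_HAS_SGMII), 70 (CORE_HAS_RGMII).  Data arguments are
    8-bit values except the two RGMII nibbles, which are 4-bit.
    """
    if tie_cfg.get(70):
        # RGMII mode: {RGMII_TXD_FALLING[3:0], RGMII_TXD_RISING[3:0]},
        # i.e. falling-edge nibble in the upper half of the byte.
        phy_rgmii = ((rgmii_txd_falling & 0xF) << 4) | (rgmii_txd_rising & 0xF)
    else:
        phy_rgmii = gmii_txd & 0xFF
    # GPCS or SGMII configuration routes TXDATA to the shared pins
    # (the OR of bits 68 and 69 is an inference from the listing).
    if tie_cfg.get(68) or tie_cfg.get(69):
        return txdata & 0xFF
    return phy_rgmii
```

The point of the model is that one 8-bit pin group serves GMII, RGMII, and serial-core data depending on static configuration, which is exactly how the 18-output-pin reduction is achieved.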
For example, transmit statistics vector 1241 may be a thirty-two-bit wide vector which is provided to transmit statistics multiplexer 125. A portion of transmit statistics vector 1241 may be siphoned off to provide transmit statistics byte valid signal 1243. For example, the thirtieth bit of transmit statistics vector 1241 may be used to provide transmit statistics byte valid signal 1243.

Transmit engine 820 provides transmit statistics valid signal 1242 to transmit statistics multiplexer 125. A transmit clock signal 821 is provided from EMAC core 123 to transmit statistics multiplexer 125. Outputs from transmit statistics multiplexer 125 are transmit statistics vector output 1244 and transmit statistics valid output 1245. Outputs 1244 and 1245 form a portion of transmit-side statistics interface 1240.

EMAC 110 generates statistics for data traffic responsive to transmitting and receiving each Ethernet frame. At the end of each such frame, EMAC 110 outputs associated transmit and receive statistics vectors to logic configured in FPGA fabric 101 for subsequent collection and processing of such data traffic statistics. In FIG. 6 there is an example implementation of logic instantiated in FPGA fabric for collection and processing of data traffic.

FIG. 6 is a high-level block diagram depicting an exemplary embodiment of EMAC 110 statistics registers, which may be read via a DCR bus. As previously mentioned, processor block 102 is a region isolated for embedded circuitry within FPGA fabric 101. However, processor block 102 may be external to FPGA fabric 101, though access to FPGA fabric 101 is part of implementing embedded circuitry within processor block 102. EMAC client transmit statistics valid signal 1212 is provided via statistics interface 116 from processor block 102.
EMAC client Tx statistics valid signal 1212 may be for EMAC 110 or EMAC 111, where a number sign ("#") as indicated in the drawing is for either a zero or a one to designate one of the two EMACs. EMAC client transmit statistics signal 1213 is provided from processor block 102 via statistics interface 116. Signals 1212 and 1213 are provided to transmit statistics de-multiplexer 1211, which is instantiated in configurable logic of FPGA fabric 101. Output of transmit statistics de-multiplexer 1211 is transmit statistics valid signal 1215 and transmit statistics vector signal 1216, which in an exemplary implementation may be a thirty-two-bit wide vector signal.

Signals 1215 and 1216 are provided to a statistics processing unit 1220, such as a plurality of statistics counters 1220. Statistics counters 1220 may be instantiated in configurable logic of FPGA fabric 101. EMAC client receive statistics valid signal 1221 is provided from processor block 102 via statistics interface 116. EMAC client receive statistics signal 1214, which in an exemplary implementation may be a seven-bit wide signal, is provided from processor block 102 via statistics interface 116.

Receive statistics de-multiplexer 1229 receives signals 1221 and 1214. Receive statistics de-multiplexer 1229 is instantiated in configurable logic of FPGA fabric 101. Output from receive statistics de-multiplexer 1229 is receive statistics valid signal 1228 and receive statistics vector signal 1217. In an exemplary implementation, receive statistics vector signal 1217 may be a twenty-seven-bit wide signal. Signals 1228 and 1217 are provided to the statistics processing unit, which may be implemented as a plurality of statistics counters 1220 instantiated in configurable logic of FPGA fabric 101.

Host MIIM ready signal 1223 is output from statistics counters 1220 responsive to a processor 103 (shown in FIG. 1) read via signals 1218. Host MIIM ready signal 1223 is output to processor block 102 via host bus 118 (shown in FIG.
1), where it may be received as host MIIM select signal 450. Host read data signal 1222, which in an exemplary implementation may be a thirty-two-bit wide signal, is output from statistics counters 1220 responsive to a processor 103 (shown in FIG. 1) read via signals 1218. Host read data signal 1222 is received by processor block 102 via host bus 118 (shown in FIG. 1) as host write data signal 438.

Output from host interface 112 of processor block 102 is host read data signal 445 and host MIIM ready signal 446. Host MIIM ready signal 446 is received by statistics counters 1220 as host MIIM select signal 1219. Host read data signal 445 is received by counters 1220 and broken up into signals 1218. Signals 1218 may in an exemplary implementation include a sixteen-bit hex signal, a host request signal, a two-bit wide host opcode signal, a two-bit binary signal, a host EMAC select signal, and a ten-bit wide host address signal. Signals to and from statistics counters 1220 are provided to and from host interface 112.

Returning to FIG. 7A, because processor block 102 has a limited number of I/O pins available, EMACs 110 and 111 output statistics vectors in a small number of bits for each transmit or receive clock cycle. For example, statistics interface 116 may output seven bits per receive clock cycle. The example of seven bits is selected as an inter-frame gap delay may be as small as four receive clock cycles, and thus a subsequently received frame may be substantially short, i.e., a packet that contains no data. Transmission of a receive statistics vector to logic instantiated in FPGA fabric 101 thus can be completed in four receive clock cycles to provide sufficient time for statistics processing units instantiated in FPGA fabric 101 to accumulate receive statistics provided via statistics interface 116.

For example, for a transmit statistics vector, statistics output may be one bit per transmit clock cycle.
One bit per transmit clock cycle was selected as an example because a transmit side does not have the same restriction as the receive side. As mentioned with reference to FIG. 6, de-multiplexers are instantiated in FPGA fabric 101 to de-multiplex statistics bit output from statistics interface 116. Notably, multiplexing and de-multiplexing of statistics output introduces time delays before statistics vectors may be processed by a statistics collection unit instantiated in FPGA fabric 101. However, because such a statistics collection unit need not be synchronized to the received or transmitted frame that generated the statistics, statistics accumulation proceeds independently from a transmit or receive frame. Statistics processing is configured to complete before a next statistics output such that the next statistics output may be processed.

Continuing the above example, multiplexing of a statistics vector reduces the transmit statistics interface from thirty-two output pins to one output pin and reduces the receive statistics interface from twenty-seven output pins to seven output pins. Accordingly, a total reduction of fifty-one output pins in implementation may be obtained for each EMAC statistics interface 116. The I/O pin reduction for implementation of PHY interfaces 119, as well as the reduction of output pins for statistics interfaces 116, facilitates integrating more than one EMAC, such as EMACs 110 and 111, within processor block 102.

FIG. 7B is a high-level block diagram depicting an exemplary embodiment of a receive-side statistics interface 1260. Receive engine 850 provides receive statistics vector signal 1261 to receive statistics multiplexer 126. For purposes of clarity by way of example and not limitation, it will be assumed that receive statistics vector 1261 is a twenty-seven-bit wide signal which is provided to receive statistics multiplexer 126, though other bit widths may be used.
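Continuing the numeric example above, a twenty-seven-bit receive statistics vector fits over a seven-bit interface in ceil(27/7) = 4 clock cycles, matching the minimum four-cycle inter-frame gap. The sketch below illustrates the arithmetic of that narrow transfer; the chunk ordering is an assumption for illustration, not a detail taken from the hardware.

```python
def serialize_rx_stats(vector, width=27, pins=7):
    """Split a statistics vector into per-cycle chunks for a narrow interface.

    Returns the list of chunk values, least significant chunk first
    (ordering is illustrative only).  For width=27 and pins=7 this yields
    four chunks, i.e. a full vector transfer every four receive clocks.
    """
    cycles = -(-width // pins)        # ceiling division: cycles needed
    mask = (1 << pins) - 1            # per-cycle pin mask
    return [(vector >> (pins * i)) & mask for i in range(cycles)]
```

A de-multiplexer in the fabric performs the inverse operation, reassembling the four seven-bit chunks into the twenty-seven-bit vector before it reaches the statistics counters.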
A portion of receive statistics vector signal 1261 may be used to provide receive statistics byte valid signal 1263. For example, the twenty-second bit of receive statistics vector signal 1261 may be used to provide receive statistics byte valid signal 1263.

Receive engine 850 outputs receive statistics vector 1261 and receive statistics valid signal 1262. Receive statistics valid signal 1262 is provided to receive statistics multiplexer 126. A portion of receive statistics vector 1261 is provided to receive statistics multiplexer 126, and one bit of receive statistics vector 1261 is provided to receive-side statistics interface 1260 as receive statistics byte valid signal 1263. In an implementation, receive statistics byte valid signal 1263 may be a single-bit wide signal, such as the twenty-second bit of receive statistics vector signal 1261, where all the bits, such as bits zero through twenty-six, of receive statistics vector signal 1261 may be provided to receive statistics multiplexer 126.

Receive statistics multiplexer 126 may be clocked responsive to receive clock signal 278 from EMAC core 123. Output from receive statistics multiplexer 126 is receive statistics vector output signal 1264 and receive statistics valid output signal 1265. Receive statistics vector output signal 1264 in an implementation may be a seven-bit wide signal.

FIG. 7C is a block/schematic diagram depicting an exemplary embodiment of transmit statistics multiplexer 125. Transmit statistics vector signal 1241 is input to a logic high port of multiplexer 1282. Transmit statistics valid signal 1242 is provided as a control signal input to multiplexer 1282 to select between high and low logic level ports, and is provided to transmit statistics multiplexer controller 1281.
Other inputs to transmit statistics multiplexer controller 1281 are transmit reset signal 1285 and transmit clock signal 821.

Output from transmit statistics multiplexer controller 1281 is select signal 1289, which for example may be a five-bit wide select signal, which is provided as a control select input to multiplexer 1284. Another output from transmit statistics multiplexer controller 1281 is transmit statistics valid output signal 1243.

Output of multiplexer 1282 is transmit statistics vector multiplex signal 1291, which, continuing the above example, may be a thirty-two-bit wide output. Output of multiplexer 1282 is provided to register 1283, which is clocked responsive to transmit clock signal 821. Output of register 1283 is provided to bus 1287. Continuing the above example, bus 1287 may be a thirty-two-bit wide bus for providing respective inputs, such as thirty-two respective inputs, to multiplexer 1284. Any of such thirty-two inputs to multiplexer 1284 may be selected responsive to five-bit wide select signal 1289 for output as transmit statistics vector output signal 1244. All output of register 1283 is fed back to multiplexer 1282 for input on a logic low port thereof. For example, this feedback may be thought of as transmit statistics vector registered signal 1292, which may be a thirty-two-bit wide signal.

FIG. 7D is a state diagram depicting an exemplary embodiment of a state machine 1300 for transmit statistics multiplexer controller 1281 of FIG. 7C. State machine 1300 is put into idle state 1305 responsive to transmit reset signal 1285 being asserted. State machine 1300 stays in idle state 1305 responsive to transmit statistics valid signal 1242 not being asserted. Notably, while in idle state 1305, the outputs of state machine 1300, namely transmit statistics valid output signal 1243 and select signal 1289, are respectively a logic 0 and a logic 00000 in the above-described exemplary implementation.
If, however, transmit statistics valid signal 1242 is asserted, state machine 1300 transitions from idle state 1305 to state S1 1301.

In state S1 1301, state machine 1300 has transmit statistics valid output signal 1243 equal to a logic 1 and select signal 1289 equal to 00000 in the above-described exemplary implementation. Responsive to the next clock cycle of transmit clock 821, state machine 1300 transitions from state S1 to state S2, where transmit statistics valid output signal 1243 is equal to a logic 1 and select signal 1289 is equal to 00001 in the above-described exemplary implementation.

Accordingly, for each subsequent transmit clock signal 821 cycle, state machine 1300 proceeds to subsequent states, incrementing select signal 1289. Skipping ahead to the last two states for the exemplary implementation, at state S31 1303, transmit statistics valid output signal 1243 will be a logic 1 and select signal 1289 will be a 10001. On the next clock cycle of transmit clock 821, state machine 1300 will transition from state S31 1303 to state S32 1304, where transmit statistics valid output signal 1243 will be a logic 1 and select signal 1289 will be a 10000 for the above-described exemplary implementation. After all bits on bus 1287 of FIG. 7C have been incrementally selected for transmit statistics vector output signal 1244, on the next transmit clock signal 821 cycle, state machine 1300 will transition from state S32 1304 back to idle state 1305.

FIG. 7E is a timing diagram depicting an exemplary embodiment of timing for transmit-side statistics interface 1240 of FIG. 7A. Generally at 1287, transmit statistics valid signal 1242 is pulsed. In response, transmit statistics vector 1241 information is passed to transmit statistics vector output 1244 one bit at a time, because transmit statistics valid output signal 1245 is asserted responsive to pulsing of transmit statistics valid signal 1242.
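In software terms, the transmit-side behavior of FIGS. 7C and 7D can be sketched as follows. This is a minimal model rather than the hardware itself: the function name is invented, the select signal is modeled as a plain binary count from 0 to 31, and one (bit, valid) pair is produced per modeled transmit clock cycle.

```python
def serialize_tx_statistics(vector_bits):
    """Model of state machine 1300 driving multiplexer 1284: after the
    valid pulse latches the thirty-two-bit vector (register 1283), the
    select signal steps through all thirty-two bit positions, emitting
    one statistics bit per transmit clock cycle with valid_out asserted."""
    assert len(vector_bits) == 32
    register = list(vector_bits)              # register 1283 holds the vector
    cycles = []
    for select in range(32):                  # states S1..S32 step the select
        cycles.append((register[select], 1))  # (statistics bit, valid_out = 1)
    cycles.append((0, 0))                     # return to idle: valid_out = 0
    return cycles

cycles = serialize_tx_statistics([1, 0, 1, 1] + [0] * 28)
```

Here the return to idle is modeled as a final cycle with the valid output deasserted, matching the state diagram's transition from S32 back to idle.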
Bits 1288 associated with a transmit statistics vector are provided via transmit statistics vector output signal 1244 while transmit statistics valid output signal 1245 is asserted, as was described with reference to state machine 1300 of FIG. 7D.

FIG. 7F is a block/schematic diagram depicting an exemplary embodiment of receive statistics multiplexer 126. Receive statistics valid signal 1262 is provided as a control select input to multiplexer 1351 and to receive statistics multiplexer controller 1354. Provided to a logic high input port of multiplexer 1351 is receive statistics vector signal 1261, which in the above exemplary implementation is a twenty-seven-bit wide signal. Other inputs to receive statistics multiplexer controller 1354 are receive reset signal 1355 and receive clock signal 278. Outputs from receive statistics multiplexer controller 1354 include select signal 1358 and receive statistics valid output signal 1265. In an exemplary implementation, select signal 1358 is a two-bit wide signal for selecting one of four input port groupings of multiplexer 1353 for output.

Output of multiplexer 1351 is receive statistics vector multiplex signal 1356, which in an exemplary implementation is a twenty-seven-bit wide signal. Receive statistics vector multiplex signal 1356 is provided to register 1352, which is clocked via receive clock signal 278. Output of register 1352 is provided to bus 1322 and fed back to a logic low input port of multiplexer 1351. Output of register 1352 is receive statistics vector registered signal 1357, which in the exemplary implementation is a twenty-seven-bit wide signal. Continuing the above example, the twenty-seven bits output from register 1352 may be grouped as bits zero through six, seven through thirteen, fourteen through twenty, and twenty-one through twenty-six with recycling of bit zero.
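The receive-side grouping just described can be sketched in software; a minimal model (function name invented) in which the twenty-seven registered bits are padded by recycling bit zero and split into four seven-bit groups:

```python
def group_rx_statistics(vector_bits):
    """Split a 27-bit receive statistics vector into the four 7-bit
    groups selected by multiplexer 1353: bits 0-6, 7-13, 14-20, and
    21-26 with bit zero recycled to fill out the final group."""
    assert len(vector_bits) == 27
    padded = list(vector_bits) + [vector_bits[0]]     # recycle bit zero
    return [padded[i:i + 7] for i in range(0, 28, 7)]

groups = group_rx_statistics(list(range(27)))  # bit indices as marker values
```

Using the bit index as a marker value shows the grouping: the last group carries bits twenty-one through twenty-six followed by bit zero again.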
Select signal 1358 selects one of these groupings for output from multiplexer 1353, and then select signal 1358 is incremented to select another group, and so on. In this manner, output of multiplexer 1353 is receive statistics vector output signal 1264, which may be a seven-bit wide output signal in an exemplary implementation.

FIG. 7G is a state diagram depicting an exemplary embodiment of a state machine 1370 for receive statistics multiplexer controller 1354 of FIG. 7F. Responsive to assertion of receive reset signal 1355, state machine 1370 is put in idle state 1375. State machine 1370 stays in idle state 1375 until receive statistics valid signal 1262 is asserted; in other words, state machine 1370 stays in idle state 1375 responsive to non-assertion of receive statistics valid signal 1262.

Responsive to assertion of receive statistics valid signal 1262, state machine 1370 transitions from idle state 1375 to state S1 1371. Outputs of state machine 1370, namely receive statistics valid output signal 1265 and select signal 1358, in state S1 1371 are respectively a logic 1 and a logic 00. On a subsequent receive clock signal 278 cycle, state machine 1370 transitions from state S1 1371 to state S2 1372. Accordingly, receive statistics valid output signal 1265 is maintained at a logic 1 level, and select signal 1358 is incremented to a 01 to select the next grouping of seven bits for output from multiplexer 1353 for receive statistics vector output signal 1264. From state S2 1372, state machine 1370, responsive to the next receive clock signal 278 cycle, transitions to state S3 1373.

In state S3 1373, the outputs of state machine 1370 are a logic 1 and a logic 11 for receive statistics valid output signal 1265 and select signal 1358, respectively. On the next receive clock signal 278 cycle, state machine 1370 transitions from state S3 1373 to state S4 1374.
In state S4 1374, outputs of state machine 1370 are a logic 1 and a logic 10 for receive statistics valid output signal 1265 and select signal 1358, respectively. On the next receive clock signal 278 cycle, state machine 1370 transitions from state S4 1374 back to idle state 1375.

FIG. 7H is a timing diagram depicting an exemplary embodiment of timing for receive statistics multiplexer 126 of FIG. 7B. Receive statistics valid signal 1262 is pulsed generally at 1381, and in response, receive statistics vector output signal 1264 is passed data and receive statistics valid output signal 1265 is held at a logic high state generally through 1383 for data 1384. Receive statistics vector signal 1261 generally at 1382 provides data for receive statistics vector output signal 1264. Notably, each portion of data 1384, of which in this example there are four respective portions, is provided on each clock cycle of receive clock signal 278 after pulsing of receive statistics valid signal 1262.

Address Filter

FIG. 8 is a high-level block diagram depicting an exemplary embodiment of address filter 129 of FIG. 1. Receive data early signal 1421, which in an exemplary implementation may be an eight-bit wide signal, along with receive data valid early signal 1429, is provided to receive client interface 128. Output of receive client interface 128 is provided to CAM 1401, broadcast address module 1402, pause address module 1403, unicast address module 1404, and pause address module 1405. Pause address module 1403 is a factory setting, which may be hard-wired or programmed, whereas pause address module 1405 is configured for inputting an address by a user. Notably, a pause address may be asserted, for example by a client circuit instantiated in configurable logic of FPGA fabric 101, to transmit a pause frame.

Provided to unicast address module 1404 is TIE unicast address signal 1422, which in an exemplary implementation may be a forty-eight-bit wide address signal.
Provided to pause address module 1405 is receive pause address signal 1423, which in an exemplary implementation may be a forty-eight-bit wide address signal. As mentioned above, CAM 1401 may be implemented as a plurality of registers with comparison logic, though an actual CAM may be used.

Address decode, read/write control logic and host control registers ("decode/control circuitry") 1406 is coupled to CAM 1401 and unicast address module 1404. Output from decode/control circuitry 1406 is provided to CAM 1401 and unicast address module 1404. Decode/control circuitry 1406 may be coupled to host bus 160. Notably, DCR bus input to decode/control circuitry 1406 may be provided via host bus 160 or via bidirectional communication with a processor external to processor block 102.

TIE address filter enable signal 1425 may be provided to decode/control circuitry 1406 to provide padding for multicast and unicast addresses. Recall that CAM 1401 is for providing multicast addressing. Outputs from CAM 1401, broadcast address module 1402, pause address module 1403, unicast address module 1404, and pause address module 1405 are provided to respective OR trees 1407 and 1408.

A mode that accepts any destination address, commonly known as "promiscuous mode," may be invoked responsive to promiscuous mode signal 1412, which is provided from decode/control circuitry 1406 to OR tree 1408. Output of OR tree 1408 is address valid early signal 1428. Output from OR tree 1407 is provided as frame drop inverted signal 1426, which is provided as a data input to register 1409. Register 1409 is clocked responsive to receive clock signal 278. Data output of register 1409 is provided as an input to inverter 1411, the output of which is frame drop signal 1427.

Frame data is passed to a client through the Rx data/control client interface. If address filter 129 is activated, only frames having an address matching an address in address filter 129 are passed to the client.
Frames with non-matching addresses are dropped, which dropping is indicated to the client via frame drop signal 1427 being asserted. Notably, when promiscuous mode is invoked, address filter 129 is disabled for filtering addresses, though frame drop signal 1427 may still be asserted.

Receive pause address signal 1423 may be obtained from embedded EMAC host registers. For example, the EMAC host registers may include a receive configuration word zero register, which in an exemplary implementation may support storing a thirty-two-bit wide address, and a receive configuration word one register, which in an exemplary implementation may support storing a sixteen-bit wide address, to determine if an incoming destination address matches a stored address for purposes of rejecting or accepting the incoming received frame.

Host bus 160 may include a version of host bus 118 signals and address filter access signals, as listed for example as signal set (5) in Table 1. For example, host bus 160 may include a host clock signal 440, a host address signal, a host write enable signal, a host read enable signal of signals 464, a host write data signal, a host address filter CAM read signal, an internal MGT host reset signal, and address filter read data signal 434 or 468, 487. In an exemplary implementation, a host address signal may be a ten-bit wide signal; a host read/write data signal may be a thirty-two-bit wide signal; and an address filter read data signal may be a forty-eight-bit wide signal.

Each hard core EMAC contains a receive address filter. Address filtering is for rejecting any incoming receive frame that does not have an acceptable destination address.
When a packet is rejected, the packet's data is not passed to the client.

In this exemplary embodiment, there is programmable unicast destination address matching via unicast address module 1404, programmable multicast address matching via CAM 1401, broadcast address recognition via broadcast address module 1402, an optional/programmable pass-through mode with address filter 129 disabled, pause control frame address recognition via pause address module 1403, and programmable pause frame address matching via pause address module 1405. When address filtering is activated, address filter 129 may have unicast address matching, multicast address matching with CAM, broadcast address recognition, pause control frame address recognition, and programmable pause frame address matching all activated.

Address filter 129 can be programmed to promiscuous mode to accept all incoming frames. In an exemplary implementation, CAM 1401 contains four entries for multicast address matching, though fewer or more entries may be used and CAM size may be adjusted accordingly. Notably, a broadcast address and a pause control address are fixed to respective predefined values for broadcast address module 1402 and pause address module 1403.

Having an embedded ASIC block 102 in an FPGA facilitates implementation of tie-off pins. A tie-off pin may be used to provide TIE address filter enable signal 1425 to activate or deactivate address filtering. TIE_addrFilEn signal 1425 can be programmed when the FPGA is configured. For example, when TIE_addrFilEn signal 1425 is tied to a logic high, address filtering is active. A host processor may overwrite this tie-off value by programming a new value through host bus 118 (shown in FIG. 1) as coupled to host bus 160. Notably, processor 103 may change tie-off pin values via a write to address filter 129 via DCR bus 114 and host interface 112.

Additionally, tie-off pins may be used to provide a particular unicast address for address filter matching, such as via TIE unicast address signal 1422.
This allows address filter 129 to start functioning without having to program a unicast address through host bus 160. To change a unicast address, a host processor can program a new unicast address through host bus 160.

TIE unicast Addr[47:0] signal 1422 provides a unicast address that can be programmed into address filter 129 via tie-off pins when the FPGA is configured.

Again, tie-off pins may be set to a value when the FPGA is configured, and thus use of these tie-off pins allows address filter 129 to start functioning with a unicast address and with address filtering activated or deactivated without any management action from a host processor. Accordingly, address filter 129 may start functioning with a unicast address or in a promiscuous mode without host processor intervention. In an exemplary implementation, address filter 129 may be implemented with standard layout cells in ASIC block 102 to provide an efficient implementation compared to implementation in FPGA programmable logic, thereby increasing FPGA resource availability for instantiation of a user design.

Address filter 129 makes use of the early versions of the pipelined received data valid and data signals, namely RX_DATA_VALID_early 1429 and RX_DATA_early[7:0] 1421, so that address filter 129 has time to compare a received destination address with the addresses stored in address filter 129, and so that EMAC core 123 host registers, namely Receive Configuration Word 0[31:0] and Receive Configuration Word 1[15:0], have time to determine whether to accept or reject an incoming receive frame.

Frame drop signal 1427 indicates to a receive side of client interface 117 to reject an incoming frame.
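The accept/reject decision described above can be summarized in a simplified software model. The function name and the example addresses here are illustrative (the pause control address shown is the standard IEEE 802.3 reserved multicast address); in hardware these comparisons are performed by the address modules and OR trees of FIG. 8.

```python
BROADCAST_ADDR = 0xFFFFFFFFFFFF    # fixed broadcast address
PAUSE_CTRL_ADDR = 0x0180C2000001   # fixed pause control address (IEEE 802.3)

def accept_frame(dest, unicast, cam_entries, user_pause, promiscuous=False):
    """Return True if an incoming frame's destination address passes
    address filtering; promiscuous mode accepts every frame."""
    if promiscuous:
        return True
    accepted = [unicast, user_pause, BROADCAST_ADDR, PAUSE_CTRL_ADDR]
    return dest in accepted or dest in cam_entries

# Hypothetical addresses: a unicast match passes; an unknown address is dropped.
cam = (0x01005E000101, 0x01005E000102)   # up to four multicast CAM entries
passed = accept_frame(0x000A35000001, 0x000A35000001, cam, 0x000A35000099)
dropped = not accept_frame(0x0A0B0C0D0E0F, 0x000A35000001, cam, 0x000A35000099)
```

When a frame is dropped, the model's False result corresponds to frame drop signal 1427 being asserted toward the client interface.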
ADDRESS_VALID_early signal 1428 indicates to EMAC core 123 to pass an incoming frame to a receive side of client interface 117.

FPGAs

Below are some examples of FPGAs in which EMACs 110 and 111 may be implemented. FIG. 9 is a simplified illustration of an exemplary FPGA. The FPGA of FIG. 9 includes an array of configurable logic blocks (LBs 2801a-2801i) and programmable input/output blocks (I/Os 2802a-2802d). The LBs and I/O blocks are interconnected by a programmable interconnect structure that includes a large number of interconnect lines 2803 interconnected by programmable interconnect points (PIPs 2804, shown as small circles in FIG. 9). PIPs are often coupled into groups (e.g., group 2805) that implement multiplexer circuits selecting one of several interconnect lines to provide a signal to a destination interconnect line or logic block. Some FPGAs also include additional logic blocks with special purposes, e.g., DLLs, RAM, and so forth.

One such FPGA, the Xilinx Virtex(R) FPGA, is described in detail in pages 3-75 through 3-96 of the Xilinx 2000 Data Book entitled "The Programmable Logic Data Book 2000" (hereinafter referred to as "the Xilinx Data Book"), published April 2000, available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124, which pages are incorporated herein by reference. (Xilinx, Inc., owner of the copyright, has no objection to copying these and other pages referenced herein but otherwise reserves all copyright rights whatsoever.) Young et al. further describe the interconnect structure of the Virtex FPGA in U.S. Pat. No. 5,914,616, issued Jun. 22, 1999 and entitled "FPGA Repeatable Interconnect Structure with Hierarchical Interconnect Lines," which is incorporated herein by reference in its entirety.

One such FPGA, the Xilinx Virtex(R)-II FPGA, is described in detail in pages 33-75 of the "Virtex-II Platform FPGA Handbook," published December 2000, available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124, which pages are incorporated herein by reference.

One such FPGA, the Xilinx Virtex-II Pro(TM) FPGA, is described in detail in pages 19-71 of the "Virtex-II Pro Platform FPGA Handbook," published Oct. 14, 2002 and available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124, which pages are incorporated herein by reference.

As FPGA designs increase in complexity, they reach a point at which the designer cannot deal with the entire design at the gate level. Where once a typical FPGA design comprised perhaps 5,000 gates, FPGA designs with over 100,000 gates are now common. To deal with this complexity, circuits are typically partitioned into smaller circuits that are more easily handled. Often, these smaller circuits are divided into yet smaller circuits, imposing on the design a multi-level hierarchy of logical blocks.

Libraries of predeveloped blocks of logic have been developed that can be included in an FPGA design. Such library modules include, for example, adders, multipliers, filters, and other arithmetic and DSP functions from which complex designs can be readily constructed. The use of predeveloped logic blocks permits faster design cycles by eliminating the redesign of duplicated circuits. Further, such blocks are typically well tested, thereby making it easier to develop a reliable complex design.

Some FPGAs, such as the Virtex FPGA, can be programmed to incorporate blocks with pre-designed functionalities, i.e., "cores." A core can include a predetermined set of configuration bits that program the FPGA to perform one or more functions. Alternatively, a core can include source code or schematics that describe the logic and connectivity of a design. Typical cores can provide, but are not limited to, digital signal processing functions, memories, storage elements, and math functions. Some cores include an optimally floorplanned layout targeted to a specific family of FPGAs.
Cores can also be parameterizable, i.e., allowing the user to enter parameters to activate or change certain core functionality.

As noted above, advanced FPGAs can include several different types of programmable logic blocks in the array. For example, FIG. 10 illustrates an FPGA architecture 2900 that includes a large number of different programmable tiles including multi-gigabit transceivers (MGTs 2901), configurable logic blocks (CLBs 2902), random access memory blocks (BRAMs 2903), input/output blocks (IOBs 2904), configuration and clocking logic (CONFIG/CLOCKS 2905), digital signal processing blocks (DSPs 2906), specialized input/output blocks (I/O 2907) (e.g., configuration ports and clock ports), and other programmable logic 2908 such as digital clock managers, analog-to-digital converters, system monitoring logic, and so forth. Some FPGAs also include dedicated processor blocks (PROC 2910).

In some FPGAs, each programmable tile includes a programmable interconnect element (INT 2911) having standardized connections to and from a corresponding interconnect element in each adjacent tile. Therefore, the programmable interconnect elements taken together implement the programmable interconnect structure for the illustrated FPGA. The programmable interconnect element (INT 2911) also includes the connections to and from the programmable logic element within the same tile, as shown by the examples included at the top of FIG. 10.

For example, a CLB 2902 can include a configurable logic element (CLE 2912) that can be programmed to implement user logic plus a single programmable interconnect element (INT 2911). A BRAM 2903 can include a BRAM logic element (BRL 2913) in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured embodiment, a BRAM tile has the same height as four CLBs, but other numbers (e.g., five) can also be used.
A DSP tile 2906 can include a DSP logic element (DSPL 2914) in addition to an appropriate number of programmable interconnect elements. An IOB 2904 can include, for example, two instances of an input/output logic element (IOL 2915) in addition to one instance of the programmable interconnect element (INT 2911). As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the I/O logic element 2915 are manufactured using metal layered above the various illustrated logic blocks, and typically are not confined to the area of the input/output logic element 2915.

In the pictured embodiment, a columnar area near the center of the die (shown shaded in FIG. 10) is used for configuration, clock, and other control logic. Horizontal areas 2909 extending from this column are used to distribute the clocks and configuration signals across the breadth of the FPGA.

Some FPGAs utilizing the architecture illustrated in FIG. 10 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA. The additional logic blocks can be programmable blocks and/or dedicated logic. For example, the processor block PROC 2910 shown in FIG. 10 spans several columns of CLBs and BRAMs.

Note that FIG. 10 is intended to illustrate only an exemplary FPGA architecture. The numbers of logic blocks in a column, the relative widths of the columns, the number and order of columns, the types of logic blocks included in the columns, the relative sizes of the logic blocks, and the interconnect/logic implementations included at the top of FIG. 10 are purely exemplary. For example, in an actual FPGA more than one adjacent column of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic.

While FPGA examples have been used to illustrate some embodiments of the present invention, the scope of the present invention is not limited to FPGAs.
Other embodiments include other types of PLDs besides FPGAs. Further embodiments include an IC having programmable logic or programmable interconnections, or both, coupled to an embedded EMAC. Hence the IC, for some embodiments of the present invention, may not be what is called an FPGA, but may have circuits, with some or all functions the same as or similar to those of an FPGA, that are coupled to the embedded EMAC.

Notably, the program(s) of the program product define functions of embodiments in accordance with one or more aspects of the invention and can be contained on a variety of signal-bearing media, such as computer-readable media having code, which include, but are not limited to: (i) information permanently stored on non-writable storage media (e.g., read-only memory devices within a computer, such as CD-ROM or DVD-RAM disks readable by a CD-ROM drive or a DVD drive); (ii) alterable information stored on writable storage media (e.g., floppy disks within a diskette drive, a hard-disk drive, or a read/writable CD or read/writable DVD); or (iii) information conveyed to a computer by a communications medium, such as through a computer or telephone network, including wireless communications. The latter embodiment specifically includes information downloaded from the Internet and other networks. Such signal-bearing media, when carrying computer-readable instructions that direct functions of one or more aspects of the invention, represent embodiments of the invention.

While the foregoing describes exemplary embodiment(s) in accordance with one or more aspects of the invention, other and further embodiment(s) in accordance with the one or more aspects of the invention may be devised without departing from the scope thereof, which is determined by the claim(s) that follow and equivalents thereof. Claim(s) listing steps do not imply any order of the steps. Trademarks are the property of their respective owners.
Headings are provided merely for organizational clarity and are not intended in any way to limit the scope of the disclosure under them.
Provided are techniques for simulating an aperture in a digital imaging device, the aperture simulation generated by a multi-diode pixel image sensor. In one aspect, a method includes detecting light incident on a first light sensitive region on a first photodiode of a pixel, and detecting light incident on a second light sensitive region on a second photodiode of the pixel. The method further includes combining, for each pixel, signals from the first and second light sensitive regions, generating, for a first aperture setting, a first image based at least in part on the light received from the first light sensitive region, and generating, for a second aperture setting, a second image based at least in part on the light received from the second light sensitive region.
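A minimal sketch (hypothetical function, with per-pixel lists standing in for sensor readout) of the combining step described above: the first aperture setting uses only the first light sensitive region, while the second setting here combines both regions, one of the variants the description covers.

```python
def simulate_apertures(inner_signal, outer_signal):
    """Produce two images from one exposure: a narrow-aperture image from
    the inner (first) light sensitive region alone, and a wide-aperture
    image combining the inner and outer (second) regions per pixel."""
    narrow = list(inner_signal)                                 # first setting
    wide = [a + b for a, b in zip(inner_signal, outer_signal)]  # second setting
    return narrow, wide

narrow, wide = simulate_apertures([10, 12, 9], [30, 28, 33])
```

Summing the two regions models the larger effective light-collecting area of the wider simulated aperture; the real sensor combines the photodiode signals in a signal mixer before image generation.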
1.A device for iris simulation, which includes:Pixel array, each pixel includes:A first photodiode including a first photosensitive area configured to detect light incident on the first photosensitive area; andA second photodiode, which includes a second photosensitive area configured to detect light incident on the second photosensitive area, wherein the first photosensitive area is at least partially formed by the second photosensitive area around;A signal mixer coupled to each pixel and configured to combine signals from the first photodiode and the second photodiode for each pixel, from the first photodiode and the second photodiode The signal is in response to light incident on the first photosensitive area and the second photosensitive area, and the signal indicates that the first light energy incident on the first photosensitive area and the light incident on the second photosensitive area The second light energy on; andAt least one logic circuit coupled to the signal mixer and configured to simulate aperture control based on: (i) a simulation setting for a first aperture is based at least in part on the first light-sensitive area A light energy produces a first image; and (ii) the simulated setting for a second aperture produces a second image based at least in part on the second light energy incident on the second photosensitive area.2.The device of claim 1, wherein the second photosensitive area is larger than the first photosensitive area.3.The device of claim 1, wherein the at least one logic circuit is further configured to be based on the first light energy incident on the first photosensitive area and the first light energy incident on the second photosensitive area The combination of the second light energy produces the second image.4.The device of claim 1, further comprising a third photodiode, the third photodiode comprising a third photosensitive area configured to detect light incident on the third photosensitive area , Wherein the first 
photosensitive area is at least partially surrounded by the third photosensitive area, and wherein the signal mixer is further configured to combine from (i) the first photosensitive area, (ii) the second photosensitive area And (iii) the signal of the third photosensitive area, wherein the signals from the first, second, and third photodiodes are responsive to the first, second, and third photosensitive areas incident on each pixel And wherein the logic circuit is further configured to generate a third image based at least in part on the signal from the third photodiode.5.The device of claim 4, wherein the at least one logic circuit is further configured to:Comparing the second image with the third image, the second image being captured by the second photosensitive area, and the third image being captured by the third photosensitive area;Determining the phase difference between the second image and the third image; andCalculate the register value corresponding to the phase difference, wherein the signal mixer is based on the register value combination from (i) the first photosensitive area, (ii) the second photosensitive area, and (iii) the first photosensitive area The signal of three photosensitive areas.6.The device of claim 4, whereinThe first photosensitive area on each pixel is located at a center position relative to the second photosensitive area and the third photosensitive area,The second photosensitive area is located on the left side of the first photosensitive area,The third photosensitive area is located on the right side of the first photosensitive area, andThe second photosensitive area and the third photosensitive area at least partially surround the first photosensitive area.7.The device of claim 1, further comprising a microlens array arranged with respect to the pixel array such that each pixel receives light propagating through at least one microlens.8.7. 
The device according to claim 7, wherein each microlens includes a flat surface and a spherical convex surface, and wherein the first photosensitive area is arranged relative to the microlens such that the center of the first photosensitive area is aligned with the center of the first photosensitive area The vertices of the spherical convex surface of the microlens are vertically aligned.9.A method for simulating an aperture through a pixel array, each pixel includes a first photodiode and a second photodiode, and the method includes:Detecting light incident on the first photosensitive area on the first photodiode;Detecting light incident on a second photosensitive area on the second photodiode, wherein the first photosensitive area is at least partially surrounded by the second photosensitive area;The signals from the first photodiode and the second photodiode are combined for each pixel, and the signals from the first photodiode and the second photodiode respond to incident on the first photosensitive region and the The light on the second photosensitive area, the signal indicating the first light energy incident on the first photosensitive area and the second light energy incident on the second photosensitive area;The simulated setting for the first aperture generates a first image based at least in part on the first light energy incident on the first photosensitive area; andThe simulated setting for the second aperture produces a second image based at least in part on the second light energy incident on the second photosensitive area.10.The method of claim 9, wherein the second photosensitive area is larger than the first photosensitive area.11.The method according to claim 9, further comprising generating a light energy based on a combination of the first light energy incident on the first photosensitive area and the second light energy incident on the second photosensitive area The second image.12.The method of claim 9, further comprising:Detecting light 
incident on a third photosensitive area on a third photodiode, wherein the first photosensitive area is at least partially surrounded by the third photosensitive area; combining signals from (i) the first photosensitive area, (ii) the second photosensitive area, and (iii) the third photosensitive area, wherein the signal from the third photosensitive area is responsive to the light incident on the third photosensitive area and indicates a third light energy incident on the third photosensitive area of each pixel; and generating a third image based at least in part on the third light energy incident on the third photosensitive area.

13. The method of claim 12, wherein the third photosensitive area is larger than the first photosensitive area.

14. The method of claim 12, further comprising generating the third image based on a combination of the first light energy incident on the first photosensitive area, the second light energy incident on the second photosensitive area, and the third light energy incident on the third photosensitive area.

15. The method of claim 9, further comprising arranging a microlens array with respect to the pixel array such that each pixel receives light propagating through at least one microlens.

16. A device for simulating an aperture through a pixel array, the device comprising: means for detecting light incident on a first photosensitive area; means for detecting light incident on a second photosensitive area, wherein the first photosensitive area is at least partially surrounded by the second photosensitive area; means for combining, for each pixel, signals responsive to light incident on the first photosensitive area and the second photosensitive area, the signals indicating a first light energy incident on the first photosensitive area and a second light energy incident on the second photosensitive area; means for generating, for a simulated setting of a first aperture, a first image based at least in part on the first
light energy incident on the first photosensitive area; and means for generating, for a simulated setting of a second aperture, a second image based at least in part on the second light energy incident on the second photosensitive area.

17. The device of claim 16, wherein: the means for detecting light incident on the first photosensitive area is a first photodiode; the means for detecting light incident on the second photosensitive area is a second photodiode; the means for combining signals is an analog signal mixer; and the means for generating the first image and the second image is a logic circuit.

18. The device of claim 16, wherein the second photosensitive area is larger than the first photosensitive area.

19. The device of claim 16, further comprising generating the second image based on a combination of the first light energy incident on the first photosensitive area and the second light energy incident on the second photosensitive area.

20. The device of claim 16, further comprising: means for detecting light incident on a third photosensitive area, wherein the first photosensitive area is at least partially surrounded by the third photosensitive area; means for combining signals from (i) the first photosensitive area, (ii) the second photosensitive area, and (iii) the third photosensitive area, wherein the signal from the third photosensitive area is responsive to a third light energy incident on the third photosensitive area of each pixel; and means for generating a third image based at least in part on the third light energy incident on the third photosensitive area.

21. The device of claim 20, wherein the third photosensitive area is larger than the first photosensitive area.

22. The device of claim 20, further comprising generating the third image based on a combination of the first light energy incident on the first photosensitive area, the second light energy incident on the second photosensitive area, and the third light
energy incident on the third photosensitive area.

23. A non-transitory computer-readable storage medium including instructions that, when executed by a processor of a device, cause the device to: detect light incident on a first photosensitive area on a first photodiode; detect light incident on a second photosensitive area on a second photodiode, wherein the first photosensitive area is at least partially surrounded by the second photosensitive area; combine, for each pixel, signals from the first photodiode and the second photodiode, the signals from the first photodiode and the second photodiode being responsive to the light incident on the first photosensitive area and the second photosensitive area and indicating a first light energy incident on the first photosensitive area and a second light energy incident on the second photosensitive area; for a simulated setting of a first aperture, generate a first image based at least in part on the first light energy incident on the first photosensitive area; and for a simulated setting of a second aperture, generate a second image based at least in part on the second light energy incident on the second photosensitive area.

24. The non-transitory computer-readable storage medium of claim 23, wherein the second photosensitive area is larger than the first photosensitive area.

25. The non-transitory computer-readable storage medium of claim 23, further comprising instructions that cause the device to generate the second image based on a combination of the first light energy incident on the first photosensitive area and the second light energy incident on the second photosensitive area.

26. The non-transitory computer-readable storage medium of claim 23, wherein the second image is generated based on the following formula:

(Es+Eb)·(a0)+Es·(1-a0)

where Es is the first light energy incident on the first photosensitive area, Eb is the
second light energy incident on the second photosensitive area, and a0 is a first configurable register value between zero and one.

27. The non-transitory computer-readable storage medium of claim 23, further comprising instructions that cause the device to: detect light incident on a third photosensitive area on a third photodiode, wherein the first photosensitive area is at least partially surrounded by the third photosensitive area; combine signals from (i) the first photosensitive area, (ii) the second photosensitive area, and (iii) the third photosensitive area, wherein the signal from the third photosensitive area is responsive to the light incident on the third photosensitive area and indicates a third light energy incident on the third photosensitive area of each pixel; and generate a third image based at least in part on the third light energy incident on the third photosensitive area.

28. The non-transitory computer-readable storage medium of claim 27, wherein the third photosensitive area is larger than the first photosensitive area.

29. The non-transitory computer-readable storage medium of claim 27, further comprising instructions that cause the device to generate the third image based on a combination of the first light energy incident on the first photosensitive area, the second light energy incident on the second photosensitive area, and the third light energy incident on the third photosensitive area.

30. The non-transitory computer-readable storage medium of claim 27, wherein the third image is generated based on the following formula:

(Es)·(a0)+(Es+Em)·(a1)+(Es+Em+Eb)·(a2)

where Es is the first light energy incident on the first photosensitive area, Eb is the second light energy incident on the second photosensitive area, Em is the third light energy incident on the third photosensitive area, and a0 is a first configurable register
value between zero and one, a1 is a second configurable register value between zero and one, and a2 is a third configurable register value between zero and one.
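For illustration only, the per-pixel weighted sums recited in the formulas of claims 26 and 30 above can be sketched in code. The function names are hypothetical additions and are not part of the claims; the arithmetic follows the recited formulas (Es+Eb)·(a0)+Es·(1-a0) and (Es)·(a0)+(Es+Em)·(a1)+(Es+Em+Eb)·(a2) directly.

```python
def simulate_second_image(es, eb, a0):
    """Per-pixel value for the simulated second aperture.

    es: light energy incident on the small (first) photosensitive area
    eb: light energy incident on the large (second) photosensitive area
    a0: configurable register value, assumed to lie in [0, 1]
    """
    return (es + eb) * a0 + es * (1.0 - a0)


def simulate_third_image(es, em, eb, a0, a1, a2):
    """Per-pixel value for a three-diode pixel.

    em: light energy incident on the middle (third) photosensitive area
    a0, a1, a2: configurable register values, assumed to lie in [0, 1]
    """
    return es * a0 + (es + em) * a1 + (es + em + eb) * a2
```

With a0 = 0 the second image uses only the small photodiode (small simulated aperture); with a0 = 1 it uses the summed energy of both photodiodes (large simulated aperture).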
Method, device, equipment and storage medium for aperture simulation

Technical Field

The systems and methods disclosed herein relate to aperture simulation, and more specifically, to simulating aperture control using a multi-diode pixel design.

Background

In photography, the amount of light is controlled using the shutter time and a variable opening (or aperture) through which light enters the camera. However, this requires a camera with additional mechanical features that allow the user to adjust the variable opening from the lens or another part of the camera. The size of the aperture affects the depth of field (DOF). A small aperture setting (for example, a high f-number such as f/22) increases the sharpness of distant objects, or in other words increases the DOF, meaning that more of the image, from the foreground to the background, is in sharp focus. Small apertures are commonly used for landscape photographs. A larger aperture may create a bokeh effect when taking pictures, which can create a different perception of depth in the photo, drawing the viewer into the picture. When the camera uses a larger aperture to focus on a point in the scene, the parts of the scene that are not in focus may look extremely blurry relative to the object in focus. Although mobile cameras such as digital cameras and mobile phone cameras have become more popular, mobile cameras generally do not feature a variable aperture due to size and cost considerations.

Summary of the Invention

The systems, methods, devices, and computer program products discussed herein each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of the invention as expressed by the appended claims, some features are briefly discussed below.
After considering this discussion, and particularly after reading the section entitled "Detailed Description", it will be understood how the advantageous features of the present invention include aperture simulation using multi-diode pixel elements.

In one aspect, an apparatus for aperture simulation is provided, which includes a pixel array in which each pixel includes: a first photodiode comprising a first photosensitive area configured to detect light incident on the first photosensitive area; and a second photodiode comprising a second photosensitive area configured to detect light incident on the second photosensitive area, wherein the first photosensitive area is at least partially surrounded by the second photosensitive area. The apparatus further includes a signal mixer, coupled to each pixel, configured to combine for each pixel signals from the first and second photodiodes responsive to light incident on the first and second photosensitive areas, the signals indicating a first light energy incident on the first photosensitive area and a second light energy incident on the second photosensitive area; and at least one logic circuit (for example, a processor, an adder, a multiplier, and/or the like) coupled to the signal mixer and configured to simulate aperture control by: (i) for a simulated setting of a first aperture, generating a first image based at least in part on the first light energy incident on the first photosensitive area; and (ii) for a simulated setting of a second aperture, generating a second image based at least in part on the second light energy incident on the second photosensitive area.

The following are non-limiting examples of some features and embodiments of such aperture simulation devices. For example, in the aperture simulation device the second photosensitive area may be larger than the first photosensitive area.
In some examples, at least one logic circuit is configured to generate a second image based on a combination of first light energy incident on the first photosensitive area and second light energy incident on the second photosensitive area.The aperture simulation device may include a third photodiode that includes a third photosensitive area configured to detect light incident on the third photosensitive area, wherein the first photosensitive area is at least partially composed of The third photosensitive area surrounds, wherein the signal mixer is further configured to combine the signals from the first, second, and third photodiodes in response to incident on the first, second, and third photosensitive areas on each pixel The light comes from the signals of the first, second, and third photodiodes, and wherein the logic circuit is further configured to generate a third image based at least in part on the third light energy incident on the third photosensitive area. The third photosensitive area may be larger than the first photosensitive area.The aperture simulation device may include at least one logic circuit configured to be based on the first light energy incident on the first photosensitive area, the second light energy incident on the second photosensitive area, and the third The combination of the third light energy on the photosensitive area produces a third image.The aperture simulation device may include a microlens array arranged with respect to the pixel array such that each pixel receives light propagating through at least one microlens, wherein each microlens includes a flat surface and a spherical convex surface, and wherein The first light sensing element is arranged relative to the microlens such that the center of the first light sensing element is vertically aligned with the center of the microlens.In another aspect, a method for simulating an aperture through an image pixel array is provided. 
Each image pixel includes a first photodiode and a second photodiode, including: detecting light incident on a first photosensitive area on the first photodiode Light; detecting light incident on the second photosensitive area on the second photodiode, wherein the first photosensitive area is at least partially surrounded by the second photosensitive area; for each pixel combination in response to incident on the first and second photosensitive areas The light from the first and second photodiodes indicates the first light energy incident on the first photosensitive area and the second light energy incident on the second photosensitive area; the simulation setting for the first aperture is at least The first image is generated based in part on the first light energy incident on the first photosensitive area; and the second image is generated based at least in part on the second light energy incident on the second photosensitive area for the second aperture simulation setting.For some embodiments, the second photosensitive area is larger than the first photosensitive area. For some embodiments, the method of simulating an aperture may include generating a second image based on a combination of the first light energy incident on the first photosensitive area and the second light energy incident on the second photosensitive area.For some embodiments, the method of simulating the aperture may include a third photodiode that includes a third photosensitive area configured to detect light incident on the third photosensitive area, wherein the third photodiode includes a third photosensitive area configured to detect light incident on the third photosensitive area. A photosensitive area is at least partially surrounded by a third photosensitive area, wherein the signal mixer is further configured to combine signals from the first, second, and third photodiodes in response to the first, second, and third photodiodes incident on each pixel. 
The light on the third photosensitive area comes from the signals of the first, second, and third photodiodes, and wherein the logic circuit is further configured to generate a third light energy based at least in part on the third light energy incident on the third photosensitive area. image. For some embodiments, the third photosensitive area is larger than the first photosensitive area.For some embodiments, the method of simulating the aperture may include a method based on the first light energy incident on the first photosensitive area, the second light energy incident on the second photosensitive area, and the third light energy incident on the third photosensitive area. The combination of produces a third image.For some embodiments, the method of simulating an aperture may include using a microlens array that is arranged relative to the pixel array such that each pixel receives light propagating through at least one microlens.In another aspect, a system for simulating an aperture through an image pixel array is provided, which includes: a device for detecting light incident on a first photosensitive area; and a device for detecting light incident on a second photosensitive area. 
The device, wherein the first photosensitive area is at least partially surrounded by the second photosensitive area; a device for combining a signal responsive to light incident on the first and second photosensitive areas for each pixel, the signal indicating the incident on the first The first light energy on a photosensitive area and the second light energy incident on the second photosensitive area; used to simulate the setting for the first aperture to generate the first light energy based at least in part on the first light energy incident on the first photosensitive area An image device; and a device for generating a second image based at least in part on the second light energy incident on the second photosensitive area for the second aperture simulation setting.For some embodiments, the device for detecting light incident on the first photosensitive area is a first photodiode, and the device for detecting light incident on the second photosensitive area is a second photodiode, so The device for responding to the light combined signal incident on the first and second photosensitive regions is an analog signal mixer, and the device for generating the first image and the second image is a logic circuit.For some embodiments, the second photosensitive area is larger than the first photosensitive area. 
For some embodiments, generating the second image is based on a combination of the first light energy incident on the first photosensitive area and the second light energy incident on the second photosensitive area.For some embodiments, the apparatus includes means for detecting light incident on the third photosensitive region, wherein the first photosensitive region is at least partially surrounded by the third photosensitive region, and wherein the means for combining is further configured to Combining the signals from the first, second and third photosensitive areas, the signals from the first, second and third photosensitive areas in response to the light incident on the first, second and third photosensitive areas on each pixel, And wherein the device for generating the first image and the second image is further configured to generate a third image based at least in part on the third light energy incident on the third photosensitive area. For some embodiments, the third photosensitive area is larger than the first photosensitive area. 
For some embodiments, the device includes a combination based on the first light energy incident on the first photosensitive area, the second light energy incident on the second photosensitive area, and the third light energy incident on the third photosensitive area Generate a third image.In another aspect, a non-transitory computer-readable storage medium is provided, which includes instructions executable by a logic circuit of a device, so that the device: detects incident light on a first photosensitive area on a first photodiode Light; detecting light incident on the second photosensitive area on the second photodiode, wherein the first photosensitive area is at least partially surrounded by the second photosensitive area; for each pixel combination in response to incident on the first and second photosensitive areas The light from the first and second photodiodes indicates the first light energy incident on the first photosensitive area and the second light energy incident on the second photosensitive area; the simulation setting for the first aperture is at least The first image is generated based in part on the first light energy incident on the first photosensitive area; and the second image is generated based at least in part on the second light energy incident on the second photosensitive area for the second aperture simulation setting.For some embodiments, the second photosensitive area is larger than the first photosensitive area. For some embodiments, the non-transitory computer-readable storage medium may include causing the device to generate based on a combination of first light energy incident on the first photosensitive area and second light energy incident on the second photosensitive area. Instructions for the second image. 
For some embodiments, the non-transitory computer-readable storage medium may include instructions that cause the device to detect light incident on a third photodiode that includes a third photosensitive area, wherein the first photosensitive area is at least partially surrounded by the third photosensitive area, wherein the signal mixer is further configured to combine the signals from the first, second, and third photodiodes responsive to the light incident on the first, second, and third photosensitive areas on each pixel, and to generate a third image based at least in part on the third light energy incident on the third photosensitive area. In some embodiments, the third photosensitive area is larger than the first photosensitive area. In some embodiments, the non-transitory computer-readable storage medium may include instructions that cause the device to generate the third image based on a combination of the first light energy incident on the first photosensitive area, the second light energy incident on the second photosensitive area, and the third light energy incident on the third photosensitive area.
In some embodiments, the second image is generated based on the following formula:

(Es+Eb)·(a0)+Es·(1-a0)

where Es is the first light energy incident on the first photosensitive area, Eb is the second light energy incident on the second photosensitive area, and a0 is a first configurable register value between zero and one.

In some embodiments, the third image is generated based on the following formula:

(Es)·(a0)+(Es+Em)·(a1)+(Es+Em+Eb)·(a2)

where Es is the first light energy incident on the first photosensitive area, Eb is the second light energy incident on the second photosensitive area, Em is the third light energy incident on the third photosensitive area, and a0 is a first configurable register value between zero and one, a1 is a second configurable register value between zero and one, and a2 is a third configurable register value between zero and one.

Description of the Drawings

Figure 1 illustrates an example ray tracing of light entering a camera lens and directed to multiple multi-diode pixels in an image sensor.

Figure 2 illustrates an example set of columns of multi-diode pixels and a set of circuits for each column.

Figures 3A to 3C illustrate exemplary schematic diagrams of two-diode pixels.

Figure 4 illustrates a set of three example pixel positions in a pixel array and the corresponding diode configuration according to the pixel position.

Figure 5 illustrates a multi-diode pixel including three light-sensing surfaces.

Figure 6 illustrates an exemplary schematic diagram for a three-diode pixel that can combine energy collected from a small photodiode with energy collected from a middle diode and energy collected from a large diode.

Figure
7A illustrates an exemplary configuration of three photosensitive surfaces of a multi-diode pixel.

Figure 7B illustrates an exemplary array of multi-diode pixels containing three photosensitive surfaces for combined aperture simulation and phase detection autofocus.

Figures 8A to 8D are illustrations of Bayer color filter patterns on a 2x2 array of multi-diode pixels.

Figures 9A to 9B illustrate a method of using an analog aperture device.

Detailed Description

The size of a camera can be reduced by reducing the size of the camera components or by eliminating one or more of the components. For example, the aperture structure (sometimes a combination of shutter and aperture, that is, an aperture shutter) can be removed altogether to form a compact digital camera that is easily integrated with other devices. Although some compact mobile devices include digital cameras with apertures, reducing the aperture to fit compact implementations often poses challenges. First, a compact aperture structure is very complicated, so there is a risk of cracking or clogging. Second, the shape of such an aperture is not completely circular, which can produce distortion in the picture. In addition, the weight and size of the aperture cannot easily be reduced by conventional means, and the additional components required by the aperture may increase the thickness of the camera. Finally, due to the complexity of the aperture structure, the manufacture of compact aperture implementations can be complicated and time-consuming. Accordingly, in embodiments such as camera phones, aperture simulation reduces cost and frees up space, while still allowing manual and automatic aperture adjustments.
Therefore, it may be desirable to simulate an aperture in the digital image pixels in order to capture an image containing high DOF, but the image may also contain, for example, a bokeh effect or the like.The following detailed description relates to certain specific embodiments of the present invention. However, the present invention can be implemented in many different ways. It should be obvious that the aspects herein can be implemented in various forms, and any specific structure, function, or both disclosed herein are only representative. Based on the teachings herein, those skilled in the art should understand that the aspects disclosed herein can be implemented independently of any other aspects, and two or more of these aspects can be combined in various ways. For example, any number of aspects set forth herein can be used to implement a device or method of practice. In addition, by using other structures, functionality, or structure and functionality other than one or more of the aspects set forth herein or different from one or more of the aspects set forth herein, it is possible to implement This equipment may be able to practice this method.The examples, systems, and methods described herein are described with respect to digital camera technology. The systems and methods described herein can be implemented on a variety of different photosensitive devices or image pixels. These include general or special image pixels, environments, or configurations. 
Examples of photosensitive devices, environments, and configurations that can be suitable for use with the present invention include, but are not limited to, semiconductor charge coupled devices (CCD) or in complementary metal oxide semiconductor (CMOS) or N-type metal oxide semiconductor (NMOS) technology Active pixel sensors, all of them can be closely related in a variety of applications including: digital cameras, handheld or laptop computer devices, and mobile devices (e.g., phones, smart phones, personal data assistants ( PDA), Ultra Mobile Personal Computer (UMPC), and Mobile Internet Device (MID)).System OverviewFigure 1 depicts an example ray tracing 100 of the focus situation. The light travels from the focal point in the target scene 130, travels through the lens 125 to focus the target scene 130 onto an image sensor containing a plurality of pixel 120 elements, and then falls into the small photodiode 115 and the large photodiode 116 incident on each pixel 120 . Digital cameras can contain additional lens elements. Figure 1 illustrates a single lens 125 element for explanatory purposes. As explained, the pixels receive light from the left direction L(X) and the right direction R(X) of the lens 125. Each pixel may include a multi-diode micro lens (MDML) 105 overlying the photosensitive area of the pixel 120. In some embodiments, each MDML 105 may contain a polymer between 1 micrometer and 10 micrometers on each pixel 120, which has a flat surface and a spherical convex surface to refract light. In another embodiment, each MDML 105 may have a non-spherical shape or any other shape designed to focus light into the photodiode of the pixel. The array of MDML 105 can be used for the overlying pixel array to increase the light collection efficiency of the large photodiode 116 and the small photodiode 115. Specifically, the MDML 105 can collect and focus the light incident on the pixel to the small photodiode 115.Still referring to FIG. 
1, the large photodiode 116 and the small photodiode 115 of the pixel 120 may be overlaid with a color filter 110, so that each pixel 120 individually detects the wavelength of light associated with a different color. For example, the pixel 120 may be designed to detect a first, second, or third color (e.g., red, green, or blue wavelength). To achieve this, each pixel 120 in the pixel array may be covered by a single color filter (for example, a red, green, or blue filter). A single color filter may be arranged in a pattern to form a color filter array (CFA) on the pixel array such that each individual filter in the CFA is aligned with one individual pixel 120 in the array. Correspondingly, each pixel in the array can detect the monochromatic light corresponding to the filter aligned with it. An example of a CFA pattern is the Bayer CFA, where the array part is composed of alternating rows of red and green filters and alternating blue and green filters. Each color filter corresponds to a pixel 120 in the underlying pixel array. In Bayer CFA, half of the color filters are green filters, a quarter of the color filters are blue filters, and a quarter of the color filters are red filters. The use of the green filter as much as twice the red filter and blue filter, respectively, simulates the greater ability of the human eye to see green light compared to red and blue light. Each pixel in the Bayer CFA is sensitive to light of a different color compared to its closest neighbor. For example, the closest neighbors of each green filter are the red filter and the blue filter, the closest neighbors of each red filter are the green filters, and each The closest neighbor of a blue filter is the green filter. Because the closest neighbor of each filter has a different color identity compared to it, only one corresponding pixel is overlaid on each filter. 
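As an illustrative sketch (not from the source document) of the Bayer CFA layout described above, the following generates an RGGB tiling and exhibits the stated proportions: half of the filters green, a quarter red, and a quarter blue, with each filter's closest neighbors of a different color.

```python
def bayer_pattern(rows, cols):
    """Generate a Bayer color filter array as a list of rows.

    Uses the common RGGB tiling: even rows alternate R/G and odd
    rows alternate G/B, so half of the filters are green, a quarter
    are red, and a quarter are blue.
    """
    pattern = []
    for r in range(rows):
        row = []
        for c in range(cols):
            if r % 2 == 0:
                row.append('R' if c % 2 == 0 else 'G')
            else:
                row.append('G' if c % 2 == 0 else 'B')
        pattern.append(row)
    return pattern
```

For a 4x4 array this yields 8 green, 4 red, and 4 blue filters, matching the 2:1:1 ratio the text attributes to the human eye's greater sensitivity to green light.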
The color filter material is composed of dyes or, more commonly, pigments that define the frequency spectrum of the color filter 110. The size of each color filter may correspond to the size of the pixel, for example, in a 1:1 ratio. However, in another embodiment, each color filter 110 may be larger or smaller than the corresponding pixel 120; the ratio of the size of the color filter 110 to the size of the pixel 120 can be any integer or decimal number. In this embodiment, each pixel of the image sensor may include a plurality of color filter elements 110, wherein each of the plurality of color filter elements 110 overlies a photodiode. In this configuration, the color filter elements may contain patterns of colors similar to those discussed in further detail below with reference to FIGS. 8A to 8D.

Still referring to FIG. 1, the pixel 120 may include two photodiodes: a small photodiode 115 and a large photodiode 116. A pinned photodiode can be used as an example of such a light-sensing element, but it should be clear to those skilled in the art that other light-sensing elements can also be used. The pixel 120 may further include other readout elements, which may operate individually for each photodiode, or the two diodes may share some common readout elements. This can increase the fill factor of the photodiodes. These pixels can be replicated in the horizontal direction with a fixed pixel pitch so as to form a row of pixels. Each imager may include a plurality of rows of such pixels having substantially the same pixel pitch in the vertical direction as in the horizontal direction, so as to form a two-dimensional pixel array 200. In one embodiment, the large photodiode 116 may substantially at least partially surround the small photodiode 115. The surface area of the small photodiode 115 may be a fraction of that of the large photodiode 116.
The term "substantially" as used herein indicates a tolerance within 10% of the indicated measurement value or position.

FIG. 2 illustrates two example columns of pixels 120, where each column contains circuits for reading and conditioning analog signals 225, 230 from a large photodiode 116 and a small photodiode 115, respectively. The circuit may sequentially receive analog signals from each pixel of the column. In an alternative embodiment, the pixels 120 may be read row by row by a circuit that combines the analog signals of each row. In both of these embodiments, the pixels of each row can be read simultaneously, resulting in faster processing of digital images. In another alternative embodiment, the image sensor may contain one or more sets of circuits for receiving and conditioning the analog signals 225, 230; in the case of a single set of circuits, each pixel 120 is read sequentially.

Still referring to FIG. 2, the circuit may include an analog signal mixer 205 for receiving the analog signals 225, 230 generated by each pixel 120. The analog mixer may include a non-linear circuit that forms and outputs one or more new frequencies from one or both of the analog signals 225, 230 it receives. In one embodiment, the analog signal mixer 205 receives the two analog signals 225, 230 as input signals and outputs a signal that is the sum of the two analog signals 225, 230. The signal mixer 205 can multiply the received analog signals by a factor and perform an additional step of summing the resulting signals. For example, the signal mixer can generate the sum of the two input analog signals 225, 230 and multiply the resulting signal by a factor between 0 and 1.

Still referring to FIG. 2, the circuit may also include a charge transfer amplifier 210 coupled to receive the analog signal output generated by the analog signal mixer 205.
The amplifier 210 may amplify the analog signal output to generate an amplified pixel voltage signal, increasing the intensity (for example, the voltage or current) of the pixel analog signals 225, 230. The charge transfer amplifier 210 generates a pixel voltage signal having an increased voltage amplitude compared to the voltage signals generated by the small photodiode 115 and the large photodiode 116 of each pixel, and provides the increased voltage to an analog-to-digital conversion circuit (ADC) 215. Integrating the charge transfer amplifier in the pixel 120 may increase the sensitivity of each of the pixels 120, and thus provide a digital image sensor with increased sensitivity and dynamic range. The operation of the charge transfer amplifier can be controlled by a control signal generated by the analog signal mixer 205. The control signal may also be a common signal for driving a column or row 235 of pixels in the image sensor pixel array 200, or a common driving signal for driving the pixels 120 in the entire pixel array 200.

Still referring to FIG. 2, the ADC 215 may be coupled to the output of the amplifier 210. The ADC 215 can be shared among rows or columns 235 of pixels. The amplified pixel 120 value can be converted into a digital signal to be read and processed by digital circuitry, because digital circuits can offer advantages over analog circuits in processing speed and efficient transmission of information. Each ADC 215 may perform analog-to-digital conversion of the output voltage signal of the amplifier 210 to obtain a digitized pixel 120 voltage signal indicating the amount of exposure of each of the small photodiode 115 and the large photodiode 116 in each pixel 120. The ADC 215 can be implemented using any known A/D conversion technology and can have any accuracy (for example, 8, 10, or 16 bits or more).
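As a rough illustration of the readout chain of FIG. 2 (mixer 205, amplifier 210, ADC 215), the following sketch models mixing as a weighted sum, amplification as a fixed gain, and the ADC as 10-bit quantization. All numeric values (gain, full-scale voltage, bit depth, input voltages) are illustrative assumptions, not taken from the text:

```python
# Rough model of the readout chain of FIG. 2: analog signal mixer 205
# (weighted sum), charge transfer amplifier 210 (fixed gain), and
# ADC 215 (10-bit quantization). All numeric values are illustrative
# assumptions.

def adc(voltage: float, full_scale: float = 1.0, bits: int = 10) -> int:
    """Quantize an analog voltage into an N-bit digital code."""
    code = int(voltage / full_scale * (2 ** bits - 1))
    return max(0, min(code, 2 ** bits - 1))  # clamp to the ADC range

def readout_chain(v_large: float, v_small: float,
                  mix_factor: float = 0.5, gain: float = 2.0) -> int:
    mixed = (v_large + v_small) * mix_factor  # analog signal mixer 205
    amplified = mixed * gain                  # charge transfer amplifier 210
    return adc(amplified)                     # ADC 215

# Two digitized pixel values, as would be stored in the buffer 220:
codes = [readout_chain(0.2, 0.1), readout_chain(0.4, 0.1)]
```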
The ADC 215 may be controlled by a clock (CLK) signal and digitize the analog pixel voltage signal when triggered by the CLK signal. The image sensor may include other control circuits, such as a clock generation circuit and other global control circuits not shown in FIG. 2. The ADC circuit can output the digitized analog pixel voltage signal to the buffer 220. The buffer may temporarily store the digital data from the ADC 215 before providing it to the logic circuit. The logic circuit may include, for example, one or more of a processor, an application specific integrated circuit (ASIC), and/or an image signal processor (ISP). The logic circuit may include, for example, an adder circuit or a multiplier circuit or both, or components thereof, where the adder circuit and/or the multiplier circuit may operate in the digital domain, the analog domain, or both.

In one aspect, the pixel array 200, the analog signal mixer 205, and the amplifier 210 together can perform functions including: (1) photon-to-charge conversion; (2) image charge accumulation; (3) mixing and amplification of the charge signals; (4) conversion of the amplified mixed signals into digital signals; and (5) storage of the digital signals representing the charges of the pixels 120 in the buffer.

In another aspect, the analog signals 225, 230 from the large photodiode 116 and the small photodiode 115, respectively, can be separately converted from analog to digital signals without using the analog signal mixer 205. In such a configuration, the digital signals of both the large photodiode 116 and the small photodiode 115 are mixed after digitization of the corresponding analog signals by the ISP or by a system on chip (SoC) associated with the processor.

Example pixel architecture

FIG. 3A illustrates an example pixel 120 that includes light-sensing elements of different sizes. FIG. 3A is only an example and is not drawn to scale.
An image sensor using such pixels 120 can simulate an aperture by using an arrangement of different sensing elements, as discussed below. One method that can be used to provide increased dynamic range is to provide pixels with two light-sensing elements per pixel: a small photodiode 115 in the center, and a large photodiode 116 at least partially surrounding the small photodiode 115. In this diagram of the pixel 120, the large photodiode 116 may be referred to as Dlarge, and the small photodiode 115 may be referred to as Dsmall. The pixel 120 may further include other readout elements, which may operate individually for each photodiode, or the two photodiodes may share some common readout elements. Such sharing can increase the fill factor of the photodiodes. These pixels can be repeated in the horizontal direction with a fixed pixel pitch to form a row of pixels.

An image sensor including pixels (for example, pixels 120) having different sensing elements may differ from the aforementioned image sensors in a number of ways. For example, the large photodiode 116 and the small photodiode 115 of the visible image sensor may have different integration times. For example, the large photodiode 116 may have a longer integration time than the small photodiode 115, or vice versa. In another example, both the large photodiode 116 and the small photodiode 115 may have substantially the same integration time, or the integration times may be user-configurable. The term "substantially" as used herein indicates a tolerance within 10% of the indicated measurement value.

FIG. 3B illustrates an example circuit based on a low-noise 4-transistor (4T) pixel, which can include separate transfer gates, a mixer large (Mxl) 304 and a mixer small (Mxs) 308, for the large photodiode 116 Dlarge and the small photodiode 115 Dsmall, respectively.
As illustrated by the dashed line for the large photodiode 116 Dlarge and the shaded area of the small photodiode 115 Dsmall, the diodes may have different sizes, with the larger size being used for the large photodiode 116 Dlarge. Although a circular shape is depicted for the large photodiode 116, in some aspects it may be preferable to have a more controlled shape for each diode (for example, a rounded rectangular shape) in order to facilitate charge transfer. Other circuits supporting the pixel may include a reset transistor, main reset (Mrst) 325, and a readout branch composed of a source follower transistor, main source follower (Msf) 330, and a row selection transistor, main selector (Msel) 335.

Still referring to FIG. 3B, in this type of pixel 120, incoming photons are converted into electron-hole pairs in the silicon substrate. The photoelectrons are then collected by the two photodiodes Dlarge and Dsmall. The integration time of the large photodiode 116 Dlarge, the small photodiode 115 Dsmall, or both may start at time T0. At this time, reset (RST), large transfer field (XRFL), and small transfer field (XRFS) may be high for an amount of time, turning on the transistors Mrst 325, Mxs 308, and Mxl 304. This may empty all electrons from the photodiodes 115, 116 and set them to a predetermined voltage. Once XRFL and XRFS are set to a low voltage, Mxs 308 and Mxl 304 are turned off, and the photodiodes start to collect photoelectrons, causing their voltages to drop. In general, the rate of accumulation of such photoelectrons is proportional to the amount of incident light impinging on the large photodiode 116 and the small photodiode 115, and is therefore a function of both the light intensity and the area of the photodiode.

As mentioned above, the large photodiode 116 may be configured to collect light for a defined period of time.
While the large photodiode 116 collects electrons, the small photodiode 115 can also collect electrons, but these may not be used. The small photodiode 115 Dsmall can be reset by setting both RST and XRFS to high values. This reset can discard any photoelectrons that Dsmall has already collected and can instruct Dsmall to start collecting photoelectrons again.

In addition, Dsmall can be configured to collect light for a period of time. While Dsmall collects electrons, Dlarge can also collect electrons, but these may not be used. The large photodiode 116 Dlarge can be reset by setting both RST and XRFL to high values. This reset can discard any photoelectrons that Dlarge has already collected and instruct Dlarge to start collecting photoelectrons again.

At the end of the integration time, a correlated double sampling (CDS) operation can be used to read out the accumulated charge on the diode. To this end, first Mrst 325 is turned on by setting RST high, which sets the floating node (FN) to the reset voltage (the CELLHI bias threshold of Mrst 325). After this, the SEL signal can be set high, which can turn on Msel 335 to enable pixel readout. If the bus is connected to a current source, Msf 330 acts as a source follower, causing the bus voltage to track the voltage of FN. Once the reset voltage of FN has been read, Mxl 304 is turned on by setting XRFL high, thereby dumping all the photoelectrons collected in Dlarge 116 onto FN and reducing the voltage of FN. After that, the bus voltage can follow the reduced voltage of FN, and if SEL is set high, a second readout can be performed through the source follower. The difference between the two readouts can be used to determine the precise voltage change on node FN due to the photoelectrons collected by Dlarge. Additional column circuits can also be used to store such information and to enable further processing, such as amplification, digitization, and other processing.
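The CDS readout described above amounts to subtracting the signal-level sample of node FN from its reset-level sample, so that offsets common to both samples cancel. A minimal sketch, with illustrative voltages:

```python
# Minimal sketch of the correlated double sampling (CDS) operation
# described above: the pixel output is the reset-level readout of the
# floating node FN minus the signal-level readout taken after the
# photoelectrons are dumped onto FN. Because both samples share the
# same reset offset, that offset cancels. Voltages are illustrative.

def cds_readout(v_reset: float, v_signal: float) -> float:
    """Return the FN voltage change caused by collected photoelectrons."""
    return v_reset - v_signal  # FN voltage drops as electrons arrive

# Reset level 1.8 V; FN drops to 1.2 V after Dlarge's charge transfer:
delta_v = cds_readout(1.8, 1.2)  # about 0.6 V of photo-signal
```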
In general, the CDS operation can reduce the effects of transistor variations and certain temporal noise that can be present. In some aspects, the time difference between two XRFL pulses (one for reset and one for readout) may represent the integration time of the large photodiode 116. Once the large photodiode 116 Dlarge has been read out, another CDS operation may be performed to read out the small photodiode 115 Dsmall. This operation may be similar to the operation described above with respect to the large photodiode 116.

In the CDS operation for reading out Dsmall, Mxs 308 can be turned on by setting XRFS high for the small photodiode 115. The time between the two XRFS pulses is the integration time of Dsmall. When a readout scheme using CDS is performed on the large photodiode and the small photodiode of the pixel at different times, the line buffer 220 may store the information from the large photodiode 116. Once the small photodiode 115 of the pixel is read out, its result can be combined with the stored result from the associated large photodiode 116 to form the final pixel output value. Therefore, the additional memory requirements of this dual-diode configuration are minimal. In another example embodiment, the CDS operation may be performed for both the large photodiode 116 and the small photodiode 115 of a given pixel 120 at the same time.

FIG. 3C illustrates an example circuit based on a low-noise 3-transistor pixel, which may include a separate transfer gate Mxl 304 for the large photodiode 116. As illustrated by the dashed line for the large photodiode 116 Dlarge and the shaded area of the small photodiode 115 Dsmall, the diodes can have different sizes, with the larger size being used for Dlarge 116. Other circuits supporting the pixel may include a reset transistor Mrst 325 and a readout branch composed of a source follower transistor Msf 330 and a row select transistor Msel 335.

Still referring to FIG.
3C, in this type of pixel 120, incoming photons are converted into electron-hole pairs in the silicon substrate. The photoelectrons are then collected by the two photodiodes Dlarge and Dsmall. The integration time of the large photodiode 116 Dlarge can start at time T0. At this time, both RST and XRFL may be high for a certain amount of time, thereby turning on the transistors Mrst 325 and Mxl 304. This may empty all electrons from the large photodiode 116 and may set it to a predetermined voltage. Once XRFL is set to a low voltage, Mxl 304 can be turned off, and the large photodiode starts to collect photoelectrons, causing its voltage to drop. In general, the rate of accumulation of such photoelectrons is directly proportional to the amount of incident light impinging on Dlarge, and is therefore a function of both light intensity and photodiode area. Dsmall can be configured to collect light for a period of time. While Dsmall collects electrons, Dlarge can also collect electrons, but these may not be used. The large photodiode 116 Dlarge can be reset by setting both RST and XRFL to high values. This reset can discard any photoelectrons that Dlarge has already collected and allow Dlarge to start collecting photoelectrons again.

Additional photodiode placement

FIG. 4 illustrates an alternative embodiment concerning the location of the small photodiode 115 relative to the large photodiode 116 on the pixel. The pixel array 200 may include a plurality of multi-diode pixels 120, where the position of the small photodiode 115 on each pixel 120 is related to the pixel's position in the pixel array 200, so that the chief ray of light incident on the pixel array 200 is guided to the small photodiode 115. FIG. 4 illustrates an example pixel array 200 of an image sensor, shown as a square. The three cross-hatched areas 405, 410, and 415 indicate pixel positions in the pixel array 200. The pixel array 200 may be any CMOS, CCD, or other image sensor.
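The position-dependent placement of the small photodiode described for FIG. 4 can be sketched as a shift toward the array center that grows with the pixel's distance from the center. The linear shift model, the function name, and the maximum shift value are illustrative assumptions, not taken from the text:

```python
# Hypothetical sketch of the position-dependent placement in FIG. 4:
# at the array center the small photodiode sits under the microlens
# apex (no shift); toward the edges and corners it is shifted toward
# the array center so the chief ray still lands on it. The linear
# model and max_shift value are illustrative assumptions.

def small_pd_offset(row: int, col: int, rows: int, cols: int,
                    max_shift: float = 0.2):
    """Return the (dy, dx) shift of the small photodiode, in pixel
    units, toward the center of a rows x cols pixel array."""
    cy, cx = (rows - 1) / 2, (cols - 1) / 2
    ny = (row - cy) / cy if cy else 0.0  # signed distance, -1..1
    nx = (col - cx) / cx if cx else 0.0
    return (-ny * max_shift, -nx * max_shift)  # shift opposes distance

center = small_pd_offset(50, 50, 101, 101)  # central pixel: no shift
edge = small_pd_offset(100, 50, 101, 101)   # edge pixel: shift up
corner = small_pd_offset(0, 0, 101, 101)    # corner pixel: shift down-right
```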
In some embodiments, the image sensor may be, for example, a 32 megapixel (MP)/30 frames per second (fps) image sensor with approximately 0.5 μm pixels, where each pixel 120 has multiple photodiodes and each photodiode is associated with a well capacity of approximately 1000 electrons (e-). These image sensor specifications represent only one embodiment of the image sensor, and other image sensors having different specifications may be used in other embodiments.

The pixel array 200 may include a plurality of pixels arranged in a predetermined number of rows and columns 235 (for example, M rows and N columns). Each of the pixels may include a plurality of photodiodes overlying a substrate for accumulating light-generated charges in the underlying portion of the substrate. In some implementations, the pixel array 200 may include one or more filters positioned to filter incident light, for example, infrared cut filters or color filters 110. The photodiode of a CMOS pixel may be one of a depleted p-n junction photodiode or a field-induced depletion region under a photogate.

The first pixel position 410 is substantially in the center of the pixel array 200. Each pixel 120 in the central area of the pixel array 200 may include a plurality of light-sensing photodiodes 115 and 116. In one embodiment, each pixel 120 includes two light-sensing photodiodes 115 and 116, wherein the small photodiode 115 is substantially enclosed by the large photodiode 116 and is located in the center of the pixel 120. At the first pixel position 410 there are two views of the pixel 120. The first view is from directly above and illustrates the position of the small photodiode 115 with respect to the large photodiode 116. The second view is a cross-sectional view of the pixel 120, illustrating the relationship between the small photodiode 115 and the apex of the MDML 105.
In the second view, the small photodiode 115 is directly below the apex of the MDML 105. In this configuration, the chief ray of the light from the scene 130 is directed onto the small photodiode 115. The term "substantially" as used herein indicates a tolerance within 10% of the indicated position.

Still referring to FIG. 4, the second pixel position 405 is closer to the outer boundary of the image sensor pixel array 200 and is substantially vertically aligned with the center of the pixel array 200. At the second pixel position 405 there are two views of the pixel 420. The first view is from directly above and illustrates the position of the small photodiode 115 with respect to the large photodiode 116. In this embodiment, the smaller photodiode 115 may still be substantially enclosed by the larger photodiode 116, and may also be positioned so that the chief ray of light is directed to the smaller photodiode 115. In this view, the smaller photodiode 115 is closer to the bottom of the pixel 420. The second view is a cross-sectional view of the pixel 420, illustrating the relationship between the small photodiode 115 and the apex of the MDML 105. In the second view, the small photodiode 115 is no longer directly below the apex of the MDML 105. Instead, the small photodiode 115 is positioned within the pixel 420 so as to be closer to the center of the array. In this configuration, the chief ray of the light from the scene 130 is directed onto the small photodiode 115.

Still referring to FIG. 4, the third pixel position 415 is closer to a corner boundary of the image sensor pixel array 200. At the third pixel position 415 there are two views of the pixel 425. The first view is from directly above and illustrates the position of the small photodiode 115 with respect to the large photodiode 116.
In this embodiment, the smaller photodiode 115 may still be substantially enclosed by the larger photodiode 116, and may also be positioned such that the chief ray of light is directed to the smaller photodiode 115. In this view, the smaller photodiode 115 is in the upper left corner of the pixel. The second view is a cross-sectional view of the pixel 425, illustrating the relationship between the small photodiode 115 and the apex of the spherical convex surface of the MDML 105. In the second view, the small photodiode 115 is no longer directly below the apex of the MDML 105. Instead, the small photodiode 115 is positioned within the pixel 425 so as to be closer to the center of the array. In this configuration, the chief ray of the light from the scene 130 is directed onto the small photodiode 115.

FIG. 5 illustrates an example embodiment of a pixel 500 that includes multiple light-sensing surfaces. In this example, the pixel includes three light-sensing surfaces. The first photosensitive surface 515 may be located in the center of the pixel and may be substantially enclosed by the second photosensitive surface 520. The second photosensitive surface 520 may be substantially enclosed by the third photosensitive surface 525. The term "substantially" as used herein indicates a tolerance within 10% of the indicated position. In another embodiment, each pixel may include any number of photodiodes of different sizes arranged to allow simulation of an aperture.

FIG. 6 illustrates an example pixel circuit including a set of three photodiodes 600. The first diode Dsmall 605 may correspond to the first photosensitive surface 515 of FIG. 5. The second diode Dmedium 610 may correspond to the second photosensitive surface 520 of FIG. 5. The third diode Dlarge 615 may correspond to the third photosensitive surface 525 of FIG. 5.
The three photodiodes 600 can share a set of common transistors 620 for row/column selection, reset, and the floating node, as shown in FIG. 6. Depending on the shared architecture, the operation timing can be adjusted accordingly.

In the operation mode illustrated in FIG. 3B, the three photodiodes 600 may be reset at the same time at time T0 by setting RST, XRFL, transfer field intermediate (XRFM), and XRFS to a high state. After that, the three photodiodes 600 start to accumulate photoelectrons. After the desired exposure time, FN is reset by setting RST high. Then, SEL can be turned on to read the reset level of FN. After that, XRFL, XRFM, and XRFS can be sequentially set high, so the accumulated charges from the three photodiodes 600 are transferred to FN in order, each transfer followed by a readout of the FN level for the corresponding photodiode. This operation uses three readouts, one for each photodiode, so the signal of each of the three photodiodes 600 can be mixed by an analog signal mixer. Therefore, this process can enable: (1) processing of multiple images, one image per photodiode; (2) an increased bokeh effect by combining the signals of two or more of the three photodiodes 600; and (3) noise reduction using Dlarge 615 by applying a per-pixel combination algorithm that combines large-aperture light collection with small-aperture sharpness.

In another embodiment, the three photodiodes 600 may be simultaneously reset at time T0 by setting RST, XRFL, XRFM, and XRFS to a high state. After that, the three photodiodes 600 start to accumulate photoelectrons. After the desired exposure time, FN is reset by setting RST high. Then, SEL can be turned on to read the reset level of FN. After that, XRFL, XRFM, and XRFS can be set high, and the accumulated charges from the three photodiodes 600 can be transferred to FN, followed by another readout of the FN level.
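The two readout modes described above can be contrasted in a toy model: sequential transfer with one readout per photodiode keeps the three signals separate, while transferring all charge onto FN before a single readout sums them. Electron counts are illustrative assumptions:

```python
# Toy contrast of the two three-photodiode readout modes described
# above. Sequential mode transfers and reads each photodiode's charge
# separately (three readouts, signals kept distinct); combined mode
# transfers all charge onto FN before one readout (signals summed).
# Electron counts are illustrative assumptions.

def sequential_readout(charges):
    """One readout per photodiode; per-diode signals preserved."""
    return list(charges)

def combined_readout(charges):
    """All charge summed on FN, then a single readout."""
    return sum(charges)

charges = [120, 300, 700]  # electrons from Dsmall 605, Dmedium 610, Dlarge 615
separate = sequential_readout(charges)  # three values, mixable downstream
summed = combined_readout(charges)      # one value, single readout noise hit
```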
This operation uses only one readout, which minimizes the noise contribution from the readout process while summing the charge from the three photodiodes into an enhanced signal level. Therefore, this process can yield a high signal-to-noise ratio.

FIG. 7A illustrates an example embodiment of a pixel 700 having three photosensitive surfaces 705, 710, 715. In related aspects, the central photosensitive surface 715 may be circular or rounded in shape, and is at least partially surrounded by two additional photosensitive surfaces 705, 710. It should be noted that although a square shape of the central photosensitive surface 715 may be used (e.g., in FIG. 7A), other shapes may be employed. In other aspects, the left photosensitive surface 705 may be positioned to the left of the central photosensitive surface 715, and may substantially surround half of the central surface from the left side of the pixel 700. The right photosensitive surface 710 may be positioned to the right of the central photosensitive surface 715, and may substantially surround half of the central surface from the right side of the pixel 700. The light-sensing surfaces of the pixel 700 as illustrated in FIG. 7A may be rounded rectangles and are not necessarily depicted to scale. FIG. 7B illustrates an example pixel array 750, where each pixel has the configuration shown in FIG. 7A. The term "substantially" as used herein indicates a tolerance within 10% of the indicated position.

Still referring to FIGS. 7A to 7B, the left photosensitive surface 705 and the right photosensitive surface 710 of the three-diode pixel may include phase detection diodes. In this example arrangement, light travels from the scene 130 through the lens 125, which focuses the target scene 130 onto the pixel containing the phase detection diodes.
The left photosensitive surface 705 receives light from the left direction L(i) of the lens 125, and the right photosensitive surface 710 receives light from the right direction R(i) of the lens 125. In some embodiments, the light from the left direction L(i) may be light from the left half L(x) of the scene 130, and the light from the right direction R(i) may be light from the right half R(x) of the scene 130. Accordingly, a plurality of phase detection diodes interleaved with the imaging diodes across the image sensor can be used to extract left and right images shifted from the center image captured by the imaging diodes. Instead of right and left, other embodiments may use top and bottom images, diagonal images, or a combination of left/right, top/bottom, and diagonal images to calculate the autofocus adjustment. The phase detection diodes can further be used to calculate the autofocus lens position and to generate a depth map showing the distance of each pixel relative to the focal point of the main lens system.

When the image is in focus, the left ray L(i) and the right ray R(i) converge at the plane (or surface) of the phase detection diodes. As described above, the signals from the phase detection diodes can be used to generate a left image and a right image that are offset from the center image in a front or rear defocus position, and the offset can be used to determine the autofocus adjustment for the camera lens 125. Depending on whether the focal point is in front of the subject (closer to the image sensor) or behind the subject (farther from the image sensor), the lens 125 can be moved forward (toward the image sensor) or backward (away from the image sensor). Because the autofocus process can calculate both the direction and the amount of movement of the lens 125, phase difference autofocus can focus very quickly.

To perform phase detection, the imaging system can save two images containing only the values received from the phase detection diodes.
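The offset between the saved left and right phase-detection images can be estimated by sliding one image against the other and scoring the match at each shift. The following one-dimensional sketch uses a sum-of-squared-differences score as a simple stand-in for a cross-correlation-based match; the signal values and search range are illustrative assumptions:

```python
# One-dimensional sketch of phase-difference detection: slide the
# right image against the left image and pick the shift with the best
# match. A sum-of-squared-differences score is used here as a simple
# stand-in for a cross-correlation-based match; signal values and the
# search range are illustrative assumptions.

def find_disparity(left, right, max_shift=3):
    """Return the shift of `right` that best aligns it with `left`."""
    best_shift, best_score = 0, float("inf")
    n = len(left)
    for shift in range(-max_shift, max_shift + 1):
        score, count = 0.0, 0
        for i in range(n):
            j = i + shift
            if 0 <= j < n:  # compare only the overlapping samples
                score += (left[i] - right[j]) ** 2
                count += 1
        score /= count
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift

# A right image displaced by two samples relative to the left image:
left = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
right = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0]
disparity = find_disparity(left, right)  # sign gives the defocus direction
```

The sign of the recovered shift corresponds to whether the focal point lies in front of or behind the subject, and its magnitude to how far the lens should move.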
The left photosensitive surface 705 may receive light entering the MDML 105 from the left direction, and the right photosensitive surface 710 may receive light entering the same MDML 105 from the right direction. The MDML 105 may substantially cover each pixel 700. Any number of MDMLs 105, from one to all of the MDMLs 105 of the sensor, can be placed on the image sensor, with the consideration that more MDMLs 105 provide more reliable phase detection autofocus data but require a larger amount of computation for pixel value calculation and also increase the possibility of artifacts in the final image.

The focus can be calculated by applying a cross-correlation function to the data representing the left image and the right image. If the distance between the two images is narrower than the corresponding distance in the in-focus condition, the autofocus system determines that the focal point is in front of the subject. If the distance is wider than the reference value, the system determines that the focal point is behind the subject. The autofocus system can calculate how far and in which direction the lens 125 (or the sensor, in an embodiment with a movable sensor) should be moved, and provide this information to the lens 125 actuator to move the lens 125 accordingly, achieving rapid focus. The process described above can be performed by an image signal processor.

FIGS. 8A to 8D illustrate example CFA configurations for the pixels illustrated in FIGS. 7A to 7B. However, although FIGS. 8A to 8D may correspond to the pixels illustrated in FIGS. 7A to 7B, it should be noted that those skilled in the art will be able to apply the same or similar variations of the color filter pattern to any other pixel and photodiode configuration described herein. As explained, a plurality of green filters 805g, 810g, red filters 805r, 810r, and blue filters 805b, 810b may be arranged in a Bayer pattern under the plurality of MDMLs 105.
FIG. 8A illustrates an example pixel array 750 containing four pixels in a square pattern, where each pixel contains three photosensitive surfaces outlined in dashed lines. Each of the pixels is surrounded by a solid line representing an individual color filter. In one embodiment, the CFA may be arranged similarly to the Bayer color filter pattern. For example, the pixel in the upper left corner of the pixel array 750 may include a green filter 805g that substantially covers the three photosensitive surfaces of the pixel, so the three photosensitive surfaces are exposed to light filtered by the green filter 805g. The pixel in the upper right corner of the pixel array 750 may include a blue filter 805b that substantially covers the three photosensitive surfaces of the pixel, so the three photosensitive surfaces are exposed to light filtered by the blue filter 805b. The pixel in the lower left corner of the pixel array 750 may include a red filter 805r that substantially covers the three photosensitive surfaces of the pixel, so the three photosensitive surfaces are exposed to light filtered by the red filter 805r. The pixel in the lower right corner of the pixel array 750 may include a green filter 805g that substantially covers the three photosensitive surfaces of the pixel, so the three photosensitive surfaces are exposed to light filtered by the green filter 805g.

FIG. 8B illustrates another embodiment of a CFA in which some of the pixels are partially filtered. For example, the pixels in the upper left and lower right corners of the pixel array 750 may include a green filter 810g that substantially covers the central photosensitive surface 715 of the pixel but does not cover the left photosensitive surface 705 or the right photosensitive surface 710.
The pixel in the upper right corner of the pixel array 750 may include a blue filter 805b that substantially covers the three photosensitive surfaces of the pixel, so the three photosensitive surfaces are exposed to light filtered by the blue filter 805b. The pixel in the lower left corner of the pixel array 750 may include a red filter 805r that substantially covers the three photosensitive surfaces of the pixel, so the three photosensitive surfaces are exposed to light filtered by the red filter 805r.

FIG. 8C illustrates another embodiment of the CFA in which the central photosensitive surface 715 of each pixel is covered by a color filter and the left photosensitive surface 705 and the right photosensitive surface 710 have no filters. For example, the pixels in the upper left and lower right corners of the pixel array 750 may include a green filter 810g that substantially covers the central photosensitive surface 715 of the pixel but does not cover the left photosensitive surface 705 or the right photosensitive surface 710. The pixel in the upper right corner of the pixel array 750 may include a blue filter 810b that substantially covers only the central photosensitive surface 715, so the central surface 715 is exposed to light filtered by the blue filter 810b. The pixel in the lower left corner of the pixel array 750 may include a red filter 810r that substantially covers the central photosensitive surface 715 of the pixel 700, so the central photosensitive surface 715 is exposed to light filtered by the red filter 810r.

FIG. 8D illustrates another embodiment of the CFA in which some of the pixels are completely filtered. For example, the pixel in the upper right corner of the pixel array 750 may include a blue filter 805b that substantially covers the three photosensitive surfaces of the pixel, so the three photosensitive surfaces are exposed to light filtered by the blue filter 805b.
The pixel in the lower left corner of the pixel array 750 may include a red filter 805r that substantially covers the three photosensitive surfaces of the pixel, so the three photosensitive surfaces are exposed to light filtered by the red filter 805r. The pixels in the upper left and lower right corners of the array do not contain filters overlying the photosensitive surfaces.

Method and architecture for aperture simulation

FIG. 9A is a flowchart 900 illustrating an example of a method (or process) for aperture simulation using an image sensor containing a plurality of pixels, where each pixel includes two light-sensing surfaces. In block 905, each pixel of the image sensor may detect light incident on the first photosensitive area of the small photodiode 115. The image sensor may include a plurality of pixels, where each pixel includes a small photodiode 115 containing a first photosensitive area. The small photodiode 115 may include a first charge storage element (CSE) for storing energy generated by light incident on the first photosensitive area. In block 910, each pixel of the image sensor may detect light incident on the second photosensitive area of the large photodiode 116. The large photodiode 116 may include a second CSE for storing energy generated by light incident on the second photosensitive area. In one embodiment, the first photosensitive area is at least partially surrounded by the second photosensitive area. In another embodiment, the first photosensitive area may be smaller than the second photosensitive area.

Still referring to FIG. 9A, in block 910, each pixel of the image sensor may detect light incident on the second photosensitive area of the large photodiode 116, wherein the first photosensitive area is at least partially surrounded by the second photosensitive area.
In block 915, the analog signal mixer 205 can combine signals from the small photodiode 115 and the large photodiode 116 in response to the light incident on the first and second photosensitive areas for each pixel, wherein the combined signal indicates the first light energy incident on the first photosensitive area and the second light energy incident on the second photosensitive area. The simulated aperture can be controlled by mixing the signals from the small photodiode 115 and the large photodiode 116. In one embodiment, the signals can be mixed using formula (1):

(Es + Eb)·(a0) + Es·(1 − a0)   (1)

where:

Es: the first light energy incident on the first photosensitive area,
Eb: the second light energy incident on the second photosensitive area,
a0: a configurable register value between zero and one.

As will be apparent to those skilled in the art, the image may be based on the sum of the small photodiode 115 and the large photodiode 116, or may be based only on the small photodiode 115 (or center diode). In another embodiment, the image may be based only on the second light energy (Eb).

Still referring to FIG. 9A, in block 920, the image signal processor may generate a first image based at least in part on the first light energy incident on the first photosensitive area for a first aperture simulation setting. In block 925, the image signal processor may generate a second image based at least in part on the second light energy incident on the second photosensitive area for a second aperture simulation setting.

FIG. 9B is a flowchart 950 illustrating an example of a method (or process) for aperture simulation using an image sensor including pixels, where each pixel includes three light-sensing surfaces. The process illustrated in FIG. 9B can be used in combination with the process illustrated in FIG. 9A. In block 955, each pixel of the image sensor may detect light incident on the third photosensitive area.
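The two-diode mixing of formula (1) can be sketched as a short Python function. This is a hypothetical illustration of the arithmetic only; the function name and example values are assumptions, not part of the disclosure:

```python
def mix_two_diode(e_small: float, e_big: float, a0: float) -> float:
    """Mix small- and large-photodiode energies per formula (1):
    (Es + Eb)*a0 + Es*(1 - a0).

    a0 = 0 uses only the small (center) diode energy Es, simulating a
    small aperture; a0 = 1 uses the full sum Es + Eb, simulating a
    large aperture.
    """
    if not 0.0 <= a0 <= 1.0:
        raise ValueError("a0 must be between zero and one")
    return (e_small + e_big) * a0 + e_small * (1.0 - a0)
```

For example, with Es = 2.0 and Eb = 3.0, a0 = 0 yields 2.0 (small diode only) and a0 = 1 yields 5.0 (the full sum).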
The third photosensitive area may be external to the first and second photosensitive areas and may reside on the same pixel.

Still referring to FIG. 9B, in block 960, the signal mixer of each pixel of the image sensor may combine signals from the first, second, and third photodiodes in response to light incident on the first, second, and third photosensitive areas of each pixel. The simulated aperture can be controlled by mixing the signals from the three photodiodes. In one embodiment, the signals can be mixed using formula (2):

(Es)·(a0) + (Es + Em)·(a1) + (Es + Em + Eb)·(a2)   (2)

where:

Es: the first light energy incident on the first photosensitive area,
Em: the third light energy incident on the third photosensitive area,
Eb: the second light energy incident on the second photosensitive area,
a0: the first configurable register value between zero and one,
a1: the second configurable register value between zero and one,
a2: the third configurable register value between zero and one.

The configurable register values a0, a1, and a2 can each be set to a unique number.

For example, in formula (2), when a0 is set to a value of one and the other register values are set to a value of zero, there is no signal mixing, and the image may be based only on the energy collected from the central photosensitive surface 715. This can result in an image with both the foreground and background in focus. In another embodiment, the processor may automatically set the register values based on a determination of scene distance using phase detection autofocus. For example, when a close object is detected, the register values can be set to zero or values close to zero in order to produce a large DOF. In this example, each pixel 700 of the pixel array 750 may be individually controlled by the processor. For example, the signal mixer 205 may be configured to collect energy generated by light incident on only one of the three photosensitive surfaces 705, 710, 715 of the pixel 700.
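The three-diode mixing of formula (2) can likewise be sketched in Python. As before, this is a hypothetical illustration; names and values are assumptions:

```python
def mix_three_diode(e_s: float, e_m: float, e_b: float,
                    a0: float, a1: float, a2: float) -> float:
    """Mix the three photodiode energies per formula (2):
    Es*a0 + (Es + Em)*a1 + (Es + Em + Eb)*a2.
    """
    for a in (a0, a1, a2):
        if not 0.0 <= a <= 1.0:
            raise ValueError("register values must be between zero and one")
    return e_s * a0 + (e_s + e_m) * a1 + (e_s + e_m + e_b) * a2
```

Setting a0 = 1 with the other registers at zero reproduces the no-mixing case based only on the central surface, while a2 = 1 with the others at zero uses the full sum of all three energies.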
This allows separate left, right, and center images to be processed. Using phase detection, pixels that capture an object determined to be close can use a combination of the energy collected on the three photosensitive surfaces 705, 710, 715, while pixels that capture the image surrounding the object determined to be close can use only the light collected by the central photosensitive surface 715, or a combination of light from all three photosensitive surfaces 705, 710, 715 in which a higher proportion of the energy used comes from the central photosensitive surface 715 (e.g., a0=0.9, a1=0.05, and a2=0.05).

The configurable register values can also be set to values that depend on the decision of an automatic exposure algorithm. For example, an ISP or SoC can determine that a bright scene may require a relatively short exposure time. In this case, the configurable register values can be set so that, when capturing an image of the scene, a larger portion of the analog signal from each diode is combined. In another embodiment, the configurable register values may be adjusted according to manual user settings. In this configuration, the user can manually select an aperture value (for example, F22) that is associated with a set of register values. In another embodiment, the configurable register values can be set to change the DOF according to distance using a hyperfocal lens design. In this configuration, when an object of the scene is not in focus, or when some pixels in the scene are not in focus, or both, a small aperture can be simulated (for example, a0=0.9, a1=0.05, and a2=0.05). This configuration eliminates the need for an autofocus motor and any associated lens structure.

In another embodiment, a first image may be captured by the left light sensing surface 705 and a second image may be captured by the right light sensing surface 710.
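The comparison of the left and right images can be illustrated by a minimal sum-of-absolute-differences disparity search over candidate shifts. This is one common phase-detection approach and is offered only as a sketch, not as the algorithm of this disclosure; all names and parameters are illustrative:

```python
def estimate_disparity(left, right, max_shift=4):
    """Return the shift of `right` relative to `left` (two rows of pixel
    values) that minimizes the mean absolute difference. In-focus
    regions align at shift 0; out-of-focus regions exhibit a nonzero
    shift between the left and right images."""
    best_shift, best_cost = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        # Compare only positions where both rows have a sample.
        pairs = [(left[i], right[i + s]) for i in range(n) if 0 <= i + s < n]
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```

The sign of the recovered shift plays the role of the disparity value described below: one sign for objects in front of the focal plane, the other for objects behind it.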
The phase detection algorithm can compare the first and second images and determine the amount of blur in the compared images. For example, when comparing the two images, objects at the focal plane will appear sharp in both images, but objects in front of or behind the focal plane will be out of phase and will exhibit a degree of blur. The processor can determine a disparity value based on the blur. For example, an object in front of the focal plane may indicate a negative disparity value, and an object behind the focal plane may indicate a positive disparity value. A noise reduction algorithm can perform a pixel-by-pixel analysis comparing the value of a center pixel to the values surrounding it, and may blur the center pixel based on the noise of the surrounding pixel values. In areas of the image containing high disparity values, the blur can be increased to form a more pronounced bokeh effect. This can be done by incorporating more energy from the larger photodiodes of the pixels in these areas. In areas of the image containing low disparity values, it may be advantageous to obtain a sharper image by weighting, for each pixel in the area, the light obtained by the central photodiode 715 more heavily than that obtained by the surrounding diodes (705, 710).

Still referring to FIG. 9B, in block 965, the processor may generate a third image based at least in part on the third light energy incident on the third photosensitive area. The third image may contain energy generated by light incident on all three photosensitive surfaces 705, 710, 715 of the pixel 700. The register values of formulas (1) and (2) can also be set manually by the user of the device.

Implementation system and terminology

One or more of the components, steps, features, and/or functions illustrated in the drawings may be rearranged and/or combined into a single component, step, feature, or function, or implemented in several components, steps, or functions.
Additional elements, components, steps, and/or functions may be added without departing from the novel features disclosed herein. The apparatus, devices, and/or components illustrated in the drawings may be configured to perform one or more of the methods, features, or steps described herein. The novel algorithms described herein can also be efficiently implemented in software and/or embedded in hardware.

It should also be noted that the embodiments may be described as a process depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on. When a process corresponds to a function, its termination corresponds to the function returning to the calling function or the main function.

The term "determining" encompasses a wide variety of actions, and therefore "determining" can include calculating, computing, processing, deriving, investigating, looking up (for example, looking up in a table, a database, or another data structure), ascertaining, and the like. Also, "determining" may include receiving (for example, receiving information), accessing (for example, accessing data in a memory), and the like. Also, "determining" may include resolving, selecting, choosing, establishing, and the like.

Unless expressly stated otherwise, the phrase "based on" does not mean "based only on."
In other words, the phrase "based on" describes both "based only on" and "based at least on."

The term "photodiode" or "diode" may include a plurality of photosensitive elements, such as photogates, photoconductors, or other photodetectors, overlying a substrate for accumulating photo-generated charge in an underlying portion of the substrate.

In addition, a storage medium may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and/or other machine-readable media, processor-readable media, and/or computer-readable media for storing information. The terms "machine-readable medium", "computer-readable medium", and/or "processor-readable medium" may include, but are not limited to, non-transitory media such as portable or fixed storage devices, optical storage devices, and various other media capable of storing, containing, or carrying instructions and/or data. Thus, the various methods described herein may be fully or partially implemented by instructions and/or data that may be stored in a "machine-readable medium", "computer-readable medium", and/or "processor-readable medium" and executed by one or more processors, machines, and/or devices.

Furthermore, the embodiments may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage device. A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements.
A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, and the like.

The various illustrative logical blocks, modules, circuits, elements, and/or components described in connection with the examples disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executable by a processor, or in a combination of both, in the form of a processing unit, programming instructions, or other directions, and may be contained in a single device or distributed across multiple devices. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

Those skilled in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.

The various features of the invention described herein can be implemented in different systems without departing from the invention. It should be noted that the foregoing embodiments are merely examples and are not to be construed as limiting the invention. The description of the embodiments is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses, and many alternatives, modifications, and variations will be apparent to those skilled in the art.
A computing device may allocate a plurality of blocks in the memory, wherein each of the plurality of blocks is of a uniform fixed size in the memory. The computing device may further store a plurality of bandwidth-compressed graphics data into the respective plurality of blocks in the memory, wherein one or more of the plurality of bandwidth-compressed graphics data each has a size that is smaller than the fixed size. The computing device may further store data associated with the plurality of bandwidth-compressed graphics data into unused space of one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.
CLAIMS:

1. A method comprising:

storing, by at least one processor, a plurality of bandwidth-compressed graphics data into a respective plurality of blocks in memory, wherein each of the plurality of blocks is of a uniform fixed size in the memory, and wherein one or more of the plurality of bandwidth-compressed graphics data has a size that is smaller than the fixed size; and

storing, by the at least one processor, data associated with the plurality of bandwidth-compressed graphics data into unused space of one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.

2. The method of claim 1, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises depth data for the one or more of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

3. The method of claim 2, further comprising:

associating, by the at least one processor, a default depth value for each of a second one or more of the plurality of bandwidth-compressed graphics data, wherein the second one or more of the plurality of bandwidth-compressed graphics data fully occupies a second one or more of the plurality of blocks.

4. The method of claim 1, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises depth data for each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

5. The method of claim 1, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises one or more hash codes that identify each of the one or more of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

6.
The method of claim 1, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises hash codes that identify each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

7. The method of claim 1, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises hash codes that identify each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks and depth data for each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

8. The method of claim 1, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises optimization surfaces associated with the plurality of bandwidth-compressed graphics data.

9. The method of claim 1, wherein the plurality of bandwidth-compressed graphics data comprises bandwidth-compressed portions of an image surface.

10. The method of claim 1, wherein storing, by the at least one processor, the data associated with the plurality of bandwidth-compressed graphics data into the unused space of the one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data further comprises:

determining, by the at least one processor, that the one or more of the plurality of blocks include the unused space; and

in response to determining that the one or more of the plurality of blocks include the unused space, storing, by the at least one processor, the data associated with the plurality of bandwidth-compressed graphics data into the unused space of the one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.

11. The method of claim 1, wherein the at least one processor includes a graphics processing unit.

12.
An apparatus configured to process graphics data comprising:

a memory; and

at least one processor configured to:

store a plurality of bandwidth-compressed graphics data into a respective plurality of blocks in the memory, wherein each of the plurality of blocks is of a uniform fixed size in the memory, and wherein one or more of the plurality of bandwidth-compressed graphics data has a size that is smaller than the fixed size; and

store data associated with the plurality of bandwidth-compressed graphics data into unused space of one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.

13. The apparatus of claim 12, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises depth data for the one or more of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

14. The apparatus of claim 13, wherein the at least one processor is further configured to:

associate a default depth value for each of a second one or more of the plurality of bandwidth-compressed graphics data, wherein a second one or more of the plurality of bandwidth-compressed graphics data fully occupies a second one or more of the plurality of blocks.

15. The apparatus of claim 12, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises depth data for each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

16. The apparatus of claim 12, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises one or more hash codes that identify each of the one or more of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

17.
The apparatus of claim 12, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises hash codes that identify each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

18. The apparatus of claim 12, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises hash codes that identify each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks and depth data for each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

19. The apparatus of claim 12, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises optimization surfaces associated with the plurality of bandwidth-compressed graphics data.

20. The apparatus of claim 12, wherein the plurality of bandwidth-compressed graphics data comprises bandwidth-compressed portions of an image surface.

21. The apparatus of claim 12, wherein the at least one processor is further configured to:

determine that the one or more of the plurality of blocks include the unused space; and

in response to determining that the one or more of the plurality of blocks include the unused space, store the data associated with the plurality of bandwidth-compressed graphics data into the unused space of the one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.

22. The apparatus of claim 12, wherein the at least one processor includes a graphics processing unit.

23.
An apparatus comprising:

means for storing a plurality of bandwidth-compressed graphics data into a respective plurality of blocks in memory, wherein each of the plurality of blocks is of a uniform fixed size in the memory, and wherein one or more of the plurality of bandwidth-compressed graphics data has a size that is smaller than the fixed size; and

means for storing data associated with the plurality of bandwidth-compressed graphics data into unused space of one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.

24. The apparatus of claim 23, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises depth data for the one or more of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

25. The apparatus of claim 24, further comprising:

means for associating a default depth value for each of a second one or more of the plurality of bandwidth-compressed graphics data, wherein a second one or more of the plurality of bandwidth-compressed graphics data fully occupies a second one or more of the plurality of blocks.

26. The apparatus of claim 23, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises depth data for each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

27. The apparatus of claim 23, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises one or more hash codes that identify each of the one or more of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

28.
The apparatus of claim 23, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises hash codes that identify each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

29. The apparatus of claim 23, wherein the data associated with the plurality of bandwidth-compressed graphics data comprises hash codes that identify each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks and depth data for each of the plurality of bandwidth-compressed graphics data stored in the one or more of the plurality of blocks.

30. The apparatus of claim 23, wherein the means for storing further comprises:

means for determining that the one or more of the plurality of blocks include the unused space; and

means for, in response to determining that the one or more of the plurality of blocks include the unused space, storing the data associated with the plurality of bandwidth-compressed graphics data into the unused space of the one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.
STORING BANDWIDTH-COMPRESSED GRAPHICS DATA

TECHNICAL FIELD

[0001] This disclosure relates to data storage, and more specifically to storing bandwidth-compressed graphics data in memory.

BACKGROUND

[0002] A device that provides content for visual presentation on an electronic display generally includes a graphics processing unit (GPU). The GPU renders pixels that are representative of the content on a display. The GPU generates one or more pixel values for each pixel on the display and performs graphics processing on the pixel values for each pixel to render each pixel for presentation, including fragment shading of the fragments generated by the rasterization stage.

SUMMARY

[0003] The techniques of this disclosure generally relate to techniques for storing a plurality of bandwidth-compressed graphics data in memory along with additional data that is associated with the plurality of bandwidth-compressed graphics data. The plurality of bandwidth-compressed graphics data may vary in size, and the plurality of bandwidth-compressed graphics data are stored in uniformly-sized blocks in memory that may accommodate the largest bandwidth-compressed graphics data out of the plurality of bandwidth-compressed graphics data. Therefore, storing the plurality of bandwidth-compressed graphics data into the uniformly-sized blocks in memory may result in remaining unused space in some of the blocks in memory that store the plurality of bandwidth-compressed graphics data.
Such unused space in some of the blocks in memory may be utilized to store additional data that is associated with the plurality of bandwidth-compressed graphics data, such as depth data associated with the plurality of bandwidth-compressed graphics data or hash codes that identify each of the plurality of bandwidth-compressed graphics data.

[0004] In one example of the disclosure, a method for graphics processing may include storing, by at least one processor, a plurality of bandwidth-compressed graphics data into a respective plurality of blocks in memory, wherein each of the plurality of blocks is of a uniform fixed size in the memory, and wherein one or more of the plurality of bandwidth-compressed graphics data has a size that is smaller than the fixed size. The method may further include storing, by the at least one processor, data associated with the plurality of bandwidth-compressed graphics data into unused space of one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.

[0005] In another example of the disclosure, an apparatus configured to process graphics data may include memory.
The apparatus may further include at least one processor configured to: store a plurality of bandwidth-compressed graphics data into a respective plurality of blocks in the memory, wherein each of the plurality of blocks is of a uniform fixed size in the memory, and wherein one or more of the plurality of bandwidth-compressed graphics data has a size that is smaller than the fixed size; and store data associated with the plurality of bandwidth-compressed graphics data into unused space of one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.

[0006] In another example of the disclosure, an apparatus may include means for storing a plurality of bandwidth-compressed graphics data into a respective plurality of blocks in memory, wherein each of the plurality of blocks is of a uniform fixed size in the memory, and wherein one or more of the plurality of bandwidth-compressed graphics data has a size that is smaller than the fixed size. The apparatus may further include means for storing data associated with the plurality of bandwidth-compressed graphics data into unused space of one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.

[0007] The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF DRAWINGS

[0008] FIG. 1 is a block diagram illustrating an example computing device that may be configured to implement one or more aspects of this disclosure for storing bandwidth-compressed graphical data in memory.

[0009] FIG. 2 is a block diagram illustrating example implementations of the CPU, the GPU, and the system memory of FIG. 1 in further detail.

[0010] FIGS.
3A-3F are conceptual diagrams illustrating example techniques for storing bandwidth-compressed graphical data in memory.

[0011] FIG. 4 is a flowchart illustrating an example process for storing bandwidth-compressed graphical data in memory.

DETAILED DESCRIPTION

[0012] Bandwidth-compressed graphics data is graphics data that is compressed so that it may be transferred more quickly through busses of a computing device. As a graphics processing unit (GPU) of a computing device performs graphics processing operations on graphics data, such as a surface, the computing device may transfer the surface through a bus between the GPU and memory or between different memories. For example, the computing device may perform a compositing operation that combines two different surfaces by transferring those two surfaces from memory to the GPU to perform the compositing operation, and transferring the resulting composited surface from the GPU back to memory. Thus, by reducing the size of the surface via compression, the computing device may transfer the surface more quickly between components of the computing device, thereby improving performance of the computing device.

[0013] The computing device may perform bandwidth compression of a surface by dividing the surface into sub-regions and compressing each of the sub-regions of the surface to generate a plurality of bandwidth-compressed graphics data. The plurality of bandwidth-compressed graphics data may vary in size due to differences in content between sub-regions of the surface. For example, a computing device may be able to compress a sub-region of the surface that uniformly contains pixels of a single color into a relatively smaller size than another sub-region of the surface that contains pixels of many different colors.

[0014] The computing device may store the plurality of bandwidth-compressed graphics data into a plurality of uniformly-sized blocks that the computing device allocates in memory.
Each of the blocks is large enough to contain the largest one of the plurality of bandwidth-compressed graphics data. Because each of the plurality of blocks is the same size while the plurality of bandwidth-compressed graphics data may vary in size, storing the plurality of bandwidth-compressed graphics data into the plurality of blocks may result in one or more of the blocks that each has unused space that is not occupied by the respective bandwidth-compressed graphics data stored in the block.[0015] In accordance with aspects of the present disclosure, the computing device may store other data associated with the plurality of bandwidth-compressed graphics data into the unused space of the one or more of the blocks. For example, instead of storing depth data associated with the plurality of bandwidth-compressed graphics data into a separate area (e.g., block) in memory, the computing device may instead store such depth data in the unused space of the one or more of the blocks. Similarly, the computing device may store hash codes that identify each of the plurality of bandwidth-compressed graphics data in the unused space of the one or more of the blocks. In this way, the computing device may utilize the unused space in the plurality of blocks to store additional data associated with the plurality of bandwidth-compressed graphics data, thereby increasing memory utilization efficiency of the computing device.[0016] The other data that the computing device may store into the unused space of the one or more blocks may be optimization surfaces, in that the computing device may use such data to optimize the performance of graphics operations on the graphics data. For example, the computing device may utilize the depth data to increase its performance in rendering the associated graphics data, while the computing device may utilize the hash codes to increase its performance of certain graphical operations on the graphics data.
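The block layout described above can be sketched in software. This is a minimal illustration only, not the actual hardware format: the 256-byte block size, the use of zlib as the compressor, and the helper name `store_tiles` are all assumptions for demonstration.

```python
import zlib

BLOCK_SIZE = 256  # hypothetical: equals the size of one uncompressed tile


def store_tiles(tiles):
    """Compress each tile and place it in its own fixed-size block.

    Returns (block, used) pairs; BLOCK_SIZE - used is the unused space
    available for associated data such as depth values or hash codes.
    """
    blocks = []
    for tile in tiles:
        payload = zlib.compress(tile)
        if len(payload) >= BLOCK_SIZE:  # incompressible tile: store it raw
            payload = tile
        block = bytearray(BLOCK_SIZE)
        block[: len(payload)] = payload
        blocks.append((block, len(payload)))
    return blocks


# A tile of uniform color compresses well, leaving plenty of slack.
uniform_tile = bytes([0xFF]) * BLOCK_SIZE
(block, used), = store_tiles([uniform_tile])
print(BLOCK_SIZE - used, "bytes of unused space in the block")
```

Note that every block still occupies `BLOCK_SIZE` bytes regardless of the compressed payload size, which is exactly why slack appears.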
As such, the computing device may store any number of additional data other than depth data or hash codes into the unused space of the one or more blocks, including storing additional optimization surfaces that may be used to optimize the rendering of the graphics data.[0017] FIG. 1 is a block diagram illustrating an example computing device that may be configured to implement one or more aspects of this disclosure for storing bandwidth-compressed graphical data in memory. As shown in FIG. 1, device 2 may be a computing device including but not limited to video devices, media players, set-top boxes, wireless handsets such as mobile telephones and so-called smartphones, personal digital assistants (PDAs), desktop computers, laptop computers, gaming consoles, video conferencing units, tablet computing devices, and the like. In the example of FIG. 1, device 2 may include central processing unit (CPU) 6, system memory 10, and GPU 12. Device 2 may also include display processor 14, transceiver module 3, user interface 4, and display 8. Transceiver module 3 and display processor 14 may both be part of the same integrated circuit (IC) as CPU 6 and/or GPU 12, may both be external to the IC or ICs that include CPU 6 and/or GPU 12, or may be formed in the IC that is external to the IC that includes CPU 6 and/or GPU 12.[0018] Device 2 may include additional modules or units not shown in FIG. 1 for purposes of clarity. For example, device 2 may include a speaker and a microphone, neither of which are shown in FIG. 1, to effectuate telephonic communications in examples where device 2 is a mobile wireless telephone, or a speaker where device 2 is a media player. Device 2 may also include a video camera. Furthermore, the various modules and units shown in device 2 may not be necessary in every example of device 2.
For example, user interface 4 and display 8 may be external to device 2 in examples where device 2 is a desktop computer or other device that is equipped to interface with an external user interface or display.[0019] Examples of user interface 4 include, but are not limited to, a trackball, a mouse, a keyboard, and other types of input devices. User interface 4 may also be a touch screen and may be incorporated as a part of a display 8. Transceiver module 3 may include circuitry to allow wireless or wired communication between computing device 2 and another device or a network. Transceiver module 3 may include modulators, demodulators, amplifiers and other such circuitry for wired or wireless communication.[0020] CPU 6 may be a microprocessor, such as a central processing unit (CPU) configured to process instructions of a computer program for execution. CPU 6 may comprise a general-purpose or a special-purpose processor that controls operation of computing device 2. A user may provide input to computing device 2 to cause CPU 6 to execute one or more software applications. The software applications that execute on CPU 6 may include, for example, an operating system, a word processor application, an email application, a spread sheet application, a media player application, a video game application, a graphical user interface application or another program. Additionally, CPU 6 may execute GPU driver 22 for controlling the operation of GPU 12. The user may provide input to computing device 2 via one or more input devices (not shown) such as a keyboard, a mouse, a microphone, a touch pad or another input device that is coupled to computing device 2 via user interface 4.[0021] The software applications that execute on CPU 6 may include one or more graphics rendering instructions that instruct CPU 6 to cause the rendering of graphics data to display 8. 
In some examples, the software instructions may conform to a graphics application programming interface (API), such as, e.g., an Open Graphics Library (OpenGL®) API, an Open Graphics Library Embedded Systems (OpenGL ES) API, a Direct3D API, an X3D API, a RenderMan API, a WebGL API, or any other public or proprietary standard graphics API.[0022] In order to process the graphics rendering instructions of the software applications, CPU 6 may issue one or more graphics rendering commands to GPU 12 (e.g., through GPU driver 22) to cause GPU 12 to perform some or all of the rendering of the graphics data. In some examples, the graphics data to be rendered may include a list of graphics primitives, e.g., points, lines, triangles, quadrilaterals, triangle strips, etc.[0023] GPU 12 may be configured to perform graphics operations to render one or more graphics primitives to display 8. Thus, when one of the software applications executing on CPU 6 requires graphics processing, CPU 6 may provide graphics commands and graphics data to GPU 12 for rendering to display 8. The graphics data may include, e.g., drawing commands, state information, primitive information, texture information, etc. GPU 12 may, in some instances, be built with a highly-parallel structure that provides more efficient processing of complex graphic-related operations than CPU 6. For example, GPU 12 may include a plurality of processing elements, such as shader units, that are configured to operate on multiple vertices or pixels in a parallel manner. The highly parallel nature of GPU 12 may, in some instances, allow GPU 12 to draw graphics images (e.g., GUIs and two-dimensional (2D) and/or three-dimensional (3D) graphics scenes) onto display 8 more quickly than drawing the scenes directly to display 8 using CPU 6.[0024] GPU 12 may, in some instances, be integrated into a motherboard of computing device 2. 
In other instances, GPU 12 may be present on a graphics card that is installed in a port in the motherboard of computing device 2 or may be otherwise incorporated within a peripheral device configured to interoperate with computing device 2. GPU 12 may include one or more processors, such as one or more microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), or other equivalent integrated or discrete logic circuitry. GPU 12 may also include one or more processor cores, so that GPU 12 may be referred to as a multi-core processor.[0025] GPU 12 may be directly coupled to graphics memory 40. Thus, GPU 12 may read data from and write data to graphics memory 40 without using a bus. In other words, GPU 12 may process data locally using a local storage, instead of off-chip memory. Such graphics memory 40 may be referred to as on-chip memory. This allows GPU 12 to operate in a more efficient manner by eliminating the need of GPU 12 to read and write data via a bus, which may experience heavy bus traffic. In some instances, however, GPU 12 may not include a separate memory, but instead utilize system memory 10 via a bus. Graphics memory 40 may include one or more volatile or non-volatile memories or storage devices, such as, e.g., random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), Flash memory, a magnetic data media or an optical storage media.[0026] In some examples, GPU 12 may store a fully formed image in system memory 10, where the image may be one or more surfaces. A surface, in some examples, may be a two dimensional block of pixels, where each of the pixels may have a color value. Throughout this disclosure, the term graphics data may, in a non-limiting example, include surfaces or portions of surfaces.
Display processor 14 may retrieve the image from system memory 10 and output values that cause the pixels of display 8 to illuminate to display the image. Display 8 may be the display of computing device 2 that displays the image content generated by GPU 12. Display 8 may be a liquid crystal display (LCD), an organic light emitting diode display (OLED), a cathode ray tube (CRT) display, a plasma display, or another type of display device.[0027] In accordance with aspects of the present disclosure, computing device 2 may allocate a plurality of blocks in memory, such as system memory 10 or graphics memory 40, wherein each of the plurality of blocks is of a uniform fixed size in the memory. Computing device 2 may further store a plurality of bandwidth-compressed graphics data into the respective plurality of blocks in the memory, wherein one or more of the plurality of bandwidth-compressed graphics data has a size that is smaller than the fixed size. Computing device 2 may further store data associated with the plurality of bandwidth-compressed graphics data into unused space of one or more of the plurality of blocks that contains the respective one or more of the plurality of bandwidth-compressed graphics data.[0028] FIG. 2 is a block diagram illustrating example implementations of CPU 6, GPU 12, and system memory 10 of FIG. 1 in further detail. As shown in FIG. 2, CPU 6 may include at least one software application 18, graphics API 20, and GPU driver 22, each of which may be one or more software applications or services that execute on CPU 6.[0029] Memory available to CPU 6 and GPU 12 may include system memory 10, frame buffer 16, and render targets 24. Frame buffer 16 may be a part of system memory 10 or may be separate from system memory 10, and may store rendered image data. GPU 12 may also render image data for storage in render targets 24.
Similar to frame buffer 16, render targets 24 may be a part of system memory 10 or may be separate from system memory 10.[0030] Software application 18 may be any application that utilizes the functionality of GPU 12. For example, software application 18 may be a GUI application, an operating system, a portable mapping application, a computer-aided design program for engineering or artistic applications, a video game application, or another type of software application that uses 2D or 3D graphics.[0031] Software application 18 may include one or more drawing instructions that instruct GPU 12 to render a graphical user interface (GUI) and/or a graphics scene. For example, the drawing instructions may include instructions that define a set of one or more graphics primitives to be rendered by GPU 12. In some examples, the drawing instructions may, collectively, define all or part of a plurality of windowing surfaces used in a GUI. In additional examples, the drawing instructions may, collectively, define all or part of a graphics scene that includes one or more graphics objects within a model space or world space defined by the application.[0032] Software application 18 may invoke GPU driver 22, via graphics API 20, to issue one or more commands to GPU 12 for rendering one or more graphics primitives into displayable graphics images. For example, software application 18 may invoke GPU driver 22, via graphics API 20, to provide primitive definitions to GPU 12. In some instances, the primitive definitions may be provided to GPU 12 in the form of a list of drawing primitives, e.g., triangles, rectangles, triangle fans, triangle strips, etc. The primitive definitions may include vertex specifications that specify one or more vertices associated with the primitives to be rendered. 
The vertex specifications may include positional coordinates for each vertex and, in some instances, other attributes associated with the vertex, such as, e.g., color coordinates, normal vectors, and texture coordinates. The primitive definitions may also include primitive type information (e.g., triangle, rectangle, triangle fan, triangle strip, etc.), scaling information, rotation information, and the like. Based on the instructions issued by software application 18 to GPU driver 22, GPU driver 22 may formulate one or more commands that specify one or more operations for GPU 12 to perform in order to render the primitive. When GPU 12 receives a command from CPU 6, processor cluster 46 may execute a graphics processing pipeline to decode the command and may configure the graphics processing pipeline to perform the operation specified in the command. For example, a command engine of the graphics processing pipeline may read primitive data and assemble the data into primitives for use by the other graphics pipeline stages in the graphics processing pipeline. After performing the specified operations, GPU 12 outputs the rendered data to frame buffer 16 associated with a display device or to one of render targets 24.[0033] Frame buffer 16 stores destination pixels for GPU 12. Each destination pixel may be associated with a unique screen pixel location. In some examples, frame buffer 16 may store color components and a destination alpha value for each destination pixel. For example, frame buffer 16 may store Red, Green, Blue, Alpha (RGBA) components for each pixel where the "RGB" components correspond to color values and the "A" component corresponds to a destination alpha value. Frame buffer 16 may also store depth values for each destination pixel. In this way, frame buffer 16 may be said to store graphics data (e.g., a surface).
Although frame buffer 16 and system memory 10 are illustrated as being separate memory units, in other examples, frame buffer 16 may be part of system memory 10. Once GPU 12 has rendered all of the pixels of a frame into frame buffer 16, frame buffer 16 may output the finished frame to display 8 for display.[0034] Similar to frame buffer 16, each of render targets 24 may also store destination pixels for GPU 12, including color values and/or depth values for pixels. Each of render targets 24 may store information for the same number of unique pixel locations as frame buffer 16 or may store a subset of the number of unique pixel locations as frame buffer 16.[0035] Processor cluster 46 may include one or more programmable processing units 42 and/or one or more fixed function processing units 44. Programmable processing unit 42 may include, for example, programmable shader units that are configured to execute one or more shader programs that are downloaded onto GPU 12 from CPU 6. In some examples, programmable processing units 42 may be referred to as "shader processors" or "unified shaders," and may perform geometry, vertex, pixel, or other shading operations to render graphics. The shader units may each include one or more components for fetching and decoding operations, one or more ALUs for carrying out arithmetic calculations, one or more memories, caches, and registers.[0036] GPU 12 may designate programmable processing units 42 to perform a variety of shading operations such as vertex shading, hull shading, domain shading, geometry shading, fragment shading, and the like by sending commands to programmable processing units 42 to execute one or more of a vertex shader stage, tessellation stages, a geometry shader stage, a rasterization stage, and a fragment shader stage in the graphics processing pipeline.
In some examples, GPU driver 22 may cause a compiler executing on CPU 6 to compile one or more shader programs, and to download the compiled shader programs onto programmable processing units 42 contained within GPU 12. The shader programs may be written in a high level shading language, such as, e.g., an OpenGL Shading Language (GLSL), a High Level Shading Language (HLSL), a C for Graphics (Cg) shading language, an OpenCL C kernel, etc. The compiled shader programs may include one or more instructions that control the operation of programmable processing units 42 within GPU 12. For example, the shader programs may include vertex shader programs that may be executed by programmable processing units 42 to perform the functions of the vertex shader stage, tessellation shader programs that may be executed by programmable processing units 42 to perform the functions of the tessellation stages, geometry shader programs that may be executed by programmable processing units 42 to perform the functions of the geometry shader stage and/or fragment shader programs that may be executed by programmable processing units 42 to perform the functions of the fragment shader stage. A vertex shader program may control the execution of a programmable vertex shader unit or a unified shader unit, and include instructions that specify one or more per-vertex operations.[0037] Processor cluster 46 may also include fixed function processing units 44. Fixed function processing units 44 may include hardware that is hard-wired to perform certain functions. Although fixed function processing units 44 may be configurable, via one or more control signals for example, to perform different functions, the fixed function hardware typically does not include a program memory that is capable of receiving user-compiled programs.
In some examples, fixed function processing units 44 in processor cluster 46 may include, for example, processing units that perform raster operations, such as, e.g., depth testing, scissors testing, alpha blending, low resolution depth testing, etc. to perform the functions of the rasterization stage of the graphics processing pipeline.[0038] Graphics memory 40 is on-chip storage or memory that is physically integrated into the integrated circuit of GPU 12. In some instances, because graphics memory 40 is on-chip, GPU 12 may be able to read values from or write values to graphics memory 40 more quickly than reading values from or writing values to system memory 10 via a system bus. [0039] In some examples, GPU 12 may operate according to a deferred rendering mode (also called binning rendering or tile-based rendering) to render graphics data. When operating according to the deferred rendering mode, processor cluster 46 within GPU 12 first performs a binning pass (also known as a tiling pass) to divide a frame into a plurality of tiles, and to determine which primitives are within each tile. In some examples, the binning pass may indicate whether or not a primitive is within a tile. In other examples, the binning pass may also include a depth test and indicate whether or not a particular primitive is visible in a rendered tile. For each of the plurality of tiles, processor cluster 46 then renders graphics data (color values of the pixels) of the tile to graphics memory 40 located locally on GPU 12, including performing the graphics processing pipeline to render each tile, and, when complete, reads the rendered graphics data from graphics memory 40 to frame buffer 16 or one of render targets 24.
In some examples, because each rendered tile includes the color values of the pixels of a two dimensional block of pixels, a tile may be considered a surface, or may be considered a portion of a surface that is the finally rendered image made up of a plurality of tiles.[0040] GPU 12 may divide each tile into a plurality of blocks of pixels. The size of the blocks of pixels may be similar to the size of the blocks of pixels on display 8 that correspond to one storage location in the low resolution buffer. GPU 12 may transform primitives of each tile into screen space, and may order the primitives with respect to each other from front to back, testing sub-tiles of the current tile to determine: 1) whether each primitive is included within the given sub-tile; and 2) if included in the given sub-tile, whether pixels of the primitive are occluded by pixels of any other primitive in the particular sub-tile.[0041] In some examples, during the binning pass, GPU 12 may also generate low resolution z (LRZ) data for blocks of pixels of each of the plurality of tiles and may store such LRZ data into a low resolution buffer in memory, such as system memory 10. Low resolution z refers to the fact that the low resolution buffer stores depth data associated with a block of pixels rather than for each pixel of each of the plurality of tiles. The low resolution buffer may be a two-dimensional buffer with a plurality of storage locations. Each storage location in the low resolution buffer may correspond to a block of pixels represented on display 8. In some examples, the number of storage locations within the low resolution buffer may be fewer than the number of pixels to be represented on display 8. An LRZ data may be depth data for a block of pixels (e.g., a 2x2 block of pixels) that contains the backmost depth value for the given block of pixels. A tile may be associated with one or more LRZ data. 
For example, given a tile that is an 8x8 block of pixels, the tile may include 16 LRZ data that are each associated with a given 2x2 pixel block of the tile, and each of the 16 LRZ data may contain the backmost depth value for the associated 2x2 pixel block of the tile.[0042] GPU 12 may determine the LRZ data based on determining the depth values of pixels of primitives that occupy the block of pixels associated with the LRZ data. Because LRZ data is depth data for a block of pixels rather than for an individual pixel, GPU 12 may be conservative in determining the LRZ data for each block of pixels. For example, if LRZ data is for a 2x2 block of pixels (p00, p01, p10, and p11), GPU 12 may set the corresponding LRZ data to be the depth data of the backmost pixel (i.e., the pixel that is furthest away from the camera). If pixels p00, p01, p10, and p11 have corresponding depth values of 0.1, 0.1, 0.2, and 0.15, respectively, where a lower value represents a depth that is further away from the camera than a higher value, GPU 12 may set the LRZ data for that pixel block to be 0.1.[0043] After updating the low resolution buffer with depth information of the pixels making up the rendered surface, GPU 12 may, tile-by-tile, render an image to graphics memory 40 based on the depth values stored in the low resolution buffer. To render pixels, for each pixel on the display, GPU 12 may determine which pixels to render from which primitives in the tile based on the depth values stored within the low resolution buffer. If GPU 12 determines, based on the depth values stored within the low resolution buffer, that pixels of a primitive are occluded in the final scene, GPU 12 may determine to not perform further pixel shading or fragment shading operations on those occluded pixels, thereby improving the performance of GPU 12. After each tile is rendered to graphics memory 40, GPU 12 may transfer the rendered tile from graphics memory 40 to memory 26.
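The worked LRZ example above can be reproduced with a short sketch. The function name `build_lrz` and the pure-Python list-of-lists depth buffer are illustrative assumptions; actual hardware would operate on the low resolution buffer directly.

```python
def build_lrz(depth, block=2):
    """For each `block` x `block` group of pixels, keep the backmost depth.

    Following the convention in the example above, a LOWER value is farther
    from the camera, so the backmost depth of a group is its minimum.
    """
    h, w = len(depth), len(depth[0])
    lrz = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            row.append(min(depth[yy][xx]
                           for yy in range(y, min(y + block, h))
                           for xx in range(x, min(x + block, w))))
        lrz.append(row)
    return lrz


# The 2x2 block from the example: p00=0.1, p01=0.1, p10=0.2, p11=0.15
print(build_lrz([[0.1, 0.1],
                 [0.2, 0.15]]))  # [[0.1]]
```

An 8x8 tile would thus yield a 4x4 grid of LRZ values, i.e., 16 LRZ data per tile.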
In this way, frame buffer 16 or one of render targets 24 may be filled tile-by-tile with rendered tiles from GPU 12 by transferring each of the rendered tiles from graphics memory 40 to frame buffer 16 or one of render targets 24, thereby rendering a surface into frame buffer 16 or one of render targets 24.[0044] When GPU 12 attempts to render additional primitives into the rendered surface, GPU 12 may utilize the constructed LRZ data for the surface to optimize the rendering of those primitives. GPU 12 may rasterize those primitives into pixels via the techniques of this disclosure and may perform low resolution depth testing to discard pixels that GPU 12 determines to be occluded. GPU 12 may, for each pixel, compare the depth value of the pixel with the depth value of the associated LRZ data (i.e., the LRZ data associated with the pixel location of the pixel being tested), and may discard the pixel if the depth value of the pixel is smaller (e.g., further away from the camera) than the depth value of the associated LRZ data. By discarding these occluded pixels, GPU 12 may omit the performance of any additional graphics rendering operations for those pixels, such as pixel shading operations and the like, thereby improving graphics processing performance of GPU 12.[0045] In some situations, GPU 12 may not reject pixels as necessarily being occluded by other pixels when GPU 12 performs low resolution testing of those pixels using LRZ data even if those pixels may be rejected during pixel-level depth testing of individual pixels. For example, given an LRZ data that represents a 2x2 block of pixels (p00, p01, p10, and p11), the LRZ data may be a depth value of 0.1, where a lower value represents a depth that is further away from the camera than a higher value, even though pixel p01 may have an actual depth value of 0.2.
Subsequently, GPU 12 may determine whether to render a primitive having new pixel p01' with a depth value of 0.15 at the same pixel location as pixel p01. Because the LRZ data is a depth value of 0.1, GPU 12 may nonetheless, based on the LRZ data, determine that the primitive associated with new pixel p01' will be visible in the finally rendered surface because pixel p01' has a depth value of 0.15 that is larger than the LRZ data's depth value of 0.1, even though the actual depth value of pixel p01 is 0.2. Due to GPU 12's determination that pixel p01' is visible based on the LRZ data, GPU 12 may perform graphics rendering operations for the pixel (e.g., fragment shading operations) before GPU 12 performs pixel-level depth testing on pixel p01' to determine that pixel p01' is not actually visible in the finally rendered scene and discards pixel p01', thereby preventing the color values of pixel p01' from being written into frame buffer 16 or one of render targets 24.[0046] Because GPU 12 performs pixel-level depth testing of each pixel after low-resolution depth testing using LRZ data, the use of LRZ data may be considered optional. While low-resolution depth testing may discard pixels prior to GPU 12 performing pixel shading operations on those pixels, GPU 12 may still ultimately perform per-pixel depth testing of each undiscarded pixel after GPU 12 performs pixel shading operations on those pixels. Thus, low-resolution depth testing using LRZ data may be considered an optimization to GPU 12's processing that saves GPU 12 from expending its processing to perform pixel shading on certain pixels that are discarded as a result of low-resolution depth testing.
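The two-stage test described above can be sketched as follows; the function names and the keep/discard convention are assumptions for illustration, not the device's actual interface.

```python
def lrz_test(pixel_depth, lrz_depth):
    """Low-resolution test: keep the pixel unless it is provably behind
    everything in its block. Lower value = farther from the camera, so the
    pixel is discarded only when its depth is smaller than the LRZ depth."""
    return pixel_depth >= lrz_depth


def per_pixel_test(pixel_depth, stored_depth):
    """Pixel-level test against the actual depth stored for that location."""
    return pixel_depth >= stored_depth


# Scenario from the p01' example: the block's LRZ value is 0.1, but pixel p01
# actually has depth 0.2. New pixel p01' at depth 0.15 passes the conservative
# LRZ test, yet is correctly rejected by the later per-pixel test.
print(lrz_test(0.15, 0.1))        # True: survives low-resolution testing
print(per_pixel_test(0.15, 0.2))  # False: discarded at pixel level
```

The conservative LRZ test never rejects a visible pixel; it can only let an occluded pixel through to the per-pixel stage, which is why the LRZ data is optional for correctness.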
As such, GPU 12 may still perform correctly to render graphics data even if GPU 12 does not perform low-resolution depth testing as part of its graphics processing.[0047] GPU 12 may also determine a tile-based hash code for each rendered tile based on the color data of the block of pixels included in each rendered tile, such that a tile-based hash code uniquely identifies tiles having different color data for their block of pixels. As discussed above, each rendered tile is a block (e.g., 8x8) of pixels, where each pixel has a color value. GPU 12 may associate tiles that contain different patterns of pixel values (e.g., a tile completely filled with red pixels and a tile completely filled with green pixels) with different tile-based hash codes, and may associate tiles that contain the same pattern of pixel values (e.g., two tiles that are each completely filled with red pixels) with the same tile-based hash code.[0048] Such tile-based hash codes may be useful when GPU 12 determines whether to perform a bit block transfer of color data corresponding to a tile from a first tile to a second tile. If the first tile and the second tile are each associated with the same tile-based hash code, GPU 12 may determine that no actual transfer of color data needs to occur because the first and second tiles contain the same set of color data for their respective blocks of pixels, thereby improving performance of computing device 2. In some examples, GPU 12 may determine a tile-based hash code for blocks of pixels that are smaller than the size of a tile. For example, if a tile comprises an 8x8 block of pixels, GPU 12 may nonetheless determine a tile-based hash code for each 4x4 block of pixels of a surface.
In this case, each tile may be associated with four tile-based hash codes, one for each 4x4 block of pixels it contains.[0049] As each rendered tile is transferred out of graphics memory 40 for storage in frame buffer 16 or one of render targets 24, GPU 12 may compress, via any suitable compression algorithm, each tile to more efficiently move the tile through the bus to frame buffer 16 or one of render targets 24. The resulting size of the compressed tiles may differ based on the variability of the contents of each tile. While some compressed tiles may be a fraction of the size of an uncompressed tile, other compressed tiles may be barely smaller than or the same size as that of an uncompressed tile or may not be compressed at all. Thus, a plurality of bandwidth-compressed tiles may include one or more uncompressed tiles amongst other compressed tiles.[0050] In some examples, GPU 12 may determine a tile-based hash code for each compressed tile. Thus, rather than generating tile-based hash codes for the underlying surface color values of the uncompressed tile, GPU 12 may generate tile-based hash codes based on the data of each tile after compression, thereby acting as checksums for the plurality of compressed tiles. In this example, two tile-based hash codes may be the same if the two associated compressed tiles, after compression, are the same.[0051] Because uncompressed tiles of a given rendered image are all the same size, frame buffer 16 or one of render targets 24 is configured to have enough space to store all of the uncompressed tiles of a surface in fixed-sized blocks that are each the same size as an uncompressed tile. Further, because compressing tiles that make up a surface may result in tiles of different sizes that vary based on the color values of each specific tile, GPU 12 may not be able to allocate custom blocks of varying size in memory 26 specifically for storing the compressed tiles.
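The hash-code comparison used to skip redundant bit block transfers can be sketched like this. SHA-1 stands in for whatever hash function the hardware would actually use, and the names `tile_hash` and `blit` are illustrative.

```python
import hashlib


def tile_hash(tile_bytes):
    """Hash code identifying a tile's contents: equal contents yield equal
    codes, so the codes can also act as checksums for compressed tiles."""
    return hashlib.sha1(tile_bytes).digest()


def blit(dst_tile, src_tile):
    """Bit block transfer that is skipped when the hash codes show the two
    tiles already hold the same color data."""
    if tile_hash(dst_tile) == tile_hash(src_tile):
        return dst_tile, False      # no transfer needed
    return bytearray(src_tile), True


red = bytes([255, 0, 0]) * 64       # a tile filled with red pixels
green = bytes([0, 255, 0]) * 64     # a tile filled with green pixels
_, copied = blit(bytearray(red), red)
print(copied)  # False: identical tiles, so the transfer is skipped
```

In practice the codes would be computed once per tile and stored (for example, in the unused block space discussed below), so the comparison costs two lookups rather than two hashes.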
Therefore, GPU 12 may utilize the same plurality of blocks allocated for storing uncompressed tiles of a rendered image by storing the plurality of compressed tiles into the plurality of blocks, such that each compressed tile is stored in one of the blocks.[0052] Due to memory 26 storing the compressed tiles in blocks that are each the same size as an uncompressed tile, memory 26 does not actually conserve any space by storing the plurality of compressed tiles instead of uncompressed tiles. Even though the plurality of compressed tiles may take up less space in memory 26 than uncompressed tiles, nevertheless the same amount of space in memory 26 is reserved for the plurality of blocks regardless of whether compressed tiles or uncompressed tiles are stored into the plurality of blocks.[0053] Therefore, when GPU 12 stores the compressed tiles into the plurality of blocks, the plurality of blocks may include unused space that is not taken up by storing the plurality of compressed tiles. For each compressed tile that takes up less than the entire space of the corresponding block in which the compressed tile is stored, the corresponding block may have unused space. As such, according to the techniques of this disclosure, GPU 12 may be configured to utilize the unused space to store additional data that is associated with the rendered surface that is made up of the plurality of compressed tiles. For example, instead of storing LRZ data and tile-based hash codes for the plurality of compressed tiles in dedicated buffers in memory 26, GPU 12 may store such data in the unused space of the plurality of blocks.[0054] Because unused space in a block that stores a compressed tile is not guaranteed, GPU 12 may be able to store data associated with a particular compressed tile, such as LRZ data and tile-based hash codes, only if the block that stores the particular compressed tile has unused space.
However, if a compressed tile fully occupies a block, GPU 12 may not be able to store data associated with the particular compressed tile in the block. Thus, GPU 12 may be able to store only data that is optional for each portion of the surface associated with a corresponding compressed tile into the unused spaces of the blocks.[0055] GPU 12 may determine the unused space available in each of the plurality of blocks resulting from the plurality of blocks storing the compressed tiles. For example, GPU 12 may determine the size of a block in the plurality of blocks, and may determine the size of each of the compressed tiles. If GPU 12 determines that the size of a particular compressed tile is smaller than the size of a block in the plurality of blocks, GPU 12 may determine that the block that stores the particular compressed tile may have unused space.[0056] In response to GPU 12 determining that one or more of the plurality of blocks include unused space, GPU 12 may store optimization surfaces that GPU 12 may utilize to improve its performance into the unused space of the one or more of the plurality of blocks. For example, LRZ data is useful because it indicates primitives that are not visible in the finally rendered surface, enabling GPU 12 to skip rasterization of those primitives. However, without the LRZ data, GPU 12 may still correctly render a given surface by performing rasterization of primitives regardless of whether those primitives are visible in the finally rendered surface. As such, while LRZ data may improve the performance of GPU 12 as it renders a surface, it is not information that is critical for GPU 12 to correctly render a surface.[0057] Tile-based hash codes are similar to LRZ data in that they are useful in improving the performance of GPU 12 but are not critical for GPU 12 to correctly perform graphics operations.
Without tile-based hash codes, GPU 12 may still correctly perform functions such as bit-block transfers of color data, but may perform redundant transfers of color data between portions of the surface that have the same block of color data.[0058] FIGS. 3A-3F are conceptual diagrams illustrating example techniques for storing bandwidth-compressed graphical data in memory. As shown in FIG. 3A, GPU 12 may store bandwidth-compressed graphics data 56A-56N ("bandwidth-compressed graphics data 56") into blocks 54A-54N ("blocks 54") in memory 26, such as system memory 10, frame buffer 16, one or more of render targets 24, and the like. Bandwidth-compressed graphics data 56, in some examples, may each be a tile (e.g., a portion of an image surface) making up a rendered scene or surface that is compressed by GPU 12 in order to more efficiently move graphics data through buses and between components of computing device 2 (e.g., between GPU 12 and memory 26).[0059] Blocks 54 may be contiguous in memory 26 and may each be the same uniform fixed size to store each of bandwidth-compressed graphics data 56. In some examples, if each of bandwidth-compressed graphics data 56 is a bandwidth-compressed tile, GPU 12 may allocate, in memory 26, the same number of blocks 54 as the number of tiles making up a rendered surface, such that each one of blocks 54 may store a corresponding one of bandwidth-compressed graphics data 56.[0060] Because each of blocks 54 is large enough to store uncompressed graphics data of a rendered surface, storing bandwidth-compressed graphics data 56 into blocks 54 may result in unused space remaining in blocks 54. In the example of FIG.
3A, unused space 58A, 58B, 58C, 58D, and 58E ("unused space 58") may remain in blocks 54A, 54B, 54C, 54E, and 54N, respectively, when blocks 54A, 54B, 54C, 54E, and 54N store respective bandwidth-compressed graphics data 56A, 56B, 56C, 56E, and 56N.[0061] As discussed above, GPU 12 may determine whether each block of blocks 54 has unused space 58 by comparing the size of each bandwidth-compressed graphics data 56 with the size of a block of blocks 54. GPU 12 may create and store flag surfaces 52A-52N ("flag surfaces 52") in memory 26, where each of flag surfaces 52 is associated with one of blocks 54, and may indicate the amount of unused space in a corresponding block of blocks 54.[0062] In the example of FIG. 3A, flag surfaces 52 may store the fraction, out of four, of the amount of unused space in a corresponding block of blocks 54. Flag surface 52A may indicate that unused space takes up ¾ of block 54A. Flag surface 52B may indicate that unused space takes up ½ of block 54B. Flag surface 52C may indicate that unused space takes up ¼ of block 54C. Flag surface 52D may indicate that block 54D has no unused space. Flag surface 52E may indicate that unused space takes up ¼ of block 54E. Flag surface 52F may indicate that block 54F has no unused space. Flag surface 52N may indicate that unused space takes up ¼ of block 54N. Because flag surfaces 52 are also stored in memory 26, storing bandwidth-compressed graphics data 56 in memory may take up more space in memory 26 than storing comparable uncompressed graphics data 56.[0063] As discussed above, GPU 12 may store data associated with bandwidth-compressed graphics data 56 into unused space 58. As shown in FIG. 3B, GPU 12 may determine, based on flag surfaces 52, the blocks of blocks 54 that have unused space 58, and may store LRZ data 60A-60E into unused space 58A-58E of blocks 54. Each of LRZ data 60A-60E may be of a fixed size.
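The quarter-fraction flag values in the FIG. 3A example can be computed by quantizing each block's unused fraction. A sketch assuming a hypothetical 4 KiB block size; the disclosure does not specify a rounding rule, so rounding down (never over-reporting free space) is an assumption.

```python
def flag_value(block_size: int, compressed_size: int) -> int:
    """Quantize a block's unused fraction to quarters as in the FIG. 3A
    example: 0 = no unused space, 1 = 1/4, 2 = 1/2, 3 = 3/4 unused.
    Rounds down so the flag never over-reports free space (assumption)."""
    unused = max(block_size - compressed_size, 0)
    return (unused * 4) // block_size

assert flag_value(4096, 1024) == 3   # 3/4 of the block is unused
assert flag_value(4096, 2048) == 2   # 1/2 unused
assert flag_value(4096, 3072) == 1   # 1/4 unused
assert flag_value(4096, 4096) == 0   # fully occupied
```

A two-bit flag per block is enough for these four values, which keeps the flag surfaces small relative to the blocks they describe.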
Because only blocks 54A, 54B, 54C, 54E, and 54N have respective unused space 58A-58E, GPU 12 may, in the example of FIG. 3B, only store LRZ data 60A-60E that includes depth information for respective bandwidth-compressed graphics data 56A, 56B, 56C, 56E, and 56N into unused space 58 of blocks 54. Thus, depth information for bandwidth-compressed graphics data 56D and 56F is not stored into unused space 58 of blocks 54.[0064] LRZ data 60A may be associated with bandwidth-compressed graphics data 56A in that LRZ data 60A may include LRZ data for the pixels that make up the portion of the surface that corresponds to bandwidth-compressed graphics data 56A. For example, if bandwidth-compressed graphics data 56A includes graphics data with respect to a particular 8x8 block of pixels, LRZ data 60A, in one example, may include a corresponding plurality of LRZ data for each 2x2 pixel block of the 8x8 block of pixels. Similarly, LRZ data 60B may include LRZ data for the pixels that make up the portion of the surface that corresponds to bandwidth-compressed graphics data 56B, LRZ data 60C may include LRZ data for the pixels that make up the portion of the surface that corresponds to bandwidth-compressed graphics data 56C, LRZ data 60D may include LRZ data for the pixels that make up the portion of the surface that corresponds to bandwidth-compressed graphics data 56E, and LRZ data 60E may include LRZ data for the pixels that make up the portion of the surface that corresponds to bandwidth-compressed graphics data 56N.[0065] For bandwidth-compressed graphics data 56D and 56F that do not have associated LRZ data stored in associated blocks 54D and 54F, GPU 12 may associate a default depth value with each of bandwidth-compressed graphics data 56D and 56F that fully occupy their respective blocks 54D and 54F.
The default depth value may be a backmost depth value that indicates that additional pixels to be rendered into the portions of the surface associated with bandwidth-compressed graphics data 56D and 56F are in front of the pixels of the portions of the surface associated with bandwidth-compressed graphics data 56D and 56F, and thus will be visible, regardless of whether those additional pixels are actually visible in the finally rendered scene.[0066] To accommodate depth information for each of bandwidth-compressed graphics data 56, GPU 12 may store depth information for multiple bandwidth-compressed graphics data 56 into the unused space 58 of a single block of blocks 54. As shown in FIG. 3C, GPU 12 may store LRZ data 60F into unused space 58A of block 54A that includes LRZ data for multiple consecutive bandwidth-compressed graphics data 56. LRZ data stored into unused space 58 of a single block of blocks 54 may include depth information for the associated bandwidth-compressed graphics data of bandwidth-compressed graphics data 56 as well as depth information for a next consecutive specified number of bandwidth-compressed graphics data 56. For example, if the LRZ data stored into unused space 58 of a single block of blocks 54 includes LRZ data for six of bandwidth-compressed graphics data 56, LRZ data 60F may include depth data for bandwidth-compressed graphics data 56A-56F. Similarly, LRZ data 60G may include depth data for bandwidth-compressed graphics data 56N as well as the next five subsequent bandwidth-compressed graphics data 56. In this way, blocks 54 may store depth data for each bandwidth-compressed graphics data 56.[0067] As shown in FIG. 3D, GPU 12 may also store tile-based hash codes 62A-62E into unused space 58A-58E of blocks 54. Each of tile-based hash codes 62A-62E may be of the same size. Because only blocks 54A, 54B, 54C, 54E, and 54N have respective unused space 58A-58E, GPU 12 may, in the example of FIG.
3D, only store tile-based hash codes 62A-62E that identify the color values for respective bandwidth-compressed graphics data 56A, 56B, 56C, 56E, and 56N into unused space 58 of blocks 54. Thus, tile-based hash codes for bandwidth-compressed graphics data 56D and 56F are not stored into unused space 58 of blocks 54.[0068] To accommodate tile-based hash codes for each of bandwidth-compressed graphics data 56, GPU 12 may store tile-based hash codes for multiple bandwidth-compressed graphics data 56 into the unused space 58 of a single block of blocks 54. As shown in FIG. 3E, GPU 12 may store tile-based hash code 62F into unused space 58C of block 54C that includes tile-based hash codes for multiple consecutive bandwidth-compressed graphics data 56. Tile-based hash codes stored into unused space 58 of a single block of blocks 54 may include tile-based hash codes for the associated bandwidth-compressed graphics data of bandwidth-compressed graphics data 56 as well as tile-based hash codes for a next consecutive specified number of bandwidth-compressed graphics data 56 or a previous consecutive number of bandwidth-compressed graphics data 56. For example, if the tile-based hash codes stored into unused space 58 of a single block of blocks 54 include tile-based hash codes for three of bandwidth-compressed graphics data 56, tile-based hash code 62F may include tile-based hash codes for each of bandwidth-compressed graphics data 56A-56C. Similarly, tile-based hash code 62G may include tile-based hash codes for bandwidth-compressed graphics data 56N as well as the previous two bandwidth-compressed graphics data 56. In this way, blocks 54 may store tile-based hash codes for each bandwidth-compressed graphics data 56.[0069] In some examples, GPU 12 may store multiple types of data associated with bandwidth-compressed graphics data 56 into unused space 58 of blocks 54 at the same time.
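The consecutive-grouping scheme of paragraphs [0066] and [0068] — a record stored in one block's unused space covering that block's tile plus a specified number of consecutive tiles — can be sketched as a single packing pass over the blocks. The greedy forward scan and the `assign_group_records` name are illustrative assumptions; the variant that covers preceding tiles (as with tile-based hash code 62G) is omitted for brevity.

```python
def assign_group_records(has_space: list[bool], group: int) -> dict[int, range]:
    """Map block index -> run of consecutive tiles whose associated data
    (LRZ depth records or tile-based hash codes) is packed into that
    block's unused space. A record in block i covers tile i and the next
    (group - 1) tiles; fully occupied blocks store nothing."""
    records: dict[int, range] = {}
    covered_until = 0  # first tile whose data is not yet stored
    for i, free in enumerate(has_space):
        if free and i >= covered_until:
            records[i] = range(i, min(i + group, len(has_space)))
            covered_until = i + group
    return records

# Blocks 0-2 and 4 have unused space; blocks 3 and 5 are full.
# Records covering six tiles (the FIG. 3C example): one record suffices.
assert assign_group_records([True, True, True, False, True, False], 6) == \
    {0: range(0, 6)}
# Records covering three tiles (the FIG. 3E example): two records, and
# tile 3's data finds no home under this forward-only scan.
assert assign_group_records([True, True, True, False, True, False], 3) == \
    {0: range(0, 3), 4: range(4, 6)}
```

The second case shows why the disclosure also allows a record to cover preceding tiles: a backward-looking record in block 4 could pick up tile 3's data.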
For example, unused space 58 of blocks 54 may store both depth data as well as tile-based hash codes for each of bandwidth-compressed graphics data 56. As shown in FIG. 3F, GPU 12 may store LRZ data 60H into unused space 58A of block 54A and LRZ data 60I into unused space 58D of block 54E. GPU 12 may also store tile-based hash code 62H into unused space 58C of block 54C and tile-based hash code 62I into unused space 58E of block 54N. As such, unused space 58 of blocks 54 may store both LRZ data and tile-based hash codes for bandwidth-compressed graphics data 56 at the same time.[0070] While FIGS. 3A-3F illustrate that GPU 12 is able to store LRZ data and tile-based hash codes into unused space 58 of blocks 54, this disclosure is not necessarily limited to storing only LRZ data and tile-based hash codes into unused space 58 of blocks 54. Rather, GPU 12 may store any other data related to bandwidth-compressed graphics data 56 into unused space 58 of blocks 54.[0071] FIG. 4 is a flowchart illustrating an example process for storing bandwidth-compressed graphical data in memory. As shown in FIG. 4, the process may include storing, by GPU 12, a plurality of bandwidth-compressed graphics data 56 into a respective plurality of blocks 54 in memory 26, wherein each of the plurality of blocks 54 is of a uniform fixed size in the memory 26, and wherein one or more of the plurality of bandwidth-compressed graphics data 56 has a size that is smaller than the fixed size (102).
The process may further include storing, by GPU 12, data associated with the plurality of bandwidth-compressed graphics data 56 into unused space 58 of one or more of the plurality of blocks 54 that contains the respective one or more of the plurality of bandwidth-compressed graphics data 56 (104).[0072] In some examples, the data associated with the plurality of bandwidth-compressed graphics data 56 comprises depth data for the one or more of the plurality of bandwidth-compressed graphics data 56 stored in the one or more of the plurality of blocks 54. In some examples, a second one or more of the plurality of bandwidth-compressed graphics data 56 may fully occupy a second one or more of the plurality of blocks 54, and the process may further include associating, by GPU 12, a default depth value for each of the second one or more of the plurality of bandwidth-compressed graphics data 56. In some examples, the data associated with the plurality of bandwidth-compressed graphics data 56 comprises depth data for each of the plurality of bandwidth-compressed graphics data 56 stored in the one or more of the plurality of blocks 54.[0073] In some examples, the data associated with the plurality of bandwidth-compressed graphics data 56 comprises one or more hash codes that identify each of the one or more of the plurality of bandwidth-compressed graphics data 56 stored in the one or more of the plurality of blocks 54.
In some examples, the data associated with the plurality of bandwidth-compressed graphics data 56 comprises hash codes that identify each of the plurality of bandwidth-compressed graphics data 56 stored in the one or more of the plurality of blocks 54.[0074] In some examples, the data associated with the plurality of bandwidth-compressed graphics data 56 comprises hash codes that identify each of the plurality of bandwidth-compressed graphics data 56 stored in the one or more of the plurality of blocks 54 and depth data for each of the plurality of bandwidth-compressed graphics data 56 stored in the one or more of the plurality of blocks 54.[0075] In some examples, the data associated with the plurality of bandwidth-compressed graphics data 56 comprises optimization surfaces associated with the plurality of bandwidth-compressed graphics data 56. In some examples, the plurality of bandwidth-compressed graphics data 56 may comprise bandwidth-compressed portions of an image surface.[0076] In some examples, storing, by GPU 12, the data associated with the plurality of bandwidth-compressed graphics data 56 into the unused space of the one or more of the plurality of blocks 54 that contains the respective one or more of the plurality of bandwidth-compressed graphics data 56 may further include determining, by GPU 12, that the one or more of the plurality of blocks 54 include the unused space, and in response to determining that the one or more of the plurality of blocks 54 include the unused space, storing, by GPU 12, the data associated with the plurality of bandwidth-compressed graphics data 56 into the unused space of the one or more of the plurality of blocks 54 that contains the respective one or more of the plurality of bandwidth-compressed graphics data 56.[0077] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media may include computer data storage media or communication media including any medium that facilitates transfer of a computer program from one place to another. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. 
Combinations of the above should also be included within the scope of computer-readable media.[0078] The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor" and "processing unit," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.[0079] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (i.e., a chip set). Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.[0080] Various examples have been described. These and other examples are within the scope of the following claims.
In general, in one aspect, a laptop computer includes two planes of orthogonal proximity sensors (one in the display, one in the keyboard) to create a three dimensional user interface. The intersections of the proximity sensors along three axes define a plurality of coverage areas. An object (e.g., user's finger, pointer) is determined to be within a coverage area if the corresponding intersecting proximity sensors indicate presence of the object. A user may identify an item on the display for selection by placing an object within an appropriate coverage area, select the item by moving the object toward the display, and identify an action to be taken on the display by moving the object through one or more coverage areas. A 3D image of an object may be generated based on the coverage areas indicating the object is located therein. The 3D image may be utilized to authenticate a user.
CLAIMS What is claimed: 1. An apparatus, comprising: a first body supporting a display and a first array of proximity sensors; and a second body extending away from an edge of the first body, wherein the second body supports a second array of proximity sensors, wherein the first array and the second array of proximity sensors are to define a three dimensional (3D) user interface for the apparatus. 2. The apparatus of claim 1, wherein a coverage zone for the 3D user interface is between the first body and the second body. 3. The apparatus of claim 1, further comprising logic at least a portion of which is in hardware, the logic configured to define a plurality of coverage areas corresponding to intersections of the proximity sensors. 4. The apparatus of claim 3, wherein the logic is configured to indicate presence of an object in a coverage area in response to detection by the proximity sensors associated therewith. 5. The apparatus of claim 4, wherein the logic is configured to allow a user to identify an item on the display for selection by placing an object within a corresponding coverage area. 6. The apparatus of claim 5, wherein the object is a user's finger. 7. The apparatus of claim 5, wherein the object is a user's hand. 8. The apparatus of claim 5, wherein the object is a pointing device. 9. The apparatus of claim 5, wherein the logic is configured to highlight the item identified for selection. 10. The apparatus of claim 9, wherein the logic is configured to select the item in response to movement of the object toward the display. 11. The apparatus of claim 4, wherein the logic is configured to identify an action to be taken on the display in response to detection of movement of an object through one or more of the coverage areas. 12.
The apparatus of claim 1, wherein the first array of proximity sensors includes a plurality of sensors arranged in rows and columns; the second array of proximity sensors includes a plurality of sensors arranged in rows and columns. 13. The apparatus of claim 12, wherein the rows of proximity sensors in the first array are aligned relative to the rows of proximity sensors in the second array. 14. The apparatus of claim 12, wherein the columns of proximity sensors in the first array are aligned relative to the columns of proximity sensors in the second array. 15. The apparatus of claim 1, wherein the second body is a user interface device. 16. The apparatus of claim 15, wherein the user interface device is a keyboard. 17. The apparatus of claim 1, wherein the second body extends from a lower edge of the first body. 18. The apparatus of claim 1, wherein the second body extends from a side edge of the first body. 19. The apparatus of claim 4, wherein the logic is configured to generate a 3D image of an object in response to detection of the object in one or more coverage areas. 20. The apparatus of claim 19, wherein the logic is configured to utilize the 3D image to perform authentication. 21. The apparatus of claim 19, wherein the object is a user's hand. 22. The apparatus of claim 19, wherein the object is a user's head. 23. An apparatus, comprising: a display, wherein the display is to support a first array of proximity sensors; a user interface extending away from an edge of the display, wherein the user interface is to support a second array of proximity sensors; and logic at least a portion of which is in hardware configured to detect three dimensional (3D) user interactions with the display based at least in part on input from the first array of proximity sensors and the second array of proximity sensors. 24.
The apparatus of claim 23, wherein the logic is configured to detect the 3D user interactions within a coverage zone between the display and the user interface. 25. The apparatus of claim 23, wherein the logic is configured to define a plurality of coverage areas corresponding to intersections of the proximity sensors. 26. The apparatus of claim 25, wherein the logic is configured to indicate presence of an object in a coverage area in response to detection by the proximity sensors associated therewith. 27. The apparatus of claim 25, wherein the logic is configured to identify selection of an item on the display in response to placement of an object within a corresponding coverage area. 28. The apparatus of claim 27, wherein the object includes at least one selected from a list comprising a user's finger, a user's hand, and a pointing device. 29. The apparatus of claim 27, wherein the logic is configured to highlight the item identified for selection. 30. The apparatus of claim 27, wherein the logic is configured to identify selection of the item in response to movement of the object toward the display. 31. The apparatus of claim 25, wherein the logic is configured to identify an action to be taken on the display in response to detection of movement of an object through one or more of the coverage areas. 32. The apparatus of claim 25, wherein the logic is configured to generate a 3D image of an object in response to detection of the object in one or more coverage areas. 33. The apparatus of claim 32, wherein the logic is configured to authenticate a user utilizing the 3D image.
THREE-DIMENSIONAL USER INTERFACE DEVICE BACKGROUND Users interact with consumer electronic devices (CEDs) in various different manners. A user may interact with a CED using a remote device that is not connected to the CED but that communicates with the CED. For example, the remote device may be a remote control, a wireless mouse, a wireless keyboard, or a gaming controller. The user may interact with devices (e.g., keyboards, touchpad) that are part of or are connected to the CED. The CED may include a display that is touch sensitive so that the user may interact with the CED by touching the display. The various user interfaces are two-dimensional, thus limiting the type of interactions that may occur with the CED. BRIEF DESCRIPTION OF THE DRAWINGS The features and advantages of the various embodiments will become apparent from the following detailed description in which: FIGs. 1A-C illustrate various views of an example laptop computer; FIG. 2 illustrates an example laptop computer that uses proximity sensors in the lid (display) and the base (user interface), according to one embodiment; FIG. 3 illustrates an example coverage zone created for the laptop computer by the proximity sensors, according to one embodiment; FIG. 4 illustrates the creation of example defined areas (e.g., boxes) within the coverage zone, according to one embodiment; and FIG. 5 illustrates an example system diagram for a laptop computer providing a 3D user interface, according to one embodiment. DETAILED DESCRIPTION FIGs. 1A-C illustrate various views of an example laptop computer 100. The computer 100 includes an upper frame (lid) 110 and a lower frame (base) 150 that are pivotally connected to one another via a hinge or the like (not numbered). The computer 100 may switch between an open configuration where the lid 110 extends in an upward direction from the base 150 to a closed configuration where the lid 110 lays on top of the base 150.
The lid 110 may include a display 120 where content can be viewed by a user. The base 150 may include one or more user interfaces to interact with the computer 100. The user interfaces may include, for example, a keyboard 160 and a touchpad 170. When the computer 100 is operational it may be in the open configuration (see FIGs. 1A and 1C) and when the computer is off and/or being transported it may be in the closed configuration (see FIG. 1B). When the computer 100 is operational a user interacts with the computer 100 via the user interfaces. The keyboard 160 may enable the user to, for example, enter data and/or select certain parameters using the keys. The touchpad 170 may enable the user to, for example, scroll around the display 120 in order to view and/or select certain content by moving their finger therearound (e.g., detected finger movements may be mapped to screen movements). Whether using the keyboard 160, the touchpad 170, or other user interface devices (e.g., mouse) the interaction is limited to a two-dimensional (2D) interaction (the user interaction is limited to the plane of the display 120). For example, the user may move left, right, up, down, and combinations thereof such as diagonals. Making the display 120 a touchscreen display like those utilized in tablet computers would provide additional user interface options. However, such a device would still only provide a 2D user interface (a device where the user may only interact with the display in the plane of the display). Proximity sensors may detect if an object is within a certain distance thereof. For example, a proximity sensor utilized on a computer may detect if a human is within normal operating distance (e.g., 3 feet) thereof. However, the proximity sensor may not be able to determine the exact distance the human is from the computer (e.g., 1 foot versus 3 feet).
The proximity sensors could be, for example, inductive sensors, capacitive sensors, magnetic sensors, photoelectric sensors, other types of sensors or some combination thereof. The photoelectric sensors may include a light source (e.g., infrared light) and a receiver to determine if the light is reflected back. Proximity sensors could be utilized in the display 120 to detect location and/or movement of the user (or particular part of the user such as hand or finger) with respect to the display 120 without the need to touch the display 120 or utilize the user interface devices (e.g., keyboard 160, touchpad 170). However, in order to select certain content or take certain actions the user may need to touch the display 120 and/or utilize the user interface devices (e.g., keyboard 160, touchpad 170). That is, interfacing with such a device is still limited to 2D interactions. FIG. 2 illustrates an example laptop computer 200 that uses proximity sensors 210 in the lid (display) 220 and the base (user interface) 230. The proximity sensors 210 could be, for example, inductive sensors, capacitive sensors, magnetic sensors, photoelectric sensors, other types of sensors or some combination thereof. The proximity sensors 210 are illustrated as being visible for ease of description. However, the proximity sensors 210 may not be visible and/or may not be noticeable to the user. The proximity sensors 210 may not affect the contents presented on the display 220 and may not affect the user utilizing the user interface (keyboard) 230. The proximity sensors 210 may be organized as arrays that include rows and columns of sensors 210 that extend across the length and height of the display 220 and length and depth of the user interface (keyboard) 230. The columns of sensors 210 on the display 220 may be aligned with the columns of sensors 210 on the keyboard 230.
Utilizing two separate planes of sensors 210 enables the computer 200 to not only detect location and/or movement of the user (or particular part of the user such as hand or finger) or device with respect to the display 220 but also the distance away from the display. FIG. 3 illustrates an example coverage zone 340 created for the laptop computer 200. The proximity sensors 210 on the display 220 may be capable of detecting objects approximately a distance equal to, or slightly greater than, the distance the keyboard 230 extends from the display 220. The proximity sensors 210 on the keyboard 230 may be capable of detecting objects approximately a distance equal to, or slightly greater than, the distance the display 220 extends from the keyboard 230. The coverage zone 340 may be the area where the sensors 210 on the display 220 overlap in coverage with the sensors 210 on the keyboard 230. That is, the coverage zone 340 may extend up as high as the display 220, extend out as far as the keyboard 230, and capture the length of the display 220 and keyboard 230. FIG. 4 illustrates the creation of example defined areas (e.g., boxes) 410 within the coverage zone 340. The proximity sensors (not illustrated) may be oriented on the x-axis (length of display) and the z-axis (height of display) for the display (not illustrated). The proximity sensors may be oriented on the x-axis (length of keyboard) and the y-axis (depth of keyboard) for the keyboard (not illustrated). The sensors on the x-axis for the display and the keyboard may be aligned with each other. Each defined area may be centered around an intersection of a proximity sensor for each of the axes. The size of the area may be based on the number of sensors and the proximity of the sensors to each other. As illustrated, there are three areas defined for each axis (e.g., x=1-3, y=1-3, and z=1-3) indicating that there are three sensors associated with each axis.
Accordingly, the display and the keyboard may each have 9 sensors associated therewith (3x3). There may be a total of 27 (3x3x3) defined areas. By way of example, a defined area 410A may be centered around the intersection of the proximity sensors at locations x=1 (first aligned column), y=2 (second row on keyboard) and z=2 (second row on display). A defined area 410B may include proximity sensors at locations x=3 (third aligned column), y=1 (first row on keyboard) and z=3 (third row on display). A defined area 410C may include proximity sensors at locations x=2 (second aligned column), y=2 (second row on keyboard) and z=3 (third row on display). A defined area 410D may include proximity sensors at locations x=3 (third aligned column), y=3 (third row on keyboard) and z=1 (first row on display). If the three proximity sensors associated with a defined area indicate the presence of an item (e.g., finger, stylus), the computer may determine that an object is located in that defined area. The defined areas may be associated with content that is presented on a corresponding region of the display. According to one embodiment, the proximity sensors on the x-axis and the z-axis may be utilized to define the area on the display. For example, an icon for a particular program may be associated with a defined area. If a finger, hand or other device is determined to be present in the defined area, the icon may be illuminated, and if the user wants to select the icon they may make a movement towards the display to select the icon. According to one embodiment, the proximity sensors on the three axes may be utilized to assign items on the display to defined areas. For example, if the display presented a 3D desktop, the various items on the desktop could be associated with the different defined areas. For example, items in the upper right hand corner of the display may be associated with the proximity sensors in column x=3 and row z=3.
An item in the upper right hand corner of the 3D desktop appearing closest to the user (e.g., first of three overlapping icons) may be associated with row y=3, an item appearing a middle distance from the user (e.g., second of three overlapping icons) may be associated with row y=2, and an item appearing the furthest distance from the user (e.g., third of three overlapping icons) may be associated with row y=1. If a finger, hand or other device is determined to be present in the defined area, the icon may be illuminated. If the user wants to select the icon, they may make a defined movement (e.g., a gesture towards the display) to select the icon. In addition to selecting icons from the desktop, the proximity sensors could be utilized to track movements related to the display, similar to the way a touch screen and/or touchpad does, but without the need to actually interface with the device. For example, if the user wanted to flip pages of a book they were viewing on the display, they could swipe their finger from right to left to advance a page or left to right to go back a page. The actions taken based on the movements of a user's finger (hand, device or the like) may depend on the operational mode of the computer and any applications that may be running thereon. Hardware and/or software processing may be utilized to analyze the data from the proximity sensors to determine the location and movement of the device (e.g., finger, hand, wand) with respect to the display. The proximity sensors may provide the data to a processor, which analyzes the data in order to detect and/or recognize movements and/or objects within the coverage zone 340. As noted above, the actions taken based on the detection/recognition may depend on the operational status of the computer. According to one embodiment, the use of the proximity sensors could allow the computer to act as a 3D scanner.
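The defined-area and gesture logic described above can be sketched in software as follows; the 3x3x3 grid, function names, and data shapes are illustrative assumptions rather than the patent's implementation:

```python
# Sketch: mapping three axes of proximity sensors (aligned x columns on both
# surfaces, z rows on the display, y rows on the keyboard) to defined areas,
# and detecting a simple right-to-left swipe. Names and the grid size are
# assumptions for illustration only.

def area_occupied(x_hits, y_hits, z_hits, x, y, z):
    """An area (x, y, z) is occupied when all three associated sensors fire."""
    return x_hits[x] and y_hits[y] and z_hits[z]

def occupied_areas(x_hits, y_hits, z_hits):
    """List every defined area whose three sensors all indicate presence."""
    return [(x, y, z)
            for x in range(len(x_hits))
            for y in range(len(y_hits))
            for z in range(len(z_hits))
            if area_occupied(x_hits, y_hits, z_hits, x, y, z)]

def detect_swipe(x_positions):
    """Right-to-left swipe: x coordinate strictly decreasing over samples."""
    return len(x_positions) >= 2 and all(
        b < a for a, b in zip(x_positions, x_positions[1:]))

# A finger over column x=2, keyboard row y=1, display row z=0:
x_hits, y_hits, z_hits = [False, False, True], [False, True, False], [True, False, False]
print(occupied_areas(x_hits, y_hits, z_hits))   # [(2, 1, 0)]
print(detect_swipe([2, 1, 0]))                  # True (page-forward gesture)
```

In practice the sensor arrays would be sampled continuously and the occupied areas tracked over time, but the intersection test above is the core of the scheme.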
Based on the defined areas that are determined to contain the object, processing could be performed to get a sense of the size and shape of the object. It should be noted that the 3D image generated may be limited to the surfaces of the object facing the display and the keyboard. According to one embodiment, the generation of a 3D image could be used for authentication and/or security access. For example, when a user attempts to log on to the computer, the computer may utilize the proximity sensors to generate a 3D image of, for example, the user's hand or face. The generated 3D image may be compared to an authenticated 3D image to determine if access should be granted or not. Specific movements of, for example, a user's finger through the coverage area may also be used for authentication and/or security access. For example, in order to authenticate a user, the detected movements of the user's finger may be compared to stored moves. The authentication moves may be, for example, the swiping of a finger from an upper left front portion of the coverage area to the lower right back portion. As one skilled in the art would recognize, the more proximity sensors utilized, the finer the granularity of the 3D user interface. Accordingly, the use of the proximity sensors for 3D scanning and/or user authentication may require a minimum number of sensors per area. FIG. 5 illustrates an example system diagram for a laptop computer 500 providing a 3D user interface. The computer 500 may include a display 510, a plurality of proximity sensors 520, and logic 530. The display 510 is to present information. The proximity sensors 520 are to detect the presence of items (e.g., a user's finger, a user's hand, a stylus) in relation thereto. The logic 530 is to process the input from the proximity sensors. The logic 530 may be hardware and/or software logic. The logic 530 may be one or more processors utilized within the computer 500.
The logic 530 may be configured to define a plurality of coverage areas corresponding to intersections of the proximity sensors. The logic 530 may be configured to indicate the presence of an object in a coverage area in response to detection by the proximity sensors associated therewith. The logic 530 may be configured to allow a user to identify an item on the display 510 for selection by placing an object within a corresponding coverage area. The logic 530 may be configured to highlight the item identified for selection. The logic 530 may be configured to select the item in response to movement of the object toward the display. The logic 530 may be configured to identify an action to be taken on the display in response to detection of movement of an object through one or more of the coverage areas. The logic 530 may be configured to generate a 3D image of an object in response to detection of the object in one or more coverage areas. The logic 530 may be configured to utilize the 3D image to perform authentication. The 3D user interface has been described with specific reference to a laptop computer but is not limited thereto. Rather, the 3D interface could be utilized with any apparatus that includes a display and a keyboard extending therefrom, such as a desktop computer, certain tablet computers with snap-on keyboards such as the Surface™ by Microsoft®, certain wireless phones, and/or certain personal digital assistants (PDAs) and the like. Furthermore, the second surface is not limited to a keyboard. Rather, the other surface may be any type of user interface or device that extends from a display. For example, the second surface may be another display or may be a cover for the device that extends outward when the cover is opened. For example, a 3D interface could be provided for a tablet computer by utilizing a protective cover for the device.
The proximity sensors within the protective cover would need to provide data to the computer so some type of communications interface would be required. Although the disclosure has been illustrated by reference to specific embodiments, it will be apparent that the disclosure is not limited thereto as various changes and modifications may be made thereto without departing from the scope. Reference to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described therein is included in at least one embodiment. Thus, the appearances of the phrase "in one embodiment" or "in an embodiment" appearing in various places throughout the specification are not necessarily all referring to the same embodiment. The various embodiments are intended to be protected broadly within the spirit and scope of the appended claims.
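The movement-based authentication described above might be sketched as follows; the grid coordinates, tolerance parameter, and function names are assumptions for illustration only, not the patent's implementation:

```python
# Sketch: authenticating a user by comparing a detected finger path through
# the coverage area against a stored authentication path. Each sample is an
# (x, y, z) defined-area coordinate; `tolerance` allows small deviations.

def paths_match(detected, stored, tolerance=0):
    """Paths match when they have the same length and each sample is within
    `tolerance` grid cells of the stored sample on every axis."""
    if len(detected) != len(stored):
        return False
    return all(
        all(abs(d - s) <= tolerance for d, s in zip(d_pt, s_pt))
        for d_pt, s_pt in zip(detected, stored))

# Stored move: swipe from the upper-left-front portion of the coverage area
# to the lower-right-back portion.
stored_move = [(0, 2, 2), (1, 1, 1), (2, 0, 0)]
print(paths_match([(0, 2, 2), (1, 1, 1), (2, 0, 0)], stored_move))  # True
print(paths_match([(0, 2, 2), (2, 0, 0)], stored_move))             # False
```

A real implementation would likely resample the detected path and tolerate timing variation; the exact-length comparison here is the simplest possible form.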
The present invention provides a technique to detect errors in a computer system. More particularly, at least one embodiment of the invention relates to using redundant virtual machines and comparison logic to detect errors occurring in input/output (I/O) operations in a computer system.
1. A device comprising: a circuit that compares data corresponding to at least two redundant accesses to an input/output (I/O) device to determine whether an error related to any of the at least two redundant accesses has occurred.
2. The apparatus of claim 1, further comprising two or more redundant access interface storage areas to store information corresponding to the two or more redundant accesses.
3. The apparatus of claim 2, wherein the two or more redundant access interface storage areas are located in an I/O controller device and are to store control information corresponding to the I/O controller device.
4. The apparatus of claim 2, wherein the two or more redundant access interface storage areas are located in a memory device and are to store the data corresponding to the at least two redundant accesses.
5. The apparatus of claim 1, wherein the two or more redundant accesses are to be generated by two or more corresponding redundant virtual machines (RVMs).
6. The apparatus of claim 5, wherein an interrupt is generated if an error is detected in any of the at least two redundant accesses.
7. The apparatus of claim 6, wherein the interrupt is to be received by a virtual machine manager (VMM) corresponding to the two or more RVMs.
8. A system comprising: a processor, at least some processing resources of which are to be represented by two or more redundant virtual machines (RVMs); an input/output (I/O) controller including output error detection logic to compare data corresponding to two or more accesses from the two or more RVMs; and an I/O device to receive the two or more accesses from the two or more RVMs.
9. The system of claim 8, wherein the I/O controller further comprises input replication logic to generate two or more sets of I/O controller interface information corresponding to the two or more RVMs.
10. The system of claim 9, wherein the two or more sets of I/O controller interface information are to be stored in two or more sets of registers.
11. The system of claim 8, further comprising a memory to store data from the two or more accesses.
12. The system of claim 11, wherein the data from the two or more accesses is to be stored in two or more buffers in the memory, the two or more buffers corresponding to the two or more accesses.
13. The system of claim 9, wherein the two or more accesses correspond to programmed I/O (PIO) accesses.
14. The system of claim 9, wherein the two or more accesses correspond to direct memory accesses (DMA).
15. The system of claim 9, wherein an interrupt is generated if an error is detected by the error detection logic.
16. The system of claim 15, wherein the interrupt is to be received by a virtual machine manager (VMM) corresponding to the at least two RVMs.
17. A method comprising: determining whether a first access to an input/output (I/O) device corresponds to a programmed I/O (PIO) access or a direct memory access (DMA); if the first access is a PIO access, comparing data from a next adjacent access with data from the first access to detect whether the data from the first access and the data from the next adjacent access are equal; and if the first access is a DMA, comparing descriptor information from the next adjacent access with descriptor information from the first access.
18. The method of claim 17, further comprising, if the descriptor information from the next adjacent access matches the descriptor information of the first access, comparing data from the next adjacent access with data from the first access.
19. The method of claim 18, further comprising, if the descriptor information from the first access matches the descriptor information from the next adjacent access, detecting whether data from the first access is equal to data from the next adjacent access.
20. The method of claim 19, further comprising, if the descriptor field of the first access and the descriptor field of the next adjacent access do not match, comparing data from a subsequent access to the next adjacent access.
21. The method of claim 20, further comprising generating an interrupt if data from the first access and data from the next adjacent access, or data from a subsequent access to the next adjacent access, are not equal.
22. The method of claim 21, wherein the first access, the next adjacent access, and the subsequent access to the next adjacent access are from any of two or more redundant virtual machines (RVMs).
23. The method of claim 17, further comprising generating an interrupt if data from the first access and data from the next adjacent access are not equal.
24. The method of claim 22, further comprising generating an interrupt if data from the first access and data from the next adjacent access, or data from subsequent accesses of the next adjacent access, are not equal.
25. A processor comprising: processing resources represented by at least two redundant virtual machines (RVMs), wherein data corresponding to accesses from the RVMs to input/output (I/O) devices are compared with each other by a comparison circuit to determine whether a soft error has occurred.
26. The processor of claim 25, wherein the access corresponds to a programmed I/O (PIO) access to the I/O device.
27. The processor of claim 25, wherein the access corresponds to a direct memory access (DMA) to the I/O device.
28. The processor of claim 25, wherein the access is a read access.
29. The processor of claim 25, wherein the access is a write access.
30. The processor of claim 25, wherein a virtual machine manager (VMM) is to help handle the soft errors.
Error detection using redundant virtual machines

Technical field

The present disclosure relates to the field of computing and computer systems, and more specifically to the field of error detection in computer systems using virtual machine monitors.

Background

Some computer systems may be susceptible to errors during operation. For example, transient errors ("soft errors") caused by a computer system's exposure to radiation or other electromagnetic fields can corrupt data being transmitted through the computer system, causing incorrect or unexpected computation results. For example, soft errors can corrupt data within a computer system that is passed between a software application running on a processor and an input/output (I/O) data stream generated by the software application. In this example, soft errors may exist in the application software, operating system, system software, or the I/O data itself. The problem of soft errors in computer systems has been addressed through techniques such as redundant software execution, in which pieces of software are processed two or more times, sometimes on different processing hardware, in order to produce multiple results that can be compared with each other to detect errors in the results. Redundant software processing, although somewhat effective at detecting soft errors in computer systems, requires additional computing resources, such as redundant hardware, to process the software redundantly. Another technique used in some computer systems is to virtualize hardware in software and to process different code segments redundantly within redundant virtual versions of the hardware in order to detect soft errors.
Redundant virtual hardware, or redundant "virtual machines" (RVMs), can provide a software representation of the underlying processing hardware so that software code can be processed redundantly in parallel on the RVMs. Figure 1 shows a redundant virtual machine environment in which software segments, such as software threads, can be processed redundantly in order to detect soft errors in the software. Specifically, FIG. 1 shows two virtual machines (VMs) representing the same processing hardware, in which software threads can be processed redundantly and in parallel. The results from redundant copies of one or more operations in the software thread can be compared with each other to detect soft errors before or after the software thread actually commits to the hardware context state. However, in order to ensure that the software is processed identically on the two VMs, the execution path of the code through the VMs must be controlled (or managed) to be the same by the software modules of a replication management layer (RML). In addition, the RML may need to compare the outputs of the two VMs. Unfortunately, the RML, or equivalent software modules, introduces additional processing overhead that can cause performance degradation in computer systems. In addition, the RML itself may contain soft errors and therefore cannot be trusted.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated in the drawings by way of example and not limitation. FIG. 1 illustrates a prior art redundant virtual machine (RVM) environment. FIG. 2 illustrates components of a computer system that can be used in conjunction with one or more embodiments of the invention. FIG. 3 illustrates a processor and input/output (I/O) controller that can be used in cooperation with one or more embodiments of the present invention. FIG. 4 is a flowchart illustrating a number of operations that can be used in one or more embodiments of the invention. FIG.
5 is a shared-bus computer system in which one or more embodiments of the present invention can be performed. FIG. 6 is a point-to-point computer system in which one or more embodiments of the invention may be implemented.

DETAILED DESCRIPTION

Embodiments of the invention relate to computer systems. More specifically, at least one embodiment of the present invention relates to techniques for detecting and responding to errors in corresponding input/output (I/O) operations within a computer system. At least one embodiment of the present invention uses hardware logic to perform a portion of the functions related to detecting soft errors using redundant virtual machines (RVMs). More specifically, one or more embodiments of the present invention use designated pairs of memory areas and corresponding input replication and output comparison logic to detect soft errors related to the transmission of I/O data between one or more processors and one or more I/O devices. In one embodiment, the designated storage area includes two or more register groups located within, or otherwise associated with, the I/O controller to store data communicated between two or more virtual machines and the I/O devices. In one embodiment, the designated memory area may also include two or more segments of memory (e.g., VM buffers) to store data related to direct memory access (DMA) operations between the memory and the I/O device. Embodiments of the present invention may incorporate logic within, or associated with, the I/O controller device to perform multiple functions performed by the prior art RML. For example, in one embodiment, logic within an I/O controller related to two or more RVMs representing processing hardware resources may be used to replicate the input provided to the RVMs by the I/O device and compare the RVM-generated output to determine if a soft error has occurred.
Advantageously, embodiments that include input replication and/or output comparison functionality in hardware logic can improve processing throughput, reduce software overhead, and reduce the chance of soft errors affecting the soft error detection processing itself. Figure 2 illustrates components of a computer system in which one embodiment of the invention may be implemented. Specifically, FIG. 2 shows a CPU 201, which includes two RVMs 205, 210 to represent various processing resources of the CPU. In addition, FIG. 2 includes an I/O controller 215 to interface data between the CPU (and RVMs) and one or more I/O devices 220. Also included in FIG. 2 are two representations 225, 227 of at least some of the control registers associated with the I/O controller. In one embodiment, the two representations correspond to different RVMs and are used to store control information. The control information is used by the RVMs to send data to or receive data from the I/O controller. In one embodiment, these two representations are registers within, or otherwise related to, the I/O controller; however, in other embodiments these representations are locations in a memory structure such as DRAM. Also located in the I/O controller of FIG. 2 is input replication and output comparison logic 230 to replicate the input provided to the RVMs and to compare the output generated by the RVMs in performing input-related tasks.
For example, in one embodiment, for a given software operation to be performed by the RVMs, the control interface information corresponding to the I/O controller may be stored in a register group within, or otherwise related to, the I/O controller, and the output data of the RVMs can be compared with each other by the comparison logic to ensure that no soft error that corrupts the output has occurred. Moreover, the information returned from the I/O device to be sent to the RVMs can also be replicated using the logic 230 in order to ensure that the two RVMs receive the same data, thereby maintaining consistency between the RVMs. Similarly, the results of operations being performed on the RVMs can be compared to ensure that no soft errors have occurred in the execution of these operations or in the result data itself. In one embodiment, if the result of the comparison indicates that the output data is inconsistent, error correction logic or software or both can be invoked to handle the error and recover from it. For example, in one embodiment, a software handler is invoked in response to an error being detected, which prevents the error from putting the processing hardware in an incorrect state, or, if the hardware has already been placed in an incorrect state, puts the hardware in a correct or known state. After the handler recovers from a soft error, in one embodiment, the operation in which the soft error occurred may be performed again. In one embodiment, the I/O controller of FIG. 2 facilitates comparison of the output of PIO accesses by the RVMs by waiting for the same access to the duplicate register set before performing the I/O operation on the I/O device.
In one embodiment, PIO operations may include PIO writes and/or side-effect operations (if any) related to PIO read operations. In the case of unbuffered I/O reads and writes, these can be performed non-speculatively and in program order, and a device register access from one RVM can be compared with, and verified against, the immediately following device access in program order from another RVM. To prevent one RVM from issuing several I/O device accesses before each access can be verified, in one embodiment, the I/O device can defer its response to one RVM's access until the other RVM's access has occurred (e.g., using a bus-level retry response). If the subsequent RVM access does not arrive within a certain time limit (a programmable time limit in one embodiment), the I/O device can respond with a bus error that can be intercepted by the VMM related to the RVMs and responded to accordingly (i.e., by retrying or handling the situation as an error). In one embodiment, if a subsequent RVM access to the I/O device does not match the first RVM access to the I/O device, for example because the access is of a different type, is directed to a different register, or (in the case of a write) carries different data values, the I/O controller can also signal an error to the VMM via a bus error response and/or an interrupt. In one embodiment, the I/O controller of FIG. 2 supports input replication for PIO accesses by returning the same value to both RVMs on the corresponding accesses. For example, for device register reads that do not have side effects, or for reads whose return value is not related to side effects, the device can cache the response value for the earlier RVM access so that a consistent value is returned in response to the subsequent RVM access, even if the internal state has since changed.
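A minimal software model of the PIO pairing and comparison scheme described above might look like the following; the class and method names, return codes, and the pairing policy are illustrative assumptions, not the controller's actual design:

```python
# Sketch: I/O-controller-side comparison of paired PIO accesses from two
# RVMs. The controller holds the first access until the matching access
# from the other RVM arrives; a mismatched type, register, or write data
# value would be signaled to the VMM as an error.

class PIOComparator:
    def __init__(self):
        self.pending = None  # first of a redundant pair, awaiting its twin

    def access(self, rvm_id, kind, register, data=None):
        """Returns 'wait' for the first access of a pair, 'ok' on a
        verified pair, or 'error' (to be raised to the VMM) on a mismatch."""
        if self.pending is None:
            self.pending = (rvm_id, kind, register, data)
            return "wait"                      # e.g., bus-level retry response
        first, self.pending = self.pending, None
        if first[0] == rvm_id:                 # same RVM issued both: not a pair
            return "error"
        if (kind, register, data) != first[1:]:
            return "error"                     # type/register/data mismatch
        return "ok"                            # verified; forward to device

cmp_logic = PIOComparator()
print(cmp_logic.access(0, "write", 0x10, 0xAB))  # 'wait'
print(cmp_logic.access(1, "write", 0x10, 0xAB))  # 'ok'
```

The timeout path (responding with a bus error if the twin access never arrives) is omitted here, since it depends on bus-level timing not modeled in software.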
Similarly, if unbuffered I/O reads and writes are performed non-speculatively and in program order, then in one embodiment, responses to PIO reads may be synchronized with respect to program flow within the RVMs. Therefore, in such an embodiment, the device does not need to participate in the specific timing of the response. FIG. 3 illustrates a plurality of components related to at least one embodiment of the present invention, in which information is transferred from or to an I/O device through DMA transfers. Specifically, FIG. 3 shows a CPU 301, for which two or more RVMs (not shown) may be used to represent multiple resources. Also shown in FIG. 3 is a memory 305, which can be used to store information communicated between the two or more RVMs and the I/O device 320 through the memory controller 310 and the I/O controller 315. Specifically, the memory 305 may be a DRAM. For example, the buffer 325 may be designated as corresponding to one RVM and the buffer 330 may be designated as corresponding to the other RVM. As in the example shown in FIG. 2, input replication and/or output comparison logic may be included in, or otherwise associated with, the I/O controller 315 to compare the input and/or output of corresponding software operations being performed by the RVMs. In addition, the I/O controller control information may be represented by two or more register banks (not shown) corresponding to the two or more RVMs, as in the example shown in FIG. 2. However, in the case of DMA, as opposed to PIO accesses, data written from an RVM to an I/O device or from an I/O device to an RVM is first stored in the corresponding RVM buffer (325 or 330). In one embodiment, if DMA addresses are remapped for virtualized I/O access, the RVM buffers may correspond to the same physical address but with different I/O remapping contexts. Otherwise, in other embodiments, the buffers may be located at different physical addresses.
In one embodiment, only the contents of the buffers must be verified or copied, so differences in buffer addresses may not matter. In one embodiment, logic within the I/O controller performs output comparison on outgoing DMA transfers (to I/O devices) by waiting until it has received descriptor data from both RVMs. Descriptor data can be provided in a system in which DMA transfers are supported. The I/O controller may then compare the data buffer length and/or other parameters (e.g., disk block offset) of the pair of RVM descriptors. If the data buffer length and/or other parameters match, the I/O controller can fetch the data contents from the two buffers and compare them on a bit-by-bit, byte-by-byte, or word-by-word (or some other interval size) basis. If the contents of the two buffers match, in one embodiment, the I/O operation is verified and forwarded to the device. If there is any mismatch in operating parameters or data, this may be an indication of a soft error, and the I/O controller may initiate an interrupt, which will be handled by the VMM. In one embodiment, input replication on an incoming DMA transfer (from a device) can be handled in a manner similar to the output comparison described above. After the data transfer is complete, in one embodiment, the data can be written to physical memory twice, once at each corresponding RVM location. In one embodiment, input replication may require a completion notification from the I/O controller to the CPU.
For example, if the I/O device driver is polling for the completion of a DMA buffer, the asynchronous nature of the DMA transfer may cause one RVM to interpret the descriptor data as indicating that the DMA is complete while another RVM at the same logical point in its execution does not, thus leading to possible divergence in their execution paths. In one embodiment, the I/O controller is prevented from writing the descriptor completion flag while an RVM is executing an interrupt service routine (ISR), in order to prevent the divergence of the RVM execution paths mentioned above. In one embodiment, DMA buffer transfers completed during execution of the ISR may not write to their corresponding descriptors until the RVM exits the ISR. In one embodiment, the device driver may access specific device registers at the entry to the ISR or at the exit from the ISR in order to defer descriptor updates. In one embodiment, instead of writing the descriptor information to the memory-based DMA descriptor field, the I/O controller may signal completion of a DMA request by incrementing a counter associated with the corresponding DMA buffer in memory. In this embodiment, the completion notification may then occur via a PIO read of this register, allowing the use of the PIO input replication technique described above. FIG. 4 is a flowchart showing a plurality of operations that can be used in at least one embodiment of the present invention. At operation 401, it is determined whether an access (e.g., a read or write) to the I/O device is a PIO access or a DMA access. If the access is a PIO access, adjacent accesses can be considered redundant accesses from two or more RVMs. Therefore, neighboring accesses from the RVMs can be compared with each other to determine whether an error in the access has occurred (at operation 403).
At operation 405, if an error has occurred, an interrupt may be generated and handled by the VMM of the corresponding RVMs accordingly (at operation 407). On the other hand, if it is determined that the access is a DMA access, a comparison is made at operation 410 between two or more access-related descriptors from a corresponding number of RVMs. In one embodiment, the corresponding access descriptors may be composed of information such as data buffer length, offset information, and the like. If the descriptors match, then at operation 412, the data subsequently stored in the buffers of the corresponding RVMs in memory may be compared with each other to determine whether an error has occurred. If an error occurs in the data or in the descriptors, an interrupt is generated at operation 420 and handled by the VMM of the corresponding RVMs in an appropriate manner. FIG. 5 illustrates a front-side bus (FSB) computer system in which one embodiment of the present invention may be used. The processor 505 accesses data from a level one (L1) cache memory 510 and a main memory 515. In other embodiments of the invention, the cache memory may be a level two (L2) cache or another memory within a computer system memory hierarchy. Further, in some embodiments, the computer system of FIG. 5 may include both an L1 cache and an L2 cache. Shown within the processor of FIG. 5 is a storage area 506 for the machine state. In one embodiment, the storage area may be a set of registers, while in other embodiments, the storage area may be another storage structure. A processor may have any number of processing cores.
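The comparison flow of FIG. 4 can be sketched as follows; the dictionaries standing in for accesses, descriptors, and buffers are illustrative assumptions, not the patent's data structures:

```python
# Sketch of the FIG. 4 flow: classify a redundant access pair as PIO or DMA
# (operation 401), compare descriptors first for DMA (operation 410), then
# compare the buffered data (operation 412), and raise an interrupt to the
# VMM on any mismatch (operations 405/420).

def check_redundant_pair(kind, first, second):
    """Returns 'ok' on a verified pair or 'interrupt' (to be handled by the
    VMM of the corresponding RVMs) on a mismatch."""
    if kind == "PIO":
        # Operation 403: compare neighboring accesses directly.
        return "ok" if first["data"] == second["data"] else "interrupt"
    # DMA, operation 410: compare descriptors (buffer length, offset, ...).
    if first["descriptor"] != second["descriptor"]:
        return "interrupt"
    # Operation 412: compare the RVM buffers' contents.
    return "ok" if first["buffer"] == second["buffer"] else "interrupt"

dma_a = {"descriptor": {"len": 512, "offset": 0}, "buffer": b"payload"}
dma_b = {"descriptor": {"len": 512, "offset": 0}, "buffer": b"payload"}
print(check_redundant_pair("DMA", dma_a, dma_b))              # ok
print(check_redundant_pair("PIO", {"data": 1}, {"data": 2}))  # interrupt
```

In hardware the two legs of the comparison would of course arrive asynchronously; this sketch only captures the decision logic once both accesses are available.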
However, other embodiments of the present invention may be implemented in other devices in the system, such as a separate bus agent, or may be distributed throughout the system in hardware, software, or some combination thereof. The main memory may be implemented from a variety of memory sources, such as dynamic random access memory (DRAM), a hard disk drive (HDD) 520, or a memory source remote from the computer system through a network interface 530 containing various storage devices and technologies. The cache memory may be located within or near the processor, such as on the processor's local bus 507. In addition, the cache memory may contain relatively fast memory cells, such as six-transistor (6T) cells, or other memory cells of approximately equal or faster access speed. The computer system of FIG. 5 may be a point-to-point (PtP) network of bus agents, such as microprocessors, that communicate via bus signals dedicated to each agent on the PtP network. FIG. 6 illustrates a computer system arranged in a point-to-point (PtP) configuration. Specifically, FIG. 6 shows a system in which processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The system of FIG. 6 may also include several processors, of which only two, processors 670 and 680, are shown for clarity. The processors 670, 680 may include local memory controller hubs (MCH) 672, 682 to connect with memories 22, 24, respectively. The processors 670 and 680 may exchange data via a point-to-point (PtP) interface 650 using PtP interface circuits 678, 688. The processors 670, 680 may each exchange data with a chipset 690 via individual PtP interfaces 652, 654 using point-to-point interface circuits 676, 694, 686, 698. The chipset 690 may also exchange data with a high-performance graphics circuit 638 via a high-performance graphics interface 639.
Embodiments of the present invention may be located in any processor with any number of processing cores, or in each of the PtP bus agents of FIG. 6. However, other embodiments of the present invention may exist in other circuits, logic units, or devices within the system of FIG. 6. Furthermore, in other embodiments of the present invention, functionality may be distributed among several of the circuits, logic units, or devices illustrated in FIG. 6. The processor referred to herein, or any other component designed according to an embodiment of the present invention, may be designed in various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of ways. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally or alternatively, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level where they may be modeled with data representing the physical placement of various devices. In the case where conventional semiconductor fabrication techniques are used, the data representing the device placement model may be data specifying the presence or absence of various features on different mask layers (masks being used to produce the integrated circuit). In any representation of the design, the data may be stored in any form of machine-readable medium. The machine-readable medium may be a memory, a magnetic or optical storage medium such as a disc, or an optical or electrical wave modulated or otherwise generated to transmit such information. Any of these media may "carry" or "indicate" the design, or other information used in an embodiment of the invention, such as the instructions in an error recovery routine.
When an electrical carrier wave indicating or carrying the instructions or design is transmitted, to the extent that copying, buffering, or retransmission of the electrical signal is performed, a new copy is made. Thus, the actions of a communication provider or a network provider may constitute making copies of an article, e.g., a carrier wave, embodying techniques of the present invention. Accordingly, techniques for handling memory accesses (e.g., loads or stores) are disclosed. While certain embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of, and not restrictive on, the broad invention, and that this invention is not limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements, without departing from the principles of the present disclosure or the scope of the appended claims. Aspects of one or more embodiments of the invention may be described, discussed, or otherwise referred to in an advertisement for a processor or computer system in which one or more embodiments of the invention may be used. Such advertisements may include, but are not limited to, news print, magazines, billboards, or other paper or otherwise tangible media. In particular, various aspects of one or more embodiments of the invention may be advertised on the Internet via websites, "pop-up" advertisements, or other web-based media, whether or not a server hosting the program to generate the website or pop-up is located in the United States of America or its territories.
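As an illustrative sketch only of the FIG. 4 error-detection flow described above: the function name, data shapes, and return values below are assumptions for illustration and are not from the source; the source specifies only the comparison steps (operations 401, 403, 410, 412) and the interrupt generation (operations 405/407, 420).

```python
# Hypothetical sketch of the FIG. 4 flow for checking redundant I/O
# accesses from two or more RVMs. Names and data shapes are assumed.

def check_accesses(kind, rvm_values):
    """Operation 401 has classified the access as "pio" or "dma".
    Returns "interrupt" if the redundant accesses disagree, else "ok"."""
    if kind == "pio":
        # Operation 403: corresponding PIO accesses are redundant; compare.
        first = rvm_values[0]
        if any(v != first for v in rvm_values[1:]):
            return "interrupt"   # operations 405/407: VMM handles the error
        return "ok"
    elif kind == "dma":
        # Operation 410: compare access descriptors (buffer length, offset...).
        descriptors = [v["descriptor"] for v in rvm_values]
        if any(d != descriptors[0] for d in descriptors[1:]):
            return "interrupt"   # operation 420
        # Operation 412: descriptors match, so compare the buffered data.
        buffers = [v["buffer"] for v in rvm_values]
        if any(b != buffers[0] for b in buffers[1:]):
            return "interrupt"   # operation 420
        return "ok"
    raise ValueError("access must be classified as PIO or DMA")
```

In this sketch the comparison is pairwise against the first RVM's value, which is equivalent to checking that all redundant copies agree.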
To provide methods and apparatus for providing holistic global performance and power management.SOLUTION: In an embodiment, logic (e.g., coupled to each compute node of a plurality of compute nodes) causes determination of a policy for power and performance management across the plurality of compute nodes. The policy is coordinated across the plurality of compute nodes to manage a job to one or more objective functions, where the job includes a plurality of tasks to be run concurrently on the plurality of compute nodes.SELECTED DRAWING: Figure 2
An apparatus comprising a multi-chip integrated circuit (IC) package, the multi-chip IC package including: a plurality of processor IC chips, each processor IC chip including a plurality of processor cores and a first power manager to control power consumption and performance of the processor IC chip, the first power manager to manage one or more core clock frequencies associated with one or more of the plurality of processor cores; and an interconnect IC chip coupled to one or more of the processor IC chips, the interconnect IC chip including: a memory controller to couple a memory device to system components including the plurality of processor cores; an I/O interface to couple one or more input/output (I/O) devices to the interconnect IC chip; and a second power manager coupled to a scalable control fabric to coordinate performance and power policies across all IC chips in the multi-chip IC package, including the processor IC chips and the interconnect IC chip, the second power manager to aggregate power and/or performance telemetry data received from a first plurality of power managers including the first power manager, and to transmit control signals to the first plurality of power managers based on the aggregated power and/or performance telemetry data to indicate one or more power and/or performance constraints to the first plurality of power managers, wherein each of the first plurality of power managers, independently of the others of the first plurality of power managers, controls the power consumption and/or performance of its respective processor IC chip in accordance with the power and/or performance constraints indicated by the second power manager.
The apparatus of claim 1, further comprising a scalable control fabric coupled to the one or more processor IC chips and the interconnect IC chip.
The apparatus of claim 2, wherein the scalable control fabric comprises a scalable overlay operable on top of a physical communication fabric that supports data signals in addition to control signals.
The apparatus of claim 1, wherein, in response to receiving a first control signal, the first power manager modifies the frequency of at least one associated core.
The apparatus of claim 1, wherein the second power manager comprises circuitry to execute firmware or software to aggregate the power and/or performance telemetry data received from the first power manager and to transmit the control signals to the first power manager.
The apparatus of claim 4, wherein the first power manager comprises circuitry to execute firmware or software to control the power consumption and/or performance of its respective processor IC chip in accordance with the power and/or performance constraints indicated by the second power manager.
The apparatus of claim 6, wherein the second power manager and the first power manager comprise the same circuitry but perform different functions based on the hierarchical relationship between the second power manager and the first power manager.
The apparatus of claim 7, wherein the second power manager and the first power manager execute different program code sequences to perform the different functions.
The apparatus of claim 8, wherein the different program code sequences comprise firmware or software program code.
The apparatus of claim 9, wherein the firmware or software comprises machine learning firmware or software to learn how to allocate power among execution resources of the system components and to individual processors.
The apparatus of claim 10, wherein the machine learning firmware or software comprises reinforcement learning firmware or software.
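As a rough illustration of the two-level hierarchy recited in the claims above (a sketch under stated assumptions: the class names, the toy linear power model, and the proportional allocation policy are all invented for illustration and are not from the source), a second power manager may aggregate telemetry from the per-chip first power managers and push power constraints back down, while each first power manager independently maps its constraint onto core clock frequencies:

```python
# Hypothetical sketch of the claimed power-manager hierarchy.
# The power model (10 W per GHz per core) and the proportional split
# are illustrative assumptions only.

class FirstPowerManager:
    """Per-processor-IC-chip manager: controls core clock frequencies."""
    def __init__(self, core_freqs_ghz):
        self.core_freqs_ghz = core_freqs_ghz
        self.power_cap_w = None

    def telemetry(self):
        # Report power/performance telemetry upward (toy power model).
        return {"power_w": 10.0 * sum(self.core_freqs_ghz)}

    def apply_constraint(self, power_cap_w):
        # Independently scale core frequencies to honor the indicated cap.
        self.power_cap_w = power_cap_w
        current = self.telemetry()["power_w"]
        if current > power_cap_w:
            scale = power_cap_w / current
            self.core_freqs_ghz = [f * scale for f in self.core_freqs_ghz]

class SecondPowerManager:
    """Package-level manager: aggregates telemetry, distributes constraints."""
    def __init__(self, first_managers, package_cap_w):
        self.first_managers = first_managers
        self.package_cap_w = package_cap_w

    def coordinate(self):
        reports = [m.telemetry() for m in self.first_managers]
        total = sum(r["power_w"] for r in reports)
        for mgr, rep in zip(self.first_managers, reports):
            # Split the package cap in proportion to each chip's demand.
            mgr.apply_constraint(self.package_cap_w * rep["power_w"] / total)
```

For example, two chips each running two cores at 2.0 GHz draw 40 W apiece under the toy model; with a 60 W package cap, `coordinate()` assigns each chip a 30 W constraint, and each first power manager independently scales its core frequencies to 1.5 GHz.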
Holistic Global Performance and Power Management

The present disclosure relates generally to the field of electronics. More specifically, some embodiments relate to power management for servers and other computing devices. A high performance computing (HPC) system can include a large number of nodes connected by a fabric for distributed computing. Further, an application is divided into tasks that run simultaneously across the nodes in the HPC system. These tasks are broken down into sequential milestones, and the tasks are expected to reach each of these milestones at the same time. Unfortunately, if one node completes its work towards the next milestone later than the others, the entire application stops progressing until the slowest task completes its work. When this happens, the application loses potential performance and power is wasted on the nodes that have to wait. The detailed description is provided with reference to the accompanying drawings. In the drawings, the leftmost digit of a reference number identifies the figure in which the reference number first appears. The use of the same reference number in different drawings indicates similar or identical items. FIG. 1 shows a block diagram of a computing system according to some embodiments. FIG. 2 shows a block diagram of a holistic global performance and power management (HGPPM) system according to an embodiment. FIG. 3 shows a detailed block diagram of the interaction of hierarchical partially observable Markov decision process (H-POMDP) agents according to an embodiment. FIG. 4 shows a block diagram of a computing system according to some embodiments. FIG. 5 shows a block diagram of a computing system according to some embodiments. FIG. 6 shows a block diagram of a computing system according to some embodiments. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments.
However, various embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Further, various aspects of the embodiments may be performed using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean either hardware, software, or some combination thereof. As mentioned above, a high performance computing (HPC) system can include a large number of nodes joined by a high speed network fabric for distributed computing. As described herein, a "node" generally refers to a computational element (which may include one or more processors, such as general purpose processors, graphics processors, etc.), a connection to the network fabric, and in some cases memory, IO (input/output) devices, or other components such as login or service components. In general, an application (also referred to herein as a "job") is divided into tasks that run simultaneously across a large number of nodes (e.g., tens of thousands) in an HPC system. One or more tasks may be mapped to each node, and a single task may run across one or more cores. A task can consist of the same program operating on different data from the same problem set. Tasks are broken down into sequential milestones, and all tasks are expected to complete the computation between milestones within the same amount of time, leading to a so-called bulk synchronous style of computation. At milestones, tasks can synchronize through operations such as global barriers. Unfortunately, if any core or node completes its work between synchronizations more slowly than the others (for any one of many reasons), the entire application stops progressing until the slowest task completes its work.
When this happens, the application loses potential performance and power is wasted on the cores or nodes that have to wait. There are many names for this problem, including load imbalance, application or operating system jitter, and performance variation. Load imbalance has numerous causes, including static factors such as manufacturing variability that leads to a distribution of hardware component performance, and dynamic factors such as page faults that occur at different times on different cores, operating system interference that affects only some cores, recoverable hardware errors that temporarily impair only one core or node, and an uneven division of work between the tasks within an application. As HPC systems continue to grow in size and complexity, load imbalance is becoming a significant source of performance degradation and power waste. Manufacturing variability is a particular problem. Modern processors cannot run floating-point-intensive workloads at maximum core frequency without exceeding thermal design and power limits. In addition, two processors of the same model and stepping may require different power to achieve the same core frequency. Industry expects processor performance variability to exceed 20% at a given power budget. To this end, some embodiments provide holistic global performance and power management. More specifically, a new performance and power management framework is described that coordinates software and hardware policies across (e.g., all) nodes within a job while managing the job towards a configurable objective function (e.g., maximizing performance within a job power cap, maximizing efficiency within a job power cap, etc.).
One use of the framework is to solve the load imbalance problem described above. Additionally, some embodiments provide a holistic global performance and power management (HGPPM) framework that coordinates performance and power management decisions across (e.g., all) nodes within a job while managing the job towards a job power cap or another configurable objective, such as maximizing performance, maximizing efficiency (e.g., minimizing the energy-delay product), maximizing performance while managing towards the job power cap, or maximizing efficiency while managing towards the job power cap. The HGPPM technique is based, at least in part, on a hierarchical feedback-guided control system implemented by a fast hierarchical partially observable Markov decision process (H-POMDP) reinforcement learning (RL) method. Such embodiments steer power across hierarchical system domains and introduce important features for coordinating a wider range of optimizations across software and hardware abstraction boundaries, as a result of which application load imbalance can be alleviated. For example, in some embodiments, HGPPM can achieve higher performance or efficiency by simultaneously tuning the power allocation between hierarchical system domains to achieve load balance and optimizing the selection of application algorithms from a repertoire to find the best execution option for a given system architecture, problem size, or power allocation. In addition, the techniques described herein can be used in any type of computing device, including non-mobile computing devices (such as desktops, workstations, servers, and rack systems, including those discussed with reference to FIGS. 1-6) and mobile computing devices (such as smartphones, tablets, UMPCs (ultra-mobile personal computers), laptop computers, Ultrabook® computing devices, smart watches, smart glasses, etc.). More specifically, FIG. 1 shows a block diagram of a computing system 100 according to an embodiment. FIG. 1 is a schematic representation and is not meant to reflect a physical configuration. The system 100 includes one or more processors 102-1 to 102-N (generally referred to herein as "processors 102" or "processor 102"). The processors 102 can communicate via an interconnect (fabric) 104. It is also possible for one or more processors to share an interconnect/connection to the fabric. Each processor can include various components, but for clarity only some of them are described with reference to processor 102-1. Each of the remaining processors 102-2 to 102-N can include components that are the same as or similar to those described with reference to processor 102-1. In an embodiment, processor 102-1 includes one or more processor cores 106-1 to 106-M (referred to herein as "cores 106" or more generally as "core 106"), a cache 108 (which, in various embodiments, can be a shared cache or a private cache), and/or a router 110. The processor cores 106 can be implemented on a single integrated circuit (IC) chip, or on multiple integrated circuits in the same package. Further, the chip can include one or more shared and/or private caches (such as cache 108), buses or interconnects (such as bus or interconnect 112), logic 150, memory controllers (such as those described with reference to FIGS. 4-6, including for flash memory or NVM (non-volatile memory) such as an SSD (solid state drive)), or other components. In other embodiments, the components of FIG. 1 can be arranged differently; for example, the router can be outside the processor while the VR, memory controller, and main memory can be inside the processor. In one embodiment, the router 110 can be used for communication between various components of processor 102-1 and/or system 100. Further, the processor 102-1 may include more than one router 110.
In addition, the numerous routers 110 may be in communication to enable data routing between various components inside or outside of processor 102-1. In some embodiments, where there are numerous routers, some of them can be inside the processor and some outside. The cache 108 can store data (e.g., including instructions) used by one or more components of processor 102-1, such as the cores 106. For example, the cache 108 can locally cache data stored in a (volatile and/or non-volatile) memory 114 (also interchangeably referred to herein as "main memory") for faster access by the components of processor 102. As shown in FIG. 1, the memory 114 can communicate with the processors 102 via the interconnect 104. In an embodiment, the cache 108 (which can be shared) can have various levels; for example, the cache 108 can be a mid-level cache and/or a last-level cache (LLC) (e.g., L1, L2 cache, etc.). In addition, each core 106 can include a level 1 (L1) cache 116-1 (generally referred to herein as "L1 cache 116"). Various components of processor 102-1 can communicate with the cache 108 directly, via a bus fabric (e.g., bus 112), and/or via a memory controller hub. The system 100 can also include a (e.g., platform) power source 125 (e.g., a direct current (DC) power source or an alternating current (AC) power source) to provide power to one or more components of the system 100. The power source 125 may include a PV (photovoltaic) panel, a wind generator, a thermal generator, a water/hydro turbine, and the like. In some embodiments, the power source 125 includes one or more battery packs (e.g., charged by one or more of a PV panel, wind generator, thermal generator, water/hydro turbine, or plug-in power supply (e.g., coupled to an AC power grid)) and/or a plug-in power supply. The power source 125 can be coupled to the components of the system 100 through a voltage regulator (VR) 130. Moreover, even though FIG. 1 illustrates one power source 125 and a single voltage regulator 130, additional power sources and/or voltage regulators can be utilized. For example, one or more of the processors 102 may have a corresponding voltage regulator and/or power source. Also, the voltage regulator 130 may be coupled to the processor 102 via a single power plane (e.g., supplying power to all of the cores 106) or multiple power planes (e.g., where each power plane may supply power to a different core or group of cores and/or to other components of the system 100). Additionally, although FIG. 1 illustrates the power source 125 and the voltage regulator 130 as separate components, the power source 125 and the voltage regulator 130 can be incorporated into other components of the system 100. For example, all or portions of the VR 130 can be incorporated into the power source 125, an SOC (such as those described with reference to FIG. 6), and/or the processor 102. As shown in FIG. 1, the memory 114 can be coupled to other components of the system 100 through a memory controller 120. The system 100 also includes logic 150 to facilitate and/or perform one or more operations with reference to the HGPPM techniques/embodiments described herein. For example, the logic 150 can perform operations corresponding to performance and/or power management of one or more compute nodes and/or components of the system 100 (e.g., processor 102, memory controller 120, memory 114 (also sometimes referred to herein as "external memory"), caches 116, 108, and/or interconnects/fabrics 104, 112, etc.). Further, even though the logic 150 is shown in a particular location within the system 100, the logic 150 can be located elsewhere in the system 100. In addition, embodiments provide a scalable, dynamic technique that coordinates performance and power management policies across all nodes in a job and across the software and hardware abstraction layers, while holistically managing the job towards a configurable objective function.
The objective function can include, but is not limited to, maximizing performance while satisfying the power cap, or minimizing performance variation between computational elements (nodes or cores) while satisfying the power cap (i.e., mitigating load imbalance). Such techniques are collectively referred to herein as holistic global performance and power management (HGPPM), which in one embodiment is based at least in part on hierarchical machine learning algorithms. Traditional HPC power managers have many limitations. First, by applying a uniform power cap to each node, they exacerbate the performance differences between nodes: the frequency of each node becomes non-uniform, and application performance degrades. Industry generally expects node performance to vary by more than 20% at a given power cap, so it is important to mitigate these performance differences rather than exacerbate them. Second, traditional power managers lack the ability to coordinate software and hardware policies. Unfortunately, software and hardware policies have historically been tuned through independent control systems. This leads to interference between the control systems, with poor results. It is important to coordinate software and hardware policies under a unified control system. Third, traditional HPC power managers lack scalability. They employ centralized designs that will not be able to coordinate policies across the large numbers of nodes (e.g., tens of thousands) in future systems. A radically different design is needed. Finally, traditional HPC power managers lack flexibility. To meet the extreme performance and efficiency challenges of exascale systems, new policy knobs need to be designed and more opportunities need to be exploited to optimize the system. Current solutions lack a power manager framework that enables understanding and control of new policies.
Also, while flexibility is required in programming the performance-versus-power trade-offs that power managers make (for example, in some cases efficiency takes precedence over performance), traditional power managers tend to support only objective functions biased towards performance. Current management technology is too brittle. The HGPPM embodiments are a breakthrough that simultaneously solves the load imbalance, scalability, and flexibility problems while introducing key features for coordinating software and hardware policies. These are considered to be important requirements for improving the performance and efficiency of exascale systems. More specifically, HGPPM improves on the prior art in several important ways. HGPPM introduces the ability to detect and mitigate load imbalance within a job by rebalancing the power allocated to computational elements to equalize their loads. HGPPM is the first technique that can mitigate the various causes of load imbalance, including manufacturing variability, application or operating system jitter, hardware errors, and intrinsic causes of imbalance such as an application or operating system that does not divide work evenly among computational elements (also interchangeably referred to herein as "nodes" or "cores"). In addition, at least one embodiment provides this load balancing technique synergistically with managing the job towards a power cap. The HGPPM embodiments also introduce the following important new features: (a) coordination of policy optimization across software and hardware abstraction boundaries; (b) extensibility to new types of policies, with robust techniques for policy optimization; and/or (c) flexibility through support for management towards configurable objective functions. No other performance or power manager supports these features while simultaneously scaling to tune policies across all computational elements in a job.
The scalability, robustness, and flexibility of such embodiments are groundbreaking. Examples of new policies and optimizations enabled by the HGPPM embodiments include, but are not limited to: (a) tuning the application for better performance or efficiency through a new policy knob that controls the number of cores available to each application task; and (b) tuning the processor for better performance or efficiency through a new policy knob that controls how aggressively the processor performs arithmetic operations, memory prefetch operations, and the like. The design of new policies and optimizations is considered important to meeting the performance and efficiency challenges of exascale systems, and the HGPPM embodiments are the first performance and power management framework in which such optimizations can be orchestrated. In addition, HGPPM's hierarchical learning framework further improves scalability and increases the responsiveness of load balancing, for better application performance or efficiency. In an embodiment, HGPPM is used to globally coordinate performance and power policies across (e.g., all) nodes within a job while managing the job towards a configurable objective function. One HGPPM instance is bound to each job. The HGPPM approach solves the scalability challenge of determining policies for the many nodes in a job (e.g., tens of thousands of nodes) by adopting a hierarchical design based on hierarchical partially observable Markov decision process (H-POMDP) machine learning. In particular, one embodiment employs a hierarchy of identical reinforcement learning agents. As described herein, the terms "node", "core", "computational element", and the like are used interchangeably; each such unit refers to a computing component that can perform one or more of the operations described herein. Reinforcement learning agents optimize policies through interaction with the environment and empirical experiments, not through models.
They continuously evaluate the outcomes of actions in order to adapt their behavior towards the best outcomes. Here, the quality of an outcome is defined by the objective function. Experiments are selected in a systematic way that efficiently navigates the space of all possible policy options. According to embodiments, the reinforcement learning algorithm employs a technique called the stochastic policy gradient to navigate efficiently while still achieving good or acceptable results. In the H-POMDP, each agent is identical and operates independently on a sub-problem of the overall problem, but the sub-problems are hierarchically defined such that all agents use the same objective function and the decisions of one agent constrain the option space available to its children. In this way, a parent guides the behavior of its children to help them identify the best policies more quickly, or precludes the children from choosing particular policy options. This hierarchical approach coordinates performance and power policy decisions from root to leaf across the tree hierarchy of reinforcement learning agents. FIG. 2 shows a block diagram of an HGPPM system according to an embodiment. FIG. 2 shows a tree with a depth of three, but this is for illustrative purposes only; deeper or shallower tree depths can be used in various embodiments. In embodiments, each compute node of the system shown in FIG. 2 may include one or more components as described for the computing systems of FIGS. 1, 4, 5, and/or 6. More specifically, holistic coordination is made possible by a scalable, hierarchical k-ary tree design. System policies allocate power among hierarchically decomposed domains (e.g., among cabinets 202-0 to 202-m (where a "cabinet" generally refers to multiple nodes), and then among the nodes within each cabinet). As further described herein, good global power allocation decisions are obtained via the H-POMDP reinforcement learning agents running at each node of the tree.
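The hierarchical budget decomposition just described (job, then cabinets, then nodes) can be sketched as follows. This is an illustration only: the even split at each level is an assumption for simplicity, whereas in HGPPM the split at each level would instead be chosen by that level's learning agent, and the node names are invented.

```python
# Hypothetical sketch of power-budget distribution over a hierarchical
# tree such as FIG. 2. The even split and names are assumptions.

def distribute_budget(tree, budget_w):
    """Recursively split a power budget from the root (job) through
    cabinets down to leaf compute nodes.

    tree is either a leaf name (str) or a list of subtrees.
    Returns a {leaf_name: watts} mapping."""
    if isinstance(tree, str):
        return {tree: budget_w}
    out = {}
    for subtree in tree:
        # Even split here; a learning agent would bias these shares.
        out.update(distribute_budget(subtree, budget_w / len(tree)))
    return out

# A job spanning two cabinets, with two and three nodes respectively.
job = [["node00", "node01"], ["node10", "node11", "node12"]]
```

With a 1200 W job budget, each cabinet receives 600 W; the first cabinet's two nodes get 300 W each, and the second cabinet's three nodes get 200 W each, so the cap is conserved at every level of the tree.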
Aggregation of power and/or performance telemetry (flowing back up the tree from leaf to root) and distribution of control (parent to child) are provided by a scalable overlay network (SCON) 204. The SCON is a logical network implemented on top of a physical network in the HPC system. The physical network may be an in-band network used by the application (e.g., the network fabric) or an out-of-band network such as Ethernet (e.g., in accordance with the IEEE 802.3 standard). In one embodiment, the physical network may be the same as or similar to the network 403 described with reference to FIGS. 4 and 5. In FIG. 2, "DN" refers to a dedicated node (e.g., a node reserved and not used by the application), and the small box within each DN and compute node represents an H-POMDP agent. As described herein, in embodiments, the H-POMDP agent can be a reinforcement learning agent. As described herein, per-core policies (such as the power allocation to each core or portion of a core) can be provided. As shown in FIG. 2, each compute node can include one or more cores. Also, in an embodiment, the leaf H-POMDP agents are responsible for coordinating any policies within a compute node, which may include per-core policy coordination. In one embodiment, a stochastic policy gradient technique is used. Each node in the HGPPM tree implements a reinforcement learning POMDP algorithm and applies the policy gradient method to search for the optimal policy. For example, a policy can be thought of as a probability distribution over a set of discrete knob settings; for instance, one knob comprises the set of choices for how a parent agent can allocate power among its children. A policy is evaluated by sampling a knob setting from the distribution, testing it several times, and measuring the resulting reward. The policy is improved through a method similar to gradient ascent, in an embodiment.
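The policy representation and improvement step described above can be sketched minimally as follows. This is a hedged illustration: the source specifies a Natural Actor-Critic implementation, whereas the update below is a simpler REINFORCE-style step; the reward function, step size, and baseline are assumptions for illustration.

```python
import math
import random

# Hypothetical sketch of a stochastic softmax policy over discrete knob
# settings with a plain policy-gradient step (a simplification of the
# Natural Actor-Critic method named in the text).

def softmax(theta):
    """Probability distribution over n knob settings from real weights."""
    m = max(theta)  # subtract max for numerical stability
    e = [math.exp(t - m) for t in theta]
    z = sum(e)
    return [x / z for x in e]

def policy_gradient_step(theta, reward_fn, alpha=0.1, baseline=0.0):
    """Sample a knob setting (action a), observe reward r, and step
    theta along grad log pi(a), which for softmax is e_a - pi."""
    pi = softmax(theta)
    a = random.choices(range(len(theta)), weights=pi)[0]
    r = reward_fn(a)
    grad = [(1.0 if k == a else 0.0) - pi[k] for k in range(len(theta))]
    return [t + alpha * (r - baseline) * g for t, g in zip(theta, grad)]
```

For example, if the reward function favors knob setting 2, repeated steps concentrate the sampling distribution on that setting while still occasionally exploring the others, which reflects the exploration/exploitation balance the text attributes to stochastic policies.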
To use a gradient-based method, the policy must be differentiable, which can be achieved by adopting a stochastic softmax policy. The gradient of the reward with respect to the policy is then estimated, and the policy can be stepped in the gradient direction to move toward the policy that maximizes the reward. Stochastic policies can be adopted to avoid the "local maxima pitfall" of simpler gradient methods and to balance the trade-off between exploration and exploitation. In one embodiment, the following operations can be used to implement a stochastic policy gradient. The stochastic policy gradient algorithm can be implemented using the Natural Actor-Critic framework, where a is an action (also known as a knob setting), r is a reward, and α is a step size. As mentioned above, a stochastic softmax policy can be used to make the policy differentiable and suitable for gradient-based learning. In particular, the exponential family parameterization of the multinomial distribution is used, giving each knob i a set of real weights θi. The probability of obtaining knob setting j out of n possibilities when sampling from the probability distribution for knob i is given by πi(j) = exp(θij) / Σk=1..n exp(θik). The gradient required for the Natural Actor-Critic algorithm can be calculated (efficiently) as ∇θi log πi(j) = ej − πi. Here, πi is the current probability distribution over the settings that knob i can take, t is the time step of the algorithm, ∇ is the gradient operator, ∇θ is the gradient with respect to θ, and ej is a zero vector with a 1 at the index given by the sampled setting j. As described herein, HGPPM can be applied to correct application load imbalances by balancing power between the nodes in a job. As an example of how the load balancing problem can be decomposed hierarchically, the load balancing problem for the entire job is first decomposed into load balancing among the cabinets used in the job, and then, within each cabinet,
it is divided into load balancing among the nodes, then load balancing among the tasks mapped to each node, and then load balancing among the cores executing each task. At each granularity, performance is dynamically compared, and power is moved from leading computational elements to lagging ones (guided by arrival at the next milestone or barrier in the sequence) in order to maximize or improve application performance. One embodiment maps the job load balancing problem onto the reinforcement learning abstraction by defining an objective function such that a) performance discrepancies among the child agents are penalized, and b) each agent is rewarded for learning the best distribution of its input power budget among its children so that aggregate performance is maximized. Here, the aggregate performance is taken to be the minimum performance obtained by any child agent, and the performance of each child agent is the average or median (or another function) of several samples. Each agent learns how to divide its input budget (from its parent) among its children to get the best results from them. The children, in turn, take their budgets and divide them among their own children, and so on. Decisions at the bottom level of the tree specify how the hardware should split power between different types of processors and external memory resources. Performance can be defined based on many metrics. For example, at the lowest level of the H-POMDP tree, the metric can be the core frequency, the progress of each core toward the next application milestone (provided to HGPPM via programmer-created annotations, automatically inferred by performance counter analysis, or obtained by other means), the runtime of the application phase completed so far between milestones, the rate of retired instructions, the rate of main memory accesses, and so on.
In general, the objective function evaluated by each reinforcement learning agent in the H-POMDP is an aggregate of the objective function values of its children. Many aggregations are possible, including the minimum, mean, variance, etc. of the children's objective function values. In one embodiment, if the objective is to maximize performance, node performance can be defined as the minimum performance of any active core of a processor within the node, cabinet performance can be defined as the minimum performance of any active node in the job, and job performance can be defined as the minimum performance of any active cabinet in the job. Aggregate calculations can be performed by the HGPPM technology and/or with the assistance of the SCON within the HPC system. As described herein, embodiments of HGPPM can coordinate different types of policies (beyond power budgets) and can coordinate multiple types at once. In this mode, the HGPPM technology composes the policies into a joint policy. The reinforcement learning agents experiment with joint policy options and optimize the joint policy according to the objective function. As one example, consider a hierarchy that ends with one reinforcement learning agent per node, and consider the case where that agent is entrusted with jointly learning two policies: how to divide the node's power budget among the various types of hardware resources on the node, and how many threads/cores each software process on the node should use. The agent creates a joint policy with one option for each combination of a power budget option and a parallelism option. The learning agent tests a new power budget option and a new parallelism option together, measures the combined effect on the objective function, and navigates toward the best joint policy over time. FIG. 3 shows a magnified view of the leaf H-POMDP RL agent within one of the compute nodes of FIG.
2. Interactions among the RL agent, the application, and the processor within a node are shown, including the inputs to the RL agent (labeled Observables in FIG. 3) and the new policy settings output by the RL agent. The figure illustrates the management of the policy examples described above: the number of threads per application process and the division of the node power budget among hardware resources. In one embodiment, the H-POMDP RL agent captures performance and phase signals from the application. In other embodiments, the performance and phase signals may be automatically inferred by the HGPPM (as described above) without programmer annotations in the application. From the processor, the H-POMDP RL agent captures an energy signal. The output of the RL agent is a new policy setting (eg, a new setting for the number of threads per application process, or a new subdivision of the node power budget among the hardware components of the node). The observables are synthesized in various ways to define the desired objective function. The objective function is evaluated and fed to the learning algorithm as a reward (as described above). The RL agent uses this reward signal to measure the impact of different policy settings. RL agents explore the policy space by enacting policies (in FIG. 3, the output policy settings are labeled as Actions) and measuring their impact on the observables and the reward signal over time. As mentioned above, the RL agent navigates the policy space over time, searching in an efficient manner until it identifies the best policy setting. FIG. 3 is an example of an embodiment of HGPPM in which multiple policies are composed. The size of the search space can grow exponentially as more policies are composed, and there may also be dependencies between policies that complicate the search space.
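The joint-policy construction discussed above can be sketched as follows, using assumed option lists: one joint option is formed per combination of a power-split choice and a thread-count choice, which makes the multiplicative growth of the composed option space concrete.

```python
# Sketch of composing two policies into one joint policy. The option
# values below are assumptions chosen purely for illustration.

from itertools import product

power_options = [0.25, 0.5, 0.75]   # share of the node budget given to cores
thread_options = [1, 2, 4, 8]       # threads per software process

# One joint option per (power split, thread count) combination.
joint_options = list(product(power_options, thread_options))

# The learner keeps one probability per joint option, starting uniform.
joint_policy = [1.0 / len(joint_options)] * len(joint_options)
```

With 3 power options and 4 parallelism options the joint space already has 12 entries; each additional composed policy multiplies the count again, which is why the gradient-based search described next matters.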
The joint policy described above (dividing the node power budget among node hardware resources and setting the number of threads/cores per software process) is an example of a joint policy with complicated interdependencies. The optimal power partitioning between resources depends on the balance of communication, memory, and computation within the application process, which in turn depends on the number of threads/cores employed by the application process; meanwhile, the optimal number of threads/cores for the application process depends on the available communication, memory, and compute bandwidth (and the amount of bandwidth depends on how much power each resource is allocated). One embodiment of HGPPM scales to handle large search spaces with complex trade-offs by adopting the stochastic policy gradient reinforcement learning technique. The stochastic policy gradient technique estimates the gradient of the objective function metric with respect to the policy and then steps in the gradient direction. In the next round, the joint policy option to be tried is one step away (in the gradient direction) from the previous one. The stochastic policy gradient method does not exhaustively explore the entire space; by trying only in directions where results are expected to improve and taking steps in the gradient direction, it navigates the exponential search space efficiently. One drawback of gradient-based search techniques is that they tend to assume the search space is convex; if it is not, such a method is not guaranteed to land on the globally optimal decision. Therefore, a stochastic policy gradient algorithm may be adopted instead of the ordinary policy gradient algorithm. Instead of choosing the next policy option based solely on the gradient direction, probabilities are assigned to all options and the next option is sampled from the distribution. With non-zero probability, steps are taken in directions not indicated by the gradient.
By doing so, this embodiment can escape local extrema. In the stochastic policy gradient algorithm, instead of learning the single policy option that maximizes the objective function, the policy distribution that maximizes it is learned. Gradient steps tend to update the probability distribution to assign more probability in the direction that matches the gradient, but there always remains some probability of choosing other options. The approach adopted in one embodiment also addresses three classical challenges of using reinforcement learning techniques and the H-POMDP. The first challenge involves reinforcement learning balancing the exploration of new areas of the policy space against the exploitation of the best known policy. An exploration-heavy method spends most of its time using suboptimal policies, while an exploitation-heavy method can settle on a suboptimal policy, leaving potential benefits on the table. The stochastic policy gradient technique used in some embodiments ensures that new regions of the search space are tried, because all policy options have non-zero probability in the policy distribution. However, updating the distribution to take gradient steps and add weight in promising directions risks gradually creating a strong bias that over-commits to the "best known" option and ruins exploration. To counter this, one embodiment incorporates a regularizer component that counterbalances the over-bias by slowly pushing the distribution back toward uniform. The strength of these opposing forces is systematically adjusted at runtime. One embodiment measures how stable and predictable the relationship between the objective function metric and the policy is: the more stable and predictable the relationship, the stronger the force of the gradient steps (note: the amount of bias added at each step is constant, but frequent gradient steps make the effective force stronger).
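The regularizer component described above can be sketched as follows, with an assumed mixing form: the policy distribution is nudged back toward uniform by a small amount, so no option's probability collapses to zero and exploration is preserved even after many gradient steps in one direction.

```python
# Sketch of a regularization step that pushes a policy distribution toward
# uniform. The mixing form and the example values are assumptions.

def regularize(policy, strength):
    """Mix the distribution with the uniform one; strength in [0, 1]."""
    n = len(policy)
    return [(1.0 - strength) * p + strength / n for p in policy]

# A heavily biased three-option policy regains a probability floor.
smoothed = regularize([0.98, 0.02, 0.0], 0.3)
```

The `strength` parameter plays the role of the regularization bias whose force competes with the gradient-step bias at runtime, as described above.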
If the relationship is unstable or unpredictable, regularization wins. The second problem affects not only reinforcement learning control systems but all control systems: the noise problem. Embodiments solve the noise problem partly through the opposing-force mechanism described above and partly through digital signal processing techniques. In particular, noise makes the relationship between the objective function metric and the policy poorly predictable. An embodiment takes a gradient step only when noise has recently been at a low level and the relationship is predictable (in other words, when the gradient result can be trusted to guide toward a better policy). During periods of high noise, regularization prevails. One embodiment has free parameters that set the bias strength for each gradient step and each regularization step. These parameters are set such that, as long as the application experiences periods with a stable and predictable relationship between the objective function metric and the policy, the gradient step bias generally dominates and the policy distribution approaches the optimum over time. Many methods can be used to set the bias and regularization step sizes; some are manual but follow standard techniques, and some are automatic online methods. Another mechanism used by some embodiments to solve the noise problem is digital signal processing. Many signal processing methods can be used. In one example, a moving average or median filter is applied to the objective function signal. This type of filter tends to smooth the signal using a short history of previous values. It is also possible to apply a filter to the inputs of the objective function.
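The filtering step described above can be sketched as a moving median over a short history of objective-function values, which suppresses a transient spike before the value is used as a reward. The window length is an assumed free parameter, and the same filter can equally be applied to the objective function's inputs.

```python
# Sketch of a moving median filter over an objective-function signal.
# A median, unlike a mean, removes a single outlier entirely.

import statistics
from collections import deque

def median_filter(signal, window=3):
    """Running median of `signal` over the last `window` values."""
    history, out = deque(maxlen=window), []
    for value in signal:
        history.append(value)
        out.append(statistics.median(history))
    return out

# A single noise spike in the objective signal is removed completely.
smoothed = median_filter([1.0, 1.0, 9.0, 1.0, 1.0])
```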
This can be advantageous when the objective function divides by a signal of relatively small amplitude, because noise in the denominator is amplified by the division operation. A third challenge is ensuring control stability despite the coordination being distributed throughout the H-POMDP reinforcement learning hierarchy. Consider the following example. Suppose a parent changes to a new power budget before a child has had the opportunity to explore the optimal division of that budget among its own children. If that happens, the parent may base its inference about how well the power budget works on incomplete data from its children. In reality, a parent may never get complete data from its children, and the data are stochastic; still, unless the children are given time to find a good division of the budget, the H-POMDP may never converge on a good overall policy, or may take too long to do so. There are many solutions to this problem in various embodiments. One example involves arranging the reinforcement learning agents to operate at predetermined time intervals whose duration becomes coarser as one moves up the hierarchy (from leaf to root). Another approach is to allow the adjustment timescales to be self-configuring across the agent hierarchy: each level of the hierarchy runs as fast as possible, but its speed is rate-limited to ensure accuracy. That is, a parent blocks waiting for input from its children, and a child sends the performance or other information that the parent needs to evaluate its objective function only once the child has achieved a good policy (eg, a good allocation of power). As a result, a parent cannot set a new policy (eg, new power budgets for its children) before the children are ready. This self-configuring strategy has the advantage of maximizing the responsiveness of overall policy optimization. According to some embodiments, there are many ways to determine when a good policy has been reached. One standard method is a convergence test.
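A minimal form of such a test can be sketched as follows, using the k and epsilon free parameters of the embodiment (the policy-change measure, a maximum absolute difference here, is an assumption): a policy is declared good once it has changed by less than epsilon over the last k iterations.

```python
# Sketch of a convergence test over a history of policy distributions.
# k and epsilon are the free parameters described in the text.

def has_converged(history, k, epsilon):
    """True if each of the last k policy updates moved less than epsilon."""
    if len(history) < k + 1:
        return False                    # not enough iterations observed yet
    recent = history[-(k + 1):]
    changes = [max(abs(a - b) for a, b in zip(old, new))
               for old, new in zip(recent, recent[1:])]
    return all(change < epsilon for change in changes)

# A policy distribution that is settling down passes the test.
settling = [[0.5, 0.5], [0.6, 0.4], [0.601, 0.399], [0.602, 0.398]]
```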
That is, if the policy change over the last k iterations is less than epsilon, a good policy has been reached; k and epsilon are free parameters that can be tuned via an offline manual procedure. At the final level of the H-POMDP hierarchy, the reinforcement learning agents have no children. At this hierarchy level, agents can choose when to sample the objective function metrics and try new policy options. The objective function can be sampled, for example, on a phase change event, at a fixed time interval that is coarser than the phase duration, or at a fixed time interval that is finer than the phase duration. A new policy can be tried after one or more samples have been collected, and the number of samples taken for each trial can be variable or fixed. In addition, some embodiments can be synergistic with phase-based optimization. Phase-based optimization instantiates one copy of the state of each reinforcement learning agent for each application phase. A similar or identical H-POMDP hierarchy of reinforcement learning agents can be used, but the agents operate differently depending on which phase the application is in. At any given time, the embodiment determines the current application phase and loads the corresponding state. The reinforcement learning agents are tasked with optimizing the same objective function in all phases, but different phases may end up with their own policies. The current phase and the definition of the per-phase policies can be determined in many ways, as described above. Some examples include obtaining information from the programmer through annotations on the application (or another software layer). The current phase can also be inferred through dynamic analysis of the activity within different computational resources (eg, through the use of hardware event counters). FIG. 4 shows a block diagram of a computing system 400 according to an embodiment.
The computing system 400 can include one or more central processing units (CPUs) 402 or processors that communicate over an interconnect network (or bus) 404. The processors 402 can include a general-purpose processor, a network processor (which processes data communicated via a computer network 403), an application processor (such as those used in mobile phones and smartphones), or another type of processor (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). Various types of computer networks 403 can be utilized, including wired networks (eg, Ethernet, Gigabit, fiber, etc.) and wireless networks (cellular, 3G (third-generation mobile phone technology or third-generation wireless format (UWCC)), 4G (fourth-generation wireless/mobile communication), Low Power Embedded (LPE), etc.). In addition, the processors 402 can have a single- or multi-core design. Processors 402 with a multi-core design can integrate different types of processor cores on the same integrated circuit (IC) die. Also, processors 402 with a multi-core design can be implemented as symmetric or asymmetric multiprocessors. In embodiments, one or more of the processors 402 may be the same as or similar to the processor 102 of FIG. 1. For example, one or more of the processors 402 may include one or more cores 106 and/or cache 108. Also, the operations described with reference to FIGS. 1 to 3 can be performed by one or more components of the system 400. A chipset 406 can also communicate with the interconnect network 404. The chipset 406 can include a graphics and memory control hub (GMCH) 408. The GMCH 408 can include a memory controller 410 (which may be the same as or similar to the memory controller 120 of FIG. 1) that communicates with the memory 114. The system 400 can also include the logic 150 at various locations (such as those shown in FIG. 4, although it can be present at other locations within the system 400 (not shown)).
The memory 114 can store data, including sequences of instructions that are executed by the CPU 402 or any other device included in the computing system 400. In one embodiment, the memory 114 can include one or more volatile/non-volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or a hard disk, or other types of storage devices such as nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM), flash memory, spin transfer torque random access memory (STTRAM), resistive random access memory, phase change memory (PCM), 3D cross-point memory, or a solid state drive (SSD) with NAND/NOR memory. Additional devices, such as multiple CPUs and/or multiple system memories, can communicate via the interconnect network 404. The GMCH 408 can also include a graphics interface 414 that communicates with a graphics accelerator 416. In one embodiment, the graphics interface 414 can communicate with the graphics accelerator 416 via an Accelerated Graphics Port (AGP) or Peripheral Component Interconnect (PCI) (or PCI Express (PCIe)) interface. In an embodiment, a display device 417 (a flat panel display, a touch screen, etc.) can communicate with the graphics interface 414 via, for example, a signal converter that translates a digital representation of an image stored in a storage device, such as video memory or system memory, into display signals that are interpreted and displayed by the display. The display signals generated by the signal converter can pass through various control devices before being interpreted by and subsequently displayed on the display device 417. A hub interface 418 can allow the GMCH 408 to communicate with an input/output control hub (ICH) 420. The ICH 420 can provide an interface to I/O devices that communicate with the computing system 400.
The ICH 420 can communicate with a bus 422 via a peripheral bridge (or controller) 424, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or another type of peripheral bridge or controller. The bridge 424 can provide a data path between the CPU 402 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses can communicate with the ICH 420, eg, via multiple bridges or controllers. In addition, other peripherals that communicate with the ICH 420 can include, in various embodiments, integrated drive electronics (IDE) or Small Computer System Interface (SCSI) hard drives, USB ports, a keyboard, a mouse, parallel ports, serial ports, floppy disk drives, digital output support (eg, a Digital Video Interface (DVI)), or other devices. The bus 422 can communicate with an audio device 426, one or more disk drives 428, and a network interface device 430 (which can communicate with the computer network 403 via a wired or wireless interface). As shown, the network interface device 430 can be coupled to an antenna 431 to communicate wirelessly with the network 403 (eg, via an IEEE 802.11 interface (IEEE 802.11a/b/g/n, etc.), a cellular interface, 3G, 4G, LPE, etc.). Other devices can communicate via the bus 422. Also, in some embodiments, various components (such as the network interface device 430) can communicate with the GMCH 408. In addition, the processor 402 and the GMCH 408 can be combined to form a single chip. Furthermore, in other embodiments, the graphics accelerator 416 may be included within the GMCH 408. In addition, the computing system 400 may include volatile and/or non-volatile memory (or storage). For example, non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), a disk drive (eg, 428), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disc (DVD), flash memory, a magneto-optical disk, or other types of non-volatile machine-readable media capable of storing electronic data (including, for example, instructions). FIG. 5 shows a computing system 500 arranged in a point-to-point (PtP) configuration according to an embodiment. In particular, FIG. 5 shows a system in which processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations described with reference to FIGS. 1 to 4 can be performed by one or more components of the system 500. As shown in FIG. 5, the system 500 can include several processors, of which only two, processors 502 and 504, are shown for clarity. The processors 502 and 504 can each include a local memory controller hub (MCH) 506 and 508 to allow communication with memories 510 and 512, respectively. The memories 510 and/or 512 can store various data, as described with reference to the memory 114 of FIGS. 1 and/or 4. Also, in some embodiments, the MCHs 506 and 508 can include the memory controller 120 and/or the logic 150 of FIGS. 1-4. In an embodiment, the processors 502 and 504 can be one of the processors 402 described with reference to FIG. 4. The processors 502 and 504 can exchange data via a point-to-point (PtP) interface 514 using PtP interface circuits 516 and 518, respectively. The processors 502 and 504 can also each exchange data with a chipset 520 via individual PtP interfaces 522 and 524 using point-to-point interface circuits 526, 528, 530, and 532. The chipset 520 can exchange data with a high-performance graphics circuit 534 via a high-performance graphics interface 536, eg, using a PtP interface circuit 537. As discussed with reference to FIG. 4, in some embodiments, the graphics interface 536 can be coupled to a display device (eg, the display 417). As shown in FIG. 5, one or more of the cores 106 and/or the cache 108 of FIG. 1 can be located within the processors 502 and 504.
However, other embodiments may exist in other circuits, logic units, or devices within the system 500 of FIG. 5. In addition, other embodiments can be distributed across several of the circuits, logic units, or devices shown in FIG. 5. The chipset 520 can communicate with a bus 540 using a PtP interface circuit 541. The bus 540 may have one or more devices communicating with it, such as a bus bridge 542 and I/O devices 543. Via a bus 544, the bus bridge 542 can communicate with other devices such as a keyboard/mouse 545, communication devices 546 (such as modems, network interface devices, or other communication devices that can communicate with the computer network 403, eg, via the antenna 431, as described with reference to the network interface device 430), audio I/O devices, and/or a data storage device 548. The data storage device 548 can store code 549 that can be executed by the processors 502 and/or 504. In some embodiments, one or more of the components described herein can be embodied on a system on chip (SOC) device. FIG. 6 shows a block diagram of an SOC package according to an embodiment. As shown in FIG. 6, the SOC 602 includes one or more central processing unit (CPU) cores 620, one or more graphics processor unit (GPU) cores 630, an input/output (I/O) interface 640, and the memory controller 120. Various components of the SOC package 602 can be interconnected or coupled to a bus/network, such as the SCON 204 described herein with reference to other figures. Also, the SOC package 602 can include more or fewer components, as described herein with reference to other figures. In addition, each component of the SOC package 602 can include one or more other components, eg, as described with reference to other figures herein. In one embodiment, the SOC package 602 (and its components) is provided on one or more integrated circuit (IC) dies, eg, which are packaged into a single semiconductor device. As shown in FIG.
6, the SOC package 602 is coupled to a main memory 114 (external to the SOC package 602) via an interface such as the memory controller 120. In embodiments, the memory 114 (or a portion of it) can be integrated on the SOC package 602. The I/O interface 640 can be coupled to one or more I/O devices 670, eg, via an interconnect and/or a bus, as described herein with reference to other figures. The I/O devices 670 can include one or more of a keyboard, a mouse, a touchpad, a display, an image/video capture device (such as a camera or a camcorder/video recorder), a touch screen, a speaker, or the like. Further, in an embodiment, the SOC package 602 can include/integrate the logic 150; alternatively, the logic 150 can be provided outside the SOC package 602 (ie, as discrete logic).

The following examples pertain to further embodiments. Example 1 includes an apparatus comprising: logic, coupled to each node of a plurality of nodes, to cause determination of a policy for power and performance management and to cause transmission of the policy to the plurality of nodes, wherein the policy causes coordination of power and performance management across the plurality of nodes, wherein the policy manages a job towards one or more objective functions, and wherein the job comprises a plurality of tasks that run simultaneously on the plurality of nodes. Example 2 includes the apparatus of Example 1, wherein the logic determines a separate policy for each node of the plurality of nodes. Example 3 includes the apparatus of Example 1, wherein the logic determines a separate policy for at least a portion of each node of the plurality of nodes. Example 4 includes the apparatus of Example 1, wherein the one or more objective functions include one or more of: maximizing performance while satisfying a power cap, maximizing energy efficiency while satisfying a power cap, minimizing performance differences between the plurality of nodes while satisfying a power cap, or maximizing performance or efficiency while minimizing power while satisfying a power cap. Example 5 includes the apparatus of Example 1, wherein the logic operates in accordance with a hierarchical machine learning operation. Example 6 includes the apparatus of Example 1, wherein the logic performs one or more operations to solve one or more of a load imbalance problem, a scalability problem, or a flexibility problem. Example 7 includes the apparatus of Example 1, wherein the policy coordinates power and performance management across all nodes in the job. Example 8 includes the apparatus of Example 1, wherein the policy coordinates power and performance management across all nodes in the job and across software and hardware abstraction layers. Example 9 includes the apparatus of Example 1, wherein the logic determines the policy in accordance with a stochastic policy gradient method. Example 10 includes the apparatus of Example 1, wherein the plurality of nodes form a cabinet, and wherein the policy is hierarchically decomposed among one or more cabinets and then decomposed among the plurality of nodes. Example 11 includes the apparatus of Example 1, further comprising a scalable overlay network that couples the plurality of nodes. Example 12 includes the apparatus of Example 1, further comprising a scalable overlay network that couples the plurality of nodes, wherein the scalable overlay network provides aggregation of power or performance telemetry and distribution of control. Example 13 includes the apparatus of Example 1, wherein a system on chip (SOC) integrated circuit includes the logic and memory. Example 14 includes the apparatus of Example 1, wherein each node of the plurality of nodes comprises one or more of: a processor having one or more processor cores, an image processing unit having one or more processor cores, a connection to a network fabric, a login component, a service component, a memory, or input/output devices.

Example 15 includes a method comprising: causing determination of a policy for power and performance management for each node of a plurality of nodes; and transmitting the policy to the plurality of nodes, wherein the policy causes coordination of power and performance management across the plurality of nodes, wherein the policy manages a job towards one or more objective functions, and wherein the job comprises a plurality of tasks that run simultaneously on the plurality of nodes. Example 16 includes the method of Example 15, further comprising determining a separate policy for each node of the plurality of nodes. Example 17 includes the method of Example 15, further comprising determining a separate policy for at least a portion of each node of the plurality of nodes. Example 18 includes the method of Example 15, wherein the one or more objective functions include one or more of: maximizing performance while satisfying a power cap, maximizing energy efficiency while satisfying a power cap, minimizing performance differences between the plurality of nodes while satisfying a power cap, or maximizing performance or efficiency while minimizing power while satisfying a power cap. Example 19 includes the method of Example 15, wherein the determination operates in accordance with a hierarchical machine learning operation. Example 20 includes the method of Example 15, wherein the determination is performed to solve one or more of a load imbalance problem, a scalability problem, or a flexibility problem. Example 21 includes the method of Example 15, wherein the policy further coordinates power and performance management across all nodes in the job.
Example 22 includes the method of Example 15, wherein the policy further comprises coordinating power and performance management across all nodes in the job and across software and hardware abstraction layers. Example 23 includes the method of Example 15, further comprising the step of determining the policy according to the stochastic policy gradient method. Example 24 includes the method of Example 15, wherein the plurality of nodes form a cabinet, the policy is hierarchically decomposed among one or more cabinets, and then decomposed among the plurality of nodes. .. Example 25 includes the method of Example 15, further comprising the step of joining the plurality of nodes via a scalable overlay network. Example 26 further comprises the step of joining the plurality of nodes via a scalable overlay network, wherein the scalable overlay network provides aggregation of power or performance telemetry and distribution of control, according to the method of Example 15. include.Example 27 is a computer-readable medium that contains the one or more instructions that are executed on the processor and that configures the processor to perform one or more operations, wherein the one or more instructions are: The policy includes a step of inducing a policy decision for power and performance management for each node of the plurality of nodes and a step of transmitting the policy to the plurality of nodes, and the policy includes power and power across the plurality of nodes. It causes performance management coordination, the policy manages jobs for one or more objective functions, and the jobs include a plurality of tasks running simultaneously on the plurality of nodes. Example 28 includes said one or more instructions that are executed on the processor and configure the processor to perform the one or more operations, the one or more instructions being of the plurality of nodes. 
Includes the computer-readable medium of Example 27, further comprising the step of determining a separate policy for each. Example 29 includes the one or more instructions that are executed on the processor and that configures the processor to perform the one or more operations, the one or more instructions being of the plurality of nodes. Includes the method of Example 27, further comprising the step of determining a separate policy for at least a portion of each.Example 30 includes an apparatus comprising means for performing the method according to any one of the above examples.Example 31 includes a machine-readable medium that, when executed, comprises a machine-readable instruction that implements the method described in any one of the above examples or implements a device.In various embodiments, for example, with reference to FIGS. 1-6, the operations described herein are implemented in hardware (eg, circuits), software, firmware, microcode, or a combination thereof. Tangible (eg, non-temporary) machine-readable or computer-readable that stores instructions (or software procedures) used to program a computer to perform the processes described herein. It can be provided as a computer program product that includes a medium. Also, the term "logic" can include, by way of example, software, hardware or a combination of software and hardware. The machine-readable medium can include a storage device as described with respect to FIGS. 1-3.In addition, such tangible computer readable media may be downloaded as a computer program product, where the program is directed to a data signal (eg, a bus, modem or network connection) via a communication link (eg, a bus, modem or network connection). 
It is transferred from a remote computer (eg, a server) to a requesting computer (eg, a client) by a carrier (on a carrier, other propagation medium, etc.).By "one embodiment" or "embodiment" in the specification is meant that a particular feature, structure, or property described in connection with an embodiment can be included in at least one implementation. Although the phrase "in one embodiment" appears in various places herein, all may or may not indicate the same embodiment.Also, within the specification and claims, the terms "combined" and "connected" can be used in conjunction with their derivatives. In some embodiments, "connected" can be used to indicate that two or more elements are in direct physical or electrical contact with each other. By "bonded" may mean that two or more elements are in direct physical or electrical contact. However, "combined" can also mean that the two or more elements are not in direct contact with each other, but are still cooperating or interacting with each other.Thus, while the embodiments are described in a language specific to structural features and / or methodological acts, the subject matter described in the claims is not limited to the particular features or actions described. Please understand that. Rather, certain features and behaviors are disclosed as an exemplary form for implementing the subject matter described in the claims.
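The hierarchical decomposition of Examples 10 and 24 can be sketched as follows. This is a minimal illustration with invented names and an assumed proportional-to-demand split; the examples do not prescribe any particular decomposition rule:

```python
# Hypothetical sketch of the hierarchical policy decomposition of
# Examples 10 and 24: a job-level power budget is first split among
# cabinets (here, proportionally to each cabinet's demand, as one
# possible choice), and each cabinet's share is then split among
# its nodes.

def decompose_budget(job_budget_w, cabinets):
    """Return a per-node power cap, decomposed cabinet-first."""
    total_demand = sum(c["demand_w"] for c in cabinets)
    node_caps = {}
    for cab in cabinets:
        cab_share = job_budget_w * cab["demand_w"] / total_demand
        per_node = cab_share / len(cab["nodes"])  # even split within a cabinet
        for node in cab["nodes"]:
            node_caps[node] = per_node
    return node_caps

cabinets = [
    {"demand_w": 600.0, "nodes": ["n0", "n1"]},
    {"demand_w": 300.0, "nodes": ["n2"]},
]
node_caps = decompose_budget(900.0, cabinets)
```

A real controller would refine these caps with telemetry feedback (e.g., the stochastic policy gradient method of Examples 9 and 23) rather than use a single static split.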
A method and system are provided in which a Windows Portable Devices (WPD) driver installed and executed on a central device enables one or more applications on that device to interface with a peripheral device, such as a Bluetooth low energy (BLE) device. The peripheral device may utilize a Generic Attribute Profile (GATT) to interface with the WPD driver. Through the WPD driver, the central device may access, transmit, receive, and/or modify information associated with the peripheral device and/or control the peripheral device. The information associated with the peripheral device may include services, characteristics, and/or descriptors. A WPD device and objects that logically or virtually represent the peripheral device may be generated to map attributes of the WPD device to services and/or characteristics associated with the peripheral device. More than one WPD device may be available when multiple peripheral devices are represented in the central device.
A method, comprising: executing, on a central device, a Windows Portable Devices (WPD) driver to enable one or more applications on the central device to interface with a peripheral device, wherein: the peripheral device is communicatively coupled to the central device; and a Generic Attribute Profile (GATT) is utilized to interface the central device with the peripheral device; and accessing, through the WPD driver, information associated with the peripheral device.

The method of claim 1, wherein the accessed information comprises one or more services and/or one or more characteristics associated with the peripheral device.

The method of claim 1, comprising: generating, in the central device, a WPD device that represents the peripheral device, the WPD device comprising one or more WPD objects; mapping, through the WPD driver, one or more attributes of the WPD device to one or more services and/or one or more characteristics associated with the peripheral device; and accessing, by the one or more applications, the one or more attributes of the WPD device.

The method of claim 1, wherein the peripheral device comprises a Bluetooth low energy (BLE) device.

The method of claim 1, comprising controlling, through the WPD driver, one or more functions associated with the peripheral device.

The method of claim 1, comprising enumerating one or more services and/or one or more characteristics associated with the peripheral device.

The method of claim 1, comprising transmitting, through the WPD driver, a registration to the peripheral device to notify the central device when one or more characteristics associated with the peripheral device have changed.

The method of claim 1, comprising receiving, through the WPD driver, an indication from the peripheral device that one or more characteristics associated with the peripheral device have changed.

The method of claim 1, comprising modifying, through the WPD driver, one or more characteristics associated with the peripheral device.

The method of claim 1,
comprising communicating between a Bluetooth host stack in the central device and a Bluetooth host stack in the peripheral device.

A system, comprising: one or more processors and/or circuits in a central device that are operable to: execute a Windows Portable Devices (WPD) driver to enable one or more applications on the central device to interface with a peripheral device, wherein: the peripheral device is communicatively coupled to the central device; and a Generic Attribute Profile (GATT) is utilized to interface the central device with the peripheral device; and access, through the WPD driver, information associated with the peripheral device.

The system of claim 11, wherein the accessed information comprises one or more services and/or one or more characteristics associated with the peripheral device.

The system of claim 11, wherein the one or more processors and/or circuits are operable to: generate a WPD device that represents the peripheral device, the WPD device comprising one or more WPD objects; map, through the WPD driver, one or more attributes of the WPD device to one or more services and/or one or more characteristics associated with the peripheral device; and access, by the one or more applications, the one or more attributes of the WPD device.

The system of claim 11, wherein the peripheral device comprises a Bluetooth low energy (BLE) device.

The system of claim 11, wherein the one or more processors and/or circuits are operable to control, through the WPD driver, one or more functions associated with the peripheral device.
CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY REFERENCE

This patent application makes reference to, claims priority to, and claims benefit from United States Provisional Patent Application No. 61/419,911, filed December 6, 2010, which is hereby incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

Certain embodiments of the invention relate to Bluetooth device communication. More specifically, certain embodiments of the invention relate to a Windows Portable Devices (WPD) interface for Bluetooth low energy (BLE) devices.

BACKGROUND OF THE DISCLOSURE

Bluetooth low energy (BLE) is an enhancement to the Bluetooth standard that was introduced in Bluetooth version 4.0. Devices using Bluetooth low energy wireless technology may consume a fraction of the power of other Bluetooth-enabled products. In some instances, a Bluetooth low energy device may be able to operate more than a year on a coin-cell battery without recharging. The deployment of these types of devices over a wide range of situations may result in a need for such devices to interface with Windows-based machines.

Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.

BRIEF SUMMARY OF THE INVENTION

According to an aspect of the invention, a method is provided, comprising: executing, on a central device, a Windows Portable Devices (WPD) driver to enable one or more applications on the central device to interface with a peripheral device, wherein: the peripheral device is communicatively coupled to the central device; and a Generic Attribute Profile (GATT) is utilized to interface the central device with the peripheral device; and accessing, through the WPD driver, information associated with the peripheral device.

Advantageously, the accessed information comprises one or more
services and/or one or more characteristics associated with the peripheral device.

Advantageously, the method further comprises: generating, in the central device, a WPD device that represents the peripheral device, the WPD device comprising one or more WPD objects; mapping, through the WPD driver, one or more attributes of the WPD device to one or more services and/or one or more characteristics associated with the peripheral device; and accessing, by the one or more applications, the one or more attributes of the WPD device.

Advantageously, the peripheral device comprises a Bluetooth low energy (BLE) device.

Advantageously, the method further comprises controlling, through the WPD driver, one or more functions associated with the peripheral device.

Advantageously, the method further comprises enumerating one or more services and/or one or more characteristics associated with the peripheral device.

Advantageously, the method further comprises transmitting, through the WPD driver, a registration to the peripheral device to notify the central device when one or more characteristics associated with the peripheral device have changed.

Advantageously, the method further comprises receiving, through the WPD driver, an indication from the peripheral device that one or more characteristics associated with the peripheral device have changed.

Advantageously, the method further comprises modifying, through the WPD driver, one or more characteristics associated with the peripheral device.

Advantageously, the method further comprises communicating between a Bluetooth host stack in the central device and a Bluetooth host stack in the peripheral device.

According to a further aspect, a system is provided comprising: one or more processors and/or circuits in a central device that are operable to: execute a Windows Portable Devices (WPD) driver to enable one or more applications on the central device to interface with a peripheral device, wherein: the peripheral device is communicatively coupled to the
central device; and a Generic Attribute Profile (GATT) is utilized to interface the central device with the peripheral device; and access, through the WPD driver, information associated with the peripheral device.

Advantageously, the accessed information comprises one or more services and/or one or more characteristics associated with the peripheral device.

Advantageously, the one or more processors and/or circuits are operable to: generate a WPD device that represents the peripheral device, the WPD device comprising one or more WPD objects; map, through the WPD driver, one or more attributes of the WPD device to one or more services and/or one or more characteristics associated with the peripheral device; and access, by the one or more applications, the one or more attributes of the WPD device.

Advantageously, the peripheral device comprises a Bluetooth low energy (BLE) device.

Advantageously, the one or more processors and/or circuits are operable to control, through the WPD driver, one or more functions associated with the peripheral device.

Advantageously, the one or more processors and/or circuits are operable to enumerate one or more services and/or one or more characteristics associated with the peripheral device.

Advantageously, the one or more processors and/or circuits are operable to transmit, through the WPD driver, a registration to the peripheral device to notify the central device when one or more characteristics associated with the peripheral device have changed.

Advantageously, the one or more processors and/or circuits are operable to receive, through the WPD driver, an indication from the peripheral device that one or more characteristics associated with the peripheral device have changed.

Advantageously, the one or more processors and/or circuits are operable to modify, through the WPD driver, one or more characteristics associated with the peripheral device.

Advantageously, the one or more processors and/or circuits are operable to communicate between a Bluetooth host
stack in the central device and a Bluetooth host stack in the peripheral device.

Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.

BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS

FIG. 1 is a block diagram that illustrates an exemplary Windows-based machine that utilizes a Windows Portable Devices interface to interact with one or more Bluetooth low energy devices, in accordance with an embodiment of the invention.

FIGS. 2A and 2B are each diagrams that illustrate examples of the interface between a Windows-based machine and a BLE device through a WPD interface, in accordance with embodiments of the invention.

FIG. 3 is a diagram that illustrates an exemplary WPD architecture for interfacing a Windows-based machine and a BLE device, in accordance with an embodiment of the invention.

FIG. 4 is a flow chart that illustrates examples of operations associated with a WPD interface for GATT-enabled devices, in accordance with an embodiment of the invention.

FIG. 5 is a flow chart that illustrates another example of operations associated with a WPD interface for GATT-enabled devices, in accordance with an embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

Certain embodiments of the invention can be found in a method and system for a Windows Portable Devices interface for Bluetooth low energy devices. In accordance with various embodiments of the invention, a Windows Portable Devices (WPD) driver installed and executed on a central device may enable one or more applications on the central device to interface with a peripheral device, such as a Bluetooth low energy (BLE) device, for example. The central device may be a Windows-based machine, for example. The peripheral device may be communicatively coupled to the central device and may utilize a Generic Attribute Profile (GATT) to interface with the WPD driver.
Through the WPD driver, the central device may access, transmit, receive, and/or modify information associated with the peripheral device. Moreover, also through the WPD driver, the central device may control the operation and/or functionality of the peripheral device. The information associated with the peripheral device may include services and characteristics. Each characteristic may have one or more values and/or descriptors. A WPD device that logically or virtually represents the peripheral device may be generated by the WPD driver. The WPD device may comprise one or more WPD objects. WPD object properties may be mapped into services and/or characteristics associated with the peripheral device. The characteristics associated with the peripheral device may comprise one or more values and/or one or more descriptors. In some instances, more than one WPD device may be available when multiple peripheral devices are represented in the central device.

FIG. 1 is a block diagram that illustrates an exemplary Windows-based machine that utilizes a Windows Portable Devices interface to interact with one or more Bluetooth low energy devices, in accordance with an embodiment of the invention. Referring to FIG. 1, there are shown devices 100, 120, 130, and 140. The device 100 is a Windows-based machine that may comprise suitable logic, circuitry, code, and/or interfaces to support Windows Portable Devices. While the device 100 may be a personal computer, a laptop, or a tablet computer, the device 100 need not be so limited and other machines that support Windows-based operations may be utilized.

The devices 120, 130, and 140 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to support the use of Bluetooth low energy for Bluetooth communication. In this regard, the devices 120, 130, and 140 may be referred to as BLE devices or BLE-enabled devices, for example.
The device 100 may be operable to communicate with the devices 120, 130, and 140 through Bluetooth connections 122, 132, and 142, respectively.

The Windows Portable Devices supported by the device 100 may refer to a type of object-based architecture or platform that may be utilized in a Windows-based machine (i.e., central device) to allow access to one or more external devices (i.e., peripheral devices). The external or peripheral devices may comprise, but need not be limited to, key fobs, portable medical devices, media players, digital still and/or video cameras, mobile phones, or other like devices that may be communicatively coupled to the Windows-based machine. The devices 120, 130, and 140 shown in FIG. 1 may correspond to external or peripheral devices while the device 100 may correspond to the central device. The object-based architecture provided by Windows Portable Devices may comprise one or more Application Programming Interfaces (APIs) that enable the interaction between an application running or executing on the Windows-based machine and one or more of the external devices.

One or more applications running or executing on the device 100 may utilize Windows Portable Devices to perform various operations in connection with an external device. For example, Windows Portable Devices may enable an application to connect to the external device, search and/or retrieve information from the external device, list or enumerate the external devices that are attached or connected, determine the capabilities of the external device, send and/or generate information to be stored in the external device, modify information in the external device, control the external device, and/or detect the presence or absence of the external device.

Also shown in FIG. 1 are WPD devices 124, 134, and 144 that respectively correspond to logical or virtual representations of the devices 120, 130, and 140 in the device 100.
The WPD devices 124, 134, and 144 may be utilized by applications executing on the device 100 to perform operations in connection with the devices 120, 130, and 140, respectively. The WPD devices 124, 134, and 144 may be generated to be compatible with the object-based architecture supported by Windows Portable Devices. Each of the WPD devices 124, 134, and 144 may comprise one or more objects, which may be referred to as WPD objects. These objects may have properties, events, or the like. An example of an object is a storage object.

The Bluetooth low energy operation supported by the devices 120, 130, and 140 may refer to a specification that is included in Bluetooth 4.0. Bluetooth low energy introduces new protocols to simplify the development and the implementation of low energy profiles. The new protocols may include an Attribute Protocol (ATT) and a Generic Attribute Profile (GATT), for example.

In operation, the device 120 may communicate with the device 100 by using Bluetooth low energy protocols associated with the Bluetooth connection 122. An application executing on the device 100 may be able to interact with the device 120 through an interface implemented using the Windows Portable Devices object-based architecture supported by the device 100. Such an interface may comprise a driver that allows the Bluetooth low energy protocols to communicate with the object-based architecture of Windows Portable Devices. In this regard, the driver may be utilized to generate the WPD device 124 to enable the interaction between the device 100 and the device 120.

Similarly, the devices 130 and 140 may communicate with the device 100 by using Bluetooth low energy protocols associated with the Bluetooth connections 132 and 142, respectively. One or more applications executing on the device 100 may be able to interact with the devices 130 and 140 through interfaces implemented using the Windows Portable Devices object-based architecture supported by the device 100.
Those interfaces may comprise a driver that allows the Bluetooth low energy protocols to communicate with the object-based architecture of Windows Portable Devices. In this regard, the drivers may be utilized to generate the WPD devices 134 and 144 to enable the interaction between the device 100 and the devices 130 and 140, respectively.

FIGS. 2A and 2B are each diagrams that illustrate examples of the interface between a Windows-based machine and a BLE device through a WPD interface, in accordance with embodiments of the invention. Referring to FIG. 2A, there are shown the Windows-based machine 100 and a Bluetooth low energy device 250. The Bluetooth low energy device 250 may be substantially similar to, or the same as, any one of the devices 120, 130, and 140 shown in FIG. 1.

The Windows-based machine 100 may comprise a memory module 210, a processor module 220, and a communication module 230. The memory module 210 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store data and/or code associated with the execution of one or more applications. The memory module 210 may be operable to store data and/or code utilized to support the processes associated with Windows Portable Devices. The memory module 210 may comprise a single memory device or multiple memory devices. A memory device may be an integrated circuit that comprises a Dynamic Random Access Memory (DRAM), a Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM or DDR2 SDRAM), or FLASH memory, for example.

The processor module 220 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to run or execute a Windows-based operating system (OS) and one or more applications. The processor module 220 may be operable to support the processes associated with Windows Portable Devices. The processor module 220 may comprise a single processing device or multiple processing devices.
A processing device may be an integrated circuit that comprises a central processing unit (CPU) or host processor, a baseband processor, a graphics processor, or some other type of dedicated processor, for example. The processor module 220 may also be operable to handle data and/or control signals associated with the transmission and/or reception operations of the communication module 230.

The communication module 230 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to communicate with one or more external devices. The communication module 230 may support wireless and/or wired communication with external devices. With respect to wireless communication operations, the communication module 230 may comprise one or more radios (not shown) that are operable to transmit and/or receive radio frequency (RF) signals. For example, the communication module 230 may comprise a Bluetooth radio that may be operable to support Bluetooth low energy protocols and enable communication with the Bluetooth low energy device 250. The communication module 230 may also support other types of radios such as radios used for communication in Wireless Local Area Networks (WLANs), Personal Area Networks (PANs), or cellular networks, for example.

The Bluetooth low energy device 250 may comprise a memory module 260, a processor module 270, and a communication module 280. The memory module 260 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store data and/or code associated with the operations of the Bluetooth low energy device 250. The memory module 260 may be operable to store data and/or code utilized to support the processes associated with Bluetooth low energy, for example.
Like the memory module 210 described above, the memory module 260 may comprise a single memory device or multiple memory devices.

The processor module 270 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to run or execute an operating system and one or more applications. The processor module 270 may be operable to support the processes associated with Bluetooth low energy, for example. Like the processor module 220 described above, the processor module 270 may comprise a single processing device or multiple processing devices. The processor module 270 may comprise a baseband processor for handling Bluetooth baseband operations.

The communication module 280 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to communicate with one or more additional devices. The additional devices may be Bluetooth-enabled devices or may be devices that utilize a different wireless communication technology, such as WLAN, for example. In this regard, the communication module 280 may comprise one or more radios (not shown) that are operable to transmit and/or receive RF signals. For example, the communication module 280 may comprise a Bluetooth radio that may be operable to support Bluetooth low energy protocols and enable communication with the Windows-based machine 100.

The Bluetooth low energy device 250 may be operable to support various Bluetooth-related protocols, profiles, and/or processes. The Bluetooth protocol stack may be implemented by a Bluetooth controller stack that is operable to handle the timing-critical radio interface and a Bluetooth host stack that is operable to handle high level data. The Bluetooth controller stack may be implemented utilizing the communication module 280, which may comprise the Bluetooth radio, and the processor module 270, which may comprise a processing device such as a microprocessor, for example.
The Bluetooth host stack may be implemented as part of the OS running on the processor module 270 or as an instantiation of a package on top of the OS. In some instances, the Bluetooth controller stack and the Bluetooth host stack may run or execute on the same processing device in the processor module 270.

In operation, the Bluetooth low energy device 250 may communicate with the Windows-based machine 100 by using Bluetooth low energy protocols in a Bluetooth connection 290. An application executing on the processor module 220 of the Windows-based machine 100 may be able to interact with the Bluetooth low energy device 250 through an interface implemented using the Windows Portable Devices object-based architecture supported by the Windows-based machine 100. Such an interface may comprise a driver that allows the Bluetooth low energy protocols to communicate with the object-based architecture of Windows Portable Devices. In this regard, the driver may be utilized to generate a WPD device 240 as shown in FIG. 2B to enable the interaction between the Windows-based machine 100 and the Bluetooth low energy device 250. The WPD device 240 is a logical or virtual representation of the Bluetooth low energy device 250 supported by the Windows Portable Devices object-based architecture in the Windows-based machine 100.

FIG. 3 is a diagram that illustrates an exemplary WPD architecture for interfacing a Windows-based machine and a BLE device, in accordance with an embodiment of the invention. Referring to FIG. 3, there are shown the Windows-based machine 100 and the Bluetooth low energy device 250. With respect to the Windows-based machine 100, there are shown an application 300 and a Windows Driver Foundation (WDF) host process 320, both of which may be part of a WPD architecture supported by the Windows-based machine 100. An example of a WDF host process 320 is a Wudfhost.exe process. The application 300 may comprise one or more APIs 310 for use with Windows Portable Devices.
The APIs 310 may be referred to as WPD APIs, for example. The WDF host process 320 may comprise a driver 330 for use with Windows Portable Devices. The driver 330 may be referred to as a WPD driver, for example.

The driver 330 may be utilized to generate or instantiate a WPD device associated with the Bluetooth low energy device 250. The WPD device may then be utilized by the application 300 to interact with the Bluetooth low energy device 250. The WPD device may correspond to a logical or virtual representation of the Bluetooth low energy device 250 that may include attributes associated with the Bluetooth low energy device 250. The WPD device may comprise one or more objects to represent the various attributes of the Bluetooth low energy device 250, for example.

Also shown in FIG. 3 is a Bluetooth host stack 340, which may run or execute on the Windows-based machine 100. The Bluetooth host stack 340 may comprise various protocols and profiles, including but not limited to a GATT 350, an ATT 360, a low energy Security Manager Protocol (SMP) 370, and a Logical Link Control and Adaptation Protocol (L2CAP) 380. The ATT 360 may be operable as a wire protocol while the GATT 350 may be operable as a protocol that describes how ATT is used in the composition of services. For example, the GATT 350 may be operable to define how ATT attributes are grouped together into services and to describe the characteristics associated with the services. Thus, the GATT 350 and the ATT 360 may utilize characteristics to describe the state of a device and services to describe how those characteristics are related to each other and how they are used. The SMP 370 may be operable for pairing and transport-specific key distribution. The L2CAP 380 may be operable to multiplex data between higher layer protocols, segment and reassemble packets, and manage multicast data transmission.
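The grouping described above, in which the GATT layer organizes ATT attributes into services whose characteristics carry values and descriptors, can be sketched with a minimal data model. The classes below are illustrative only, not the actual host stack implementation; the UUIDs are the Bluetooth assigned numbers for the Battery Service:

```python
# Minimal illustrative sketch (hypothetical classes, not the real host
# stack): GATT groups ATT attributes into services; each service exposes
# characteristics, and each characteristic may carry a value and one or
# more descriptors.

from dataclasses import dataclass, field

@dataclass
class Characteristic:
    uuid: str
    value: bytes = b""
    descriptors: dict = field(default_factory=dict)  # descriptor uuid -> bytes

@dataclass
class Service:
    uuid: str
    characteristics: list = field(default_factory=list)

battery = Service(
    uuid="180F",  # Battery Service (Bluetooth assigned number)
    characteristics=[
        Characteristic(
            uuid="2A19",        # Battery Level characteristic
            value=bytes([87]),  # 87 percent
            descriptors={"2902": b"\x00\x00"},  # Client Characteristic Configuration
        )
    ],
)
```

In this picture, the "2902" descriptor is the one a client writes to enable notifications or indications for the characteristic, which is how the change registrations discussed below are realized at the GATT level.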
In some instances, the SMP 370 may be bound to the L2CAP 380.With respect to the Windows-based machine 100, there are shown a Bluetooth controller 315 and a Bluetooth radio 325. The Bluetooth controller 315 may comprise suitable logic, circuitry, code, and/or interfaces that are operable to control the Bluetooth radio interface. The Bluetooth controller 315 may be part of the processor module 220 and/or of the communication module 230 shown in FIG. 2B . The Bluetooth controller 315 may be utilized to implement a Bluetooth controller stack, for example. The Bluetooth radio 325 may comprise suitable logic, circuitry, code, and/or interface that may be operable to wirelessly communicate with a Bluetooth radio on another device.With respect to the Bluetooth low energy device 250, there are shown an application 305 and a Bluetooth host stack 345. The Bluetooth host stack 345 may comprise various protocols and profiles, including but not limited to a GATT 355, an ATT 365, an SMP 375, and an L2CAP 385. The protocols shown in connection with the Bluetooth host stack 345 may be substantially similar to those shown in connection with the Bluetooth host stack 340. Also with respect to the Bluetooth low energy device 250, there are shown a Bluetooth controller 317 and a Bluetooth radio 327, which are substantially similar to the Bluetooth controller 315 and the Bluetooth radio 325, respectively.In operation, the application 300 may communicate with the driver 330 through the API 310 by opening device handles and sending input/output (I/O) control codes. Although not shown, the API 310 and the driver 330 may utilize serializers to pack and unpack commands and/or parameters in buffers. The driver 330 may be utilized to generate a WPD device used by the application 300 to interact with the Bluetooth low energy device 250. The driver 330 may communicate with the Bluetooth host stack 340 based on the low energy protocols GATT 350 and ATT 360. 
The services and/or characteristics associated with the Bluetooth low energy device 250 may be communicated to the driver 330, which in turn may map the information into the appropriate object attributes as defined by the Windows Portable Devices. The characteristics associated with the Bluetooth low energy device 250 may comprise one or more values and/or one or more descriptors, for example.The application 305 in the Bluetooth low energy device 250 may be associated with certain functionality provided by the device. The application 305 may be utilized to obtain information from the Bluetooth low energy device 250, such as information related to an operation, feature, or capability of the Bluetooth low energy device 250. Such information may be communicated from the Bluetooth low energy device 250 to the Windows-based machine 100 through the Bluetooth connection 290 by having the information pass from the Bluetooth host stack 345 to the Bluetooth host stack 340. When the information is received by the WDF host process 320 from the Bluetooth host stack 340, the information may be mapped into the attributes of a WPD device representing the Bluetooth low energy device 250 by the WPD driver 330. Accordingly, the application 300 may access the information associated with the Bluetooth low energy device 250 from the WPD device.Similarly, controls or commands provided by the application 300 may make their way to the Bluetooth low energy device 250 through the WPD device that is available in the Windows-based machine 100. For example, the application 300 may communicate a registration to a particular descriptor of a characteristic associated with the Bluetooth low energy device 250 to notify the application 300 when that characteristic has changed. 
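The registration just described can be illustrated with a small sketch. The class and method names here are hypothetical; in an actual GATT stack the registration would be a write to the characteristic's client configuration descriptor.

```python
# Minimal sketch of registering on a descriptor for change notifications.
class NotifyingCharacteristic:
    def __init__(self, value):
        self.value = value
        self._subscribers = []

    def register(self, callback):
        # Models the application's registration on the descriptor.
        self._subscribers.append(callback)

    def set_value(self, value):
        # A change on the peripheral triggers a notification to each
        # registered subscriber.
        self.value = value
        for cb in self._subscribers:
            cb(value)

notifications = []
battery = NotifyingCharacteristic(value=90)
battery.register(notifications.append)
battery.set_value(42)       # the peripheral's battery level changes
print(notifications)        # [42]
```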
Once the registration has been made on the descriptor, if the characteristic associated with that descriptor changes, a notification may be provided to the application 300 from the Bluetooth low energy device 250.Another example of the operation described in connection with FIG. 3 is when the Bluetooth low energy device 250 is a proximity fob that has a battery service and a battery level characteristic. In this example, the driver 330 may expose a method by which the application 300 can read the battery level of the Bluetooth low energy device 250. For example, the application 300, through the driver 330, may read the value of the battery level from attributes in a WPD device representing the Bluetooth low energy device 250. The attribute information may have been obtained from information produced by the application 305 in the Bluetooth low energy device 250. Thus, when the application 300 asks the driver 330 to read the battery level, the driver 330 may use ATT and GATT protocols to talk to the proximity fob, determine the current value of the battery level, and provide the information as an attribute in the WPD device. The application 300 may then access the information from the WPD device. This information may be useful to determine the remaining battery life of the Bluetooth low energy device 250 and a corresponding action to take.In yet another example of the operation described in connection with FIG. 3, the Bluetooth low energy device 250 may be a thermometer that has a temperature service and a temperature value characteristic. In this example, the driver 330 may expose a method by which the application 300 can read the temperature value of the Bluetooth low energy device 250 from a WPD device representing the Bluetooth low energy device 250.
Thus, when the application 300 asks the driver 330 to read the current temperature level, the driver 330 may use ATT and GATT protocols to talk to the thermometer, determine the current value of the temperature level, and provide the information as an attribute in the WPD device. The application 300 may then access the information from the WPD device.Other examples include using the application 300 and the driver 330 to write a value to an alert characteristic of a key fob to cause the key fob to beep, and using the application 300 and the driver 330 to read a weight value characteristic from a weight scale to display the value of the weight.While the examples described above typically relate to Bluetooth low energy devices, the invention need not be so limited. Other devices may also be utilized such as devices that enable the use of GATT or other like protocol or profile to interface with devices that utilize Windows Portable Devices. These devices may be referred to as GATT-enabled devices and may include Bluetooth low energy devices, for example.Moreover, while the examples described above with respect to FIG. 3 relate to a single Bluetooth low energy device 250 and a single application 300 in the Windows-based machine 100, the invention need not be so limited. For example, multiple peripheral devices may result in multiple WPD devices in the Windows-based machine 100. One or more applications in the Windows-based machine 100 may be utilized to access one or more of the WPD devices.FIG. 4 is a flow chart that illustrates examples of operations associated with a WPD interface for GATT-enabled devices, in accordance with an embodiment of the invention. Referring to FIG. 4 , there is shown a flow chart 400 in which, at step 410, a Windows-based machine and a GATT-enabled device may be paired. The Windows-based machine may be the Windows-based machine 100 described above, for example. 
The GATT-enabled device may be one of the devices 120, 130, 140, and the Bluetooth low energy device 250 described above, for example. At step 420, a WPD driver on the Windows-based machine may execute a GATT discovery procedure to enumerate services, characteristics, and descriptors of the GATT-enabled device. At step 430, the WPD driver may generate a WPD device on the Windows-based machine with attributes that represent the services and/or characteristics associated with the GATT-enabled device. The WPD device may comprise one or more objects, the properties of those objects being mapped to the services and/or characteristics associated with the GATT-enabled device. At step 440, the application running on the Windows-based machine may access information and/or operations of the GATT-enabled device through the WPD driver. Such information may be provided by the WPD driver to the WPD device for the application to access.FIG. 5 is a flow chart that illustrates another example of operations associated with a WPD interface for GATT-enabled devices, in accordance with an embodiment of the invention. Referring to FIG. 5 , there is shown a flow chart 500 in which, at step 510, more than one WPD device may be generated in a Windows-based machine such as the Windows-based machine 100 described above. Each of the WPD devices may be generated by a WPD driver associated with the type of GATT-enabled device being represented by the WPD device.At step 520, one or more applications in the Windows-based machine, such as the application 300 described above, for example, may access information and/or operations of the GATT-enabled devices through the WPD devices and the WPD drivers. For example, an application may be utilized to manage the battery level in more than one Bluetooth low energy device. 
Such application may access attribute information from the various WPD devices that represent the Bluetooth low energy devices in order to determine the current battery level in each of those devices.In accordance with an embodiment of the invention, a central device, such as the Windows-based machine 100, for example, may execute a WPD driver to enable one or more applications on the central device to interface with a peripheral device. The peripheral device may be one of the devices 120, 130, and 140, and the Bluetooth low energy device 250, for example. The WPD driver may be substantially the same or similar to the driver 330 described above with respect to FIG. 3 . Moreover, the peripheral device may be communicatively coupled to the central device and may utilize GATT to interface with the WPD driver. Once the WPD driver is being executed, information associated with the peripheral device may be accessed through the WPD driver. The accessed information may comprise one or more services and/or one or more characteristics associated with the peripheral device. The characteristics may comprise one or more values and/or one or more descriptors, for example.A WPD device that represents the peripheral device may be generated in the central device. The WPD device may comprise one or more objects that may be referred to as WPD objects. The central device may map, through the WPD driver, the properties (e.g., attributes) of the WPD objects to one or more services and/or one or more characteristics associated with the peripheral device. With such mapping, the application may access the WPD device to interact with the peripheral device.The central device may be operable to control, through the WPD driver, one or more functions of the peripheral device. The central device may be operable to enumerate one or more services and/or one or more characteristics associated with the peripheral device. 
The central device may be operable to transmit, through the WPD driver, a registration to the peripheral device to notify the central device when one or more characteristics associated with the peripheral device have changed. The central device may also be operable to receive, through the WPD driver, an indication from the peripheral device that one or more characteristics associated with the peripheral device have changed. The central device may be operable to modify, through the WPD driver, one or more characteristics associated with the peripheral device. Communication between the central device and the peripheral device may occur through a Bluetooth host stack in the central device and a Bluetooth host stack in the peripheral device, both of which utilize ATT and GATT.Another embodiment of the invention may provide a non-transitory machine and/or computer readable storage and/or medium, having stored thereon, a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for a WPD interface for Bluetooth low energy devices.Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. 
A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.
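The pairing, discovery, mapping, and access flow of FIG. 4 described earlier can be condensed into an illustrative model. The function names and the attribute-key scheme below are assumptions made for this sketch, not the actual driver interface.

```python
# Condensed model of the FIG. 4 flow: discover the peripheral's
# services/characteristics (step 420), map them into WPD-like object
# attributes (step 430), and let an application read through that
# mapping (step 440).
def discover(gatt_device):
    # Stands in for the GATT discovery procedure.
    return gatt_device

def build_wpd_device(discovered):
    # Represent each characteristic value as an object attribute keyed
    # by "service/characteristic".
    return {f"{svc}/{char}": value
            for svc, chars in discovered.items()
            for char, value in chars.items()}

def app_read(wpd_device, key):
    # The application accesses the attribute rather than the radio link.
    return wpd_device[key]

peripheral = {"180F": {"2A19": 55}}   # battery service -> battery level
wpd = build_wpd_device(discover(peripheral))
print(app_read(wpd, "180F/2A19"))     # 55
```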
A synchronous flash memory includes an array of non-volatile memory cells. The memory array is arranged in rows and columns, and can be further arranged in addressable blocks. Data communication connections are used for bi-directional data communication with an external device, such as a processor or other memory controller. A data buffer can be coupled to the data communication connections to manage the bi-directional data communication. This buffer can be a pipelined input/output buffer circuit. Finally, a write latch is coupled between the data buffer and the memory array to latch data provided on the data communication connections. One method of operating a synchronous memory device comprises receiving write data on data connections, latching the write data in a write latch, and releasing the data connections after the write data is latched. A read operation can be performed on the synchronous memory device while the write data is transferred from the write latch to memory cells. Further, the memory device does not require any clock latency during a write operation.
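The write-latch behavior summarized above can be sketched as a toy behavioral model: write data is captured in a latch so the data connections are released immediately, a read can be serviced while the latched data is still pending, and the latch-to-array transfer completes in the background. The structure and the read-forwarding behavior are simplifications made for illustration, not the actual circuit.

```python
# Toy model of a synchronous flash with a write latch between the data
# buffer and the memory array.
class SynchronousFlashModel:
    def __init__(self, size=16):
        self.array = [0] * size
        self.write_latch = None          # (address, data) pending transfer

    def write(self, address, data):
        # Latch the write; the external data connections are now free.
        self.write_latch = (address, data)

    def read(self, address):
        # Reads proceed while the write is pending; forward latched data
        # if the same address is read back.
        if self.write_latch and self.write_latch[0] == address:
            return self.write_latch[1]
        return self.array[address]

    def complete_pending_write(self):
        # Background transfer from the write latch into the array.
        if self.write_latch:
            address, data = self.write_latch
            self.array[address] = data
            self.write_latch = None

mem = SynchronousFlashModel()
mem.write(3, 0xAB)               # cycle 1: data latched, bus released
print(mem.read(7))               # cycle 2: read while write pending -> 0
mem.complete_pending_write()
print(mem.array[3])              # 171 (0xAB)
```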
What is claimed is: 1. A method of writing to a synchronous non-volatile memory device comprising:receiving write data on a first clock cycle and executing a data write operation; and executing a data read operation on a next clock cycle immediately following the first clock cycle. 2. The method of claim 1 wherein the data write operation is executed on a first memory bank of the synchronous non-volatile memory device and the data read operation is executed on a second memory bank.3. The method of claim 1 further comprising latching the write data on the first clock cycle.4. The method of claim 1 wherein executing the data write operation comprises:receiving a write command; receiving a row address; and receiving a column address, wherein the column address is received on the first clock cycle in synchronization with the write data. 5. The method of claim 1 further comprises:latching the write data in a write latch on the first clock cycle; and performing a write operation during the next clock cycle to store the write data in the synchronous non-volatile memory device. 6. A method of operating a synchronous memory device comprising:receiving write data on data connections; latching the write data in a write latch; releasing the data connections after the write data is latched; and performing a read operation on the synchronous memory device while the write data is transferred from the write latch to memory cells. 7. The method of claim 6 wherein the read operation is initiated in response to a read command received by the synchronous memory device on a second clock cycle immediately following a first clock cycle coincident with receiving the write data.8. The method of claim 6 further comprises:receiving a row address on a first clock cycle; receiving a column address on a second clock cycle following the first clock cycle, wherein the write data is received on the data connections on the second clock cycle. 9. 
The method of claim 8 wherein the read operation is initiated in response to a read command received by the synchronous memory device on a third clock cycle immediately following a second clock cycle.10. The method of claim 6 wherein the synchronous memory device comprises an array of non-volatile memory cells.11. A method of writing to a synchronous memory device comprising:providing a write command and write data from a processor to the synchronous memory device on a first clock cycle; storing the write data in a write latch of the synchronous memory device; and performing a write operation to copy the write data from the write latch to a memory array of the synchronous memory device; and providing a read command from the processor to the synchronous memory device on a second clock cycle immediately following the first clock cycle to initiate a read operation on the memory array. 12. The method of claim 11 wherein the write data is copied to a first bank of the memory array and the read operation is performed on a second bank of the memory array.13. The method of claim 11 wherein the processor provides a row address, and a column address, wherein the column address is provided on the first clock cycle in synchronization with the write data.14. A synchronous memory device comprising:a memory array arranged in rows and columns; data communication connections for bi-directional data communication with an external device; data buffer coupled to the data communication connections to manage the bi-directional data communication; and a write latch coupled between the data buffer and the memory array to latch data provided on the data communication connections. 15. The synchronous memory device of claim 14 further comprising control circuitry to copy the data from the write latch to the memory array.16. 
The synchronous memory device of claim 15 wherein the memory array is arranged in a plurality of memory blocks, and the control circuitry is configured to copy the data from the write latch to a first block of the plurality of memory blocks.17. The synchronous memory device of claim 16 wherein the control circuitry is further configured to read data from a second block of the plurality of memory blocks while the data is copied to the first block.18. The synchronous memory device of claim 14 wherein the memory array comprises non-volatile memory cells.19. A method of operating a synchronous memory device comprising:receiving a read command and corresponding column address on a first clock cycle to request output data from a memory array of the synchronous memory, wherein the output data is provided on an external data connection a predefined number of clock cycles following the first clock cycle; and receiving a first command of a write command sequence on a second clock cycle immediately following the first clock cycle to initiate a write operation to the memory array such that the write command is provided in coincidence with or prior to providing the output data on the external data connection. 20. The method of claim 19 wherein the write command sequence comprises:a load command register cycle used to initiate the write operation; an active cycle used to define and activate a selected row of the memory array; and a write cycle used to define a column of the memory array and provide write data on the external data connection. 21. The method of claim 19 wherein the memory array comprises non-volatile memory cells.22. 
A method of initiating a write operation in a memory system, the method comprises:providing a read command from a processor to a synchronous memory device; providing a memory array address from the processor to the synchronous memory device on a first clock cycle of a memory array location to perform a read operation; providing a first command of a write command sequence from the processor to the synchronous memory device on a second clock cycle immediately following the first clock cycle to initiate a write operation of the memory array such that the write command is provided prior to providing output data from the memory array address on an external data connection. 23. The method of claim 22 wherein the write command sequence comprises:a load command register cycle used to initiate the write operation; an active cycle used to define and activate a selected row of the memory array; and a write cycle used to define a column of the memory array and provide write data on the external data connection. 24. A memory system comprising:a processor; and a synchronous memory device coupled to the processor via a bi-directional data bus, the synchronous memory device comprises, a memory array arranged in rows and columns; data communication connections coupled to the bi-directional data bus; an input/output data buffer coupled to the data communication connections to manage bi-directional data communication; and a write latch coupled between the data buffer and the memory array to latch data provided on the data communication connections. 25. The memory system of claim 24 wherein the memory array is arranged in a plurality of memory blocks, and the synchronous memory comprises control circuitry configured to copy the data from the write latch to a first block of the plurality of memory blocks.26. 
The memory system of claim 25 wherein the control circuitry is further configured to read data from a second block of the plurality of memory blocks while the data is copied to the first block.27. The memory system of claim 24 wherein the memory array comprises non-volatile memory cells.
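The timing relationship recited in claims 19-23 can be sketched clock by clock: a READ on a first cycle produces output data a predefined number of cycles later, while the write command sequence (load command register, active, write) begins on the very next cycle, before the read data appears. A CAS latency of 3 is assumed here purely for illustration.

```python
# Illustrative schedule: cycle -> list of events on that cycle.
CAS_LATENCY = 3

def schedule(commands):
    """commands: list of (cycle, name) pairs; returns cycle -> events."""
    log = {}
    for cycle, name in commands:
        log.setdefault(cycle, []).append(name)
        if name == "READ":
            # Output data appears CAS_LATENCY cycles after the command.
            log.setdefault(cycle + CAS_LATENCY, []).append("DATA_OUT")
    return log

log = schedule([(0, "READ"), (1, "LCR"), (2, "ACTIVE"), (3, "WRITE")])
print(log[1])  # ['LCR'] -- write sequence starts before data is out
print(log[3])  # ['DATA_OUT', 'WRITE'] -- data out coincides with WRITE
```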
TECHNICAL FIELD OF THE INVENTIONThe present invention relates generally to non-volatile memory devices and in particular the present invention relates to a synchronous non-volatile flash memory.BACKGROUND OF THE INVENTIONMemory devices are typically provided as internal storage areas in the computer. The term memory identifies data storage that comes in the form of integrated circuit chips. There are several different types of memory. One type is RAM (random-access memory). This is typically used as main memory in a computer environment. RAM refers to read and write memory; that is, you can both write data into RAM and read data from RAM. This is in contrast to ROM, which permits you only to read data. Most RAM is volatile, which means that it requires a steady flow of electricity to maintain its contents. As soon as the power is turned off, whatever data was in RAM is lost. Computers almost always contain a small amount of read-only memory (ROM) that holds instructions for starting up the computer. Unlike RAM, ROM cannot be written to. An EEPROM (electrically erasable programmable read-only memory) is a special type of non-volatile ROM that can be erased by exposing it to an electrical charge. Like other types of ROM, EEPROM is traditionally not as fast as RAM. EEPROMs comprise a large number of memory cells having electrically isolated gates (floating gates). Data is stored in the memory cells in the form of charge on the floating gates. Charge is transported to or removed from the floating gates by programming and erase operations, respectively.Yet another type of non-volatile memory is a Flash memory. A Flash memory is a type of EEPROM that can be erased and reprogrammed in blocks instead of one byte at a time. Many modern personal computers (PCs) have their basic input/output system (BIOS) stored on a flash memory chip so that it can easily be updated if necessary. Such a BIOS is sometimes called a flash BIOS.
Flash memory is also popular in modems because it enables the modem manufacturer to support new protocols as they become standardized.A typical Flash memory comprises a memory array that includes a large number of memory cells arranged in row and column fashion. Each of the memory cells includes a floating gate field-effect transistor capable of holding a charge. The cells are usually grouped into blocks. Each of the cells within a block can be electrically programmed on a random basis by charging the floating gate. The charge can be removed from the floating gate by a block erase operation. The data in a cell is determined by the presence or absence of the charge in the floating gate.A synchronous DRAM (SDRAM) is a type of DRAM that can run at much higher clock speeds than conventional DRAM memory. SDRAM synchronizes itself with a CPU's bus and is capable of running at 100 MHz, about three times faster than conventional FPM (Fast Page Mode) RAM, and about twice as fast as EDO (Extended Data Output) DRAM and BEDO (Burst Extended Data Output) DRAM. SDRAMs can be accessed quickly, but are volatile. Many computer systems are designed to operate using SDRAM, but would benefit from non-volatile memory.For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a need in the art for a non-volatile memory device that can operate in a manner similar to SDRAM operation.SUMMARY OF THE INVENTIONThe above-mentioned problems with memory devices and other problems are addressed by the present invention and will be understood by reading and studying the following specification.In one embodiment, a method of writing to a synchronous non-volatile memory device is provided.
The method comprises receiving write data on a first clock cycle and executing a data write operation, and executing a data read operation on a next clock cycle immediately following the first clock cycle.Another method of operating a synchronous memory device comprises receiving write data on data connections, latching the write data in a write latch, releasing the data connections after the write data is latched, and performing a read operation on the synchronous memory device while the write data is transferred from the write latch to memory cells.In yet another embodiment, a method of operating a synchronous memory device comprises receiving a read command and corresponding column address on a first clock cycle to request output data from a memory array of the synchronous memory. The output data is provided on an external data connection a predefined number of clock cycles following the first clock cycle. The method includes receiving a first command of a write command sequence on a second clock cycle immediately following the first clock cycle to initiate a write operation to the memory array such that the write command is provided in coincidence with or prior to providing the output data on the external data connection.A synchronous memory device is provided in one embodiment and comprises a memory array arranged in rows and columns, data communication connections for bi-directional data communication with an external device, and a data buffer coupled to the data communication connections to manage the bi-directional data communication. A write latch is coupled between the data buffer and the memory array to latch data provided on the data communication connections.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1A is a block diagram of a synchronous flash memory of the present invention;FIG. 1B is an integrated circuit pin interconnect diagram of one embodiment of the present invention;FIG. 
1C is an integrated circuit interconnect bump grid array diagram of one embodiment of the present invention;FIG. 2 illustrates a mode register of one embodiment of the present invention;FIG. 3 illustrates read operations having a CAS latency of one, two and three clock cycles;FIG. 4 illustrates activating a specific row in a bank of the memory of one embodiment of the present invention;FIG. 5 illustrates timing between an active command and a read or write command;FIG. 6 illustrates a read command;FIG. 7 illustrates timing for consecutive read bursts of one embodiment of the present invention;FIG. 8 illustrates random read accesses within a page of one embodiment of the present invention;FIG. 9 illustrates a read operation followed by a write operation;FIG. 10 illustrates read burst operations that are terminated using a burst terminate command according to one embodiment of the present invention;FIG. 11 illustrates a write command;FIG. 12 illustrates a write followed by a read operation;FIG. 13 illustrates a power-down operation of one embodiment of the present invention;FIG. 14 illustrates a clock suspend operation during a burst read;FIG. 15 illustrates a memory address map of one embodiment of the memory having two boot sectors;FIG. 16 is a flow chart of a self-timed write sequence according to one embodiment of the present invention;FIG. 17 is a flow chart of a complete write status-check sequence according to one embodiment of the present invention;FIG. 18 is a flow chart of a self-timed block erase sequence according to one embodiment of the present invention;FIG. 19 is a flow chart of a complete block erase status-check sequence according to one embodiment of the present invention;FIG. 20 is a flow chart of a block protect sequence according to one embodiment of the present invention;FIG. 21 is a flow chart of a complete block status-check sequence according to one embodiment of the present invention;FIG.
22 is a flow chart of a device protect sequence according to one embodiment of the present invention;FIG. 23 is a flow chart of a block unprotect sequence according to one embodiment of the present invention;FIG. 24 illustrates the timing of an initialize and load mode register operation;FIG. 25 illustrates the timing of a clock suspend mode operation;FIG. 26 illustrates the timing of a burst read operation;FIG. 27 illustrates the timing of alternating bank read accesses;FIG. 28 illustrates the timing of a full-page burst read operation;FIG. 29 illustrates the timing of a burst read operation using a data mask signal;FIG. 30 illustrates the timing of a write operation followed by a read to a different bank;FIG. 31 illustrates the timing of a write operation followed by a read to the same bank; andFIG. 32 illustrates a memory system of the present invention.DETAILED DESCRIPTION OF THE INVENTIONIn the following detailed description of present embodiments, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments in which the inventions may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical and electrical changes may be made without departing from the spirit and scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the claims.The following detailed description is divided into two major sections. The first section is an Interface Functional Description that details compatibility with an SDRAM memory. The second major section is a Functional Description that specifies flash architecture functional commands.INTERFACE FUNCTIONAL DESCRIPTIONReferring to FIG.
1A, a block diagram of one embodiment of the present invention is described. The memory device 100 includes an array of non-volatile flash memory cells 102. The array is arranged in a plurality of addressable banks. In one embodiment, the memory contains four memory banks 104, 106, 108 and 110. Each memory bank contains addressable sectors of memory cells. The data stored in the memory can be accessed using externally provided location addresses received by address register 112. The addresses are decoded using row address multiplexer circuitry 114. The addresses are also decoded using bank control logic 116 and row address latch and decode circuitry 118. To access an appropriate column of the memory, column address counter and latch circuitry 120 couples the received addresses to column decode circuitry 122. Circuit 124 provides input/output gating, data mask logic, read data latch circuitry and write driver circuitry. Data is input through data input registers 126 and output through data output registers 128. Command execution logic 130, having a command register 135, is provided to control the basic operations of the memory device. A state machine 132 is also provided to control specific operations performed on the memory arrays and cells. A status register 134 and an identification register 136 can also be provided to output data.
FIG. 1B illustrates an interconnect pin assignment of one embodiment of the present invention. The memory package 150 has 54 interconnect pins. The pin configuration is substantially similar to available SDRAM packages. Two interconnects specific to the present invention are RP# 152 and Vccp 154. Although the present invention may share interconnect labels that appear the same as SDRAM's, the functions of the signals provided on the interconnects are described herein and should not be equated to SDRAM's unless set forth herein. FIG.
1C illustrates one embodiment of a memory package 160 that has bump connections instead of the pin connections of FIG. 1B. The present invention, therefore, is not limited to a specific package configuration.
Prior to describing the operational features of the memory device, a more detailed description of the interconnect pins and their respective signals is provided. The input clock connection is used to provide a clock signal (CLK). The clock signal can be driven by a system clock, and all synchronous flash memory input signals are sampled on the positive edge of CLK. CLK also increments an internal burst counter and controls the output registers.
The input clock enable (CKE) connection is used to activate (HIGH state) and deactivate (LOW state) the CLK signal input. Deactivating the clock input provides POWER-DOWN and STANDBY operation (where all memory banks are idle), ACTIVE POWER-DOWN (a memory row is ACTIVE in either bank) or CLOCK SUSPEND operation (burst/access in progress). CKE is synchronous except after the device enters power-down modes, where CKE becomes asynchronous until after exiting the same mode. The input buffers, including CLK, are disabled during power-down modes to provide low standby power. CKE may be tied HIGH in systems where power-down modes (other than RP# deep power-down) are not required.
The chip select (CS#) input connection provides a signal to enable (registered LOW) and disable (registered HIGH) a command decoder provided in the command execution logic. All commands are masked when CS# is registered HIGH. Further, CS# provides for external bank selection on systems with multiple banks, and CS# can be considered part of the command code; but may not be necessary.
The command input connections RAS#, CAS#, and WE# (along with CS#) define a command that is to be executed by the memory, as described in detail below.
The input/output mask (DQM) connections are used to provide input mask signals for write accesses and an output enable signal for read accesses. Input data is masked when DQM is sampled HIGH during a WRITE cycle. The output buffers are placed in a high impedance (High-Z) state (after a two-clock latency) when DQM is sampled HIGH during a READ cycle. DQML corresponds to data connections DQ0-DQ7 and DQMH corresponds to data connections DQ8-DQ15. DQML and DQMH are considered to be the same state when referenced as DQM.
Address inputs 133 are primarily used to provide address signals. In the illustrated embodiment the memory has 12 lines (A0-A11). Other signals can be provided on the address connections, as described below. The address inputs are sampled during an ACTIVE command (row-address A0-A11) and a READ/WRITE command (column-address A0-A7) to select one location in a respective memory bank. The address inputs are also used to provide an operating code (OpCode) during a LOAD COMMAND REGISTER operation, explained below. Address lines A0-A11 are also used to input mode settings during a LOAD MODE REGISTER operation.
An input reset/power-down (RP#) connection 140 is used for reset and power-down operations. Upon initial device power-up, a 100 µs delay after RP# has transitioned from LOW to HIGH is required in one embodiment for internal device initialization, prior to issuing an executable command. The RP# signal clears the status register, sets the internal state machine (ISM) 132 to an array read mode, and places the device in a deep power-down mode when LOW. During power down, all input connections, including CS# 142, are "Don't Care" and all outputs are placed in a High-Z state. When the RP# signal is equal to a VHH voltage (5V), all protection modes are ignored during WRITE and ERASE.
The RP# signal also allows a device protect bit to be set to 1 (protected) and allows block protect bits of a 16-bit register, at locations 0 and 15, to be set to 0 (unprotected) when brought to VHH. The protect bits are described in more detail below. RP# is held HIGH during all other modes of operation.
Bank address input connections BA0 and BA1 define to which bank an ACTIVE, READ, WRITE, or BLOCK PROTECT command is being applied. The DQ0-DQ15 connections 143 are data bus connections used for bi-directional data communication. Referring to FIG. 1B, a VCCQ connection is used to provide isolated power to the DQ connections to improve noise immunity. In one embodiment, VCCQ=Vcc or 1.8V±0.15V. The VSSQ connection is used to provide isolated ground to the DQs for improved noise immunity. The VCC connection provides a power supply, such as 3V. A ground connection is provided through the Vss connection. Another optional voltage is provided on the VCCP connection 144. The VCCP connection can be tied externally to VCC, and sources current during device initialization, WRITE and ERASE operations. That is, writing or erasing to the memory device can be performed using a VCCP voltage, while all other operations can be performed with a VCC voltage. The Vccp connection is coupled to a high voltage switch/pump circuit 145.
The following sections provide a more detailed description of the operation of the synchronous flash memory. One embodiment of the present invention is a nonvolatile, electrically sector-erasable (Flash), programmable read-only memory containing 67,108,864 bits organized as 4,194,304 words by 16 bits. Other densities are contemplated, and the present invention is not limited to the example density. Each memory bank is organized into four independently erasable blocks (16 total). To ensure that critical firmware is protected from accidental erasure or overwrite, the memory can include sixteen 256K-word hardware and software lockable blocks.
The memory's four-bank architecture supports true concurrent operations. A read access to any bank can occur simultaneously with a background WRITE or ERASE operation to any other bank. The synchronous flash memory has a synchronous interface (all signals are registered on the positive edge of the clock signal, CLK). Read accesses to the memory can be burst oriented. That is, memory accesses start at a selected location and continue for a programmed number of locations in a programmed sequence. Read accesses begin with the registration of an ACTIVE command, followed by a READ command. The address bits registered coincident with the ACTIVE command are used to select the bank and row to be accessed. The address bits registered coincident with the READ command are used to select the starting column location and bank for the burst access.
The synchronous flash memory provides for programmable read burst lengths of 1, 2, 4 or 8 locations, or the full page, with a burst terminate option. Further, the synchronous flash memory uses an internal pipelined architecture to achieve high-speed operation.
The synchronous flash memory can operate in low-power memory systems, such as systems operating on three volts. A deep power-down mode is provided, along with a power-saving standby mode. All inputs and outputs are low voltage transistor-transistor logic (LVTTL) compatible. The synchronous flash memory offers substantial advances in Flash operating performance, including the ability to synchronously burst data at a high data rate with automatic column address generation and the capability to randomly change column addresses on each clock cycle during a burst access.
In general, the synchronous flash memory is configured similarly to a multi-bank DRAM that operates at low voltage and includes a synchronous interface. Each of the banks is organized into rows and columns. Prior to normal operation, the synchronous flash memory is initialized.
The following sections provide detailed information covering device initialization, register definition, command descriptions and device operation.
The synchronous flash is powered up and initialized in a predefined manner. After power is applied to VCC, VCCQ and VCCP (simultaneously), and the clock signal is stable, RP# 140 is brought from a LOW state to a HIGH state. A delay, such as a 100 µs delay, is needed after RP# transitions HIGH in order to complete internal device initialization. After the delay time has passed, the memory is placed in an array read mode and is ready for Mode Register programming or an executable command. After initial programming of a non-volatile mode register 147 (NVMode Register), the contents are automatically loaded into a volatile Mode Register 148 during the initialization. The device will power up in a programmed state and will not require reloading of the non-volatile mode register 147 prior to issuing operational commands. This is explained in greater detail below.
The Mode Register 148 is used to define the specific mode of operation of the synchronous flash memory. This definition includes the selection of a burst length, a burst type, a CAS latency, and an operating mode, as shown in FIG. 2. The Mode Register is programmed via a LOAD MODE REGISTER command and retains stored information until it is reprogrammed. The contents of the Mode Register may be copied into the NVMode Register 147. The NVMode Register settings automatically load the Mode Register 148 during initialization. Details on ERASE NVMODE REGISTER and WRITE NVMODE REGISTER command sequences are provided below. Those skilled in the art will recognize that an SDRAM requires that a mode register be externally loaded during each initialization operation. The present invention allows a default mode to be stored in the NV mode register 147.
The contents of the NV mode register are then copied into a volatile mode register 148 for access during memory operations.
Mode Register bits M0-M2 specify a burst length, M3 specifies a burst type (sequential or interleaved), M4-M6 specify a CAS latency, M7 and M8 specify an operating mode, M9 is set to one, and M10 and M11 are reserved in this embodiment. Because WRITE bursts are not currently implemented, M9 is set to a logic one and write accesses are single location (non-burst) accesses. The Mode Register must be loaded when all banks are idle, and the controller must wait the specified time before initiating a subsequent operation.
Read accesses to the synchronous flash memory can be burst oriented, with the burst length being programmable, as shown in Table 1. The burst length determines the maximum number of column locations that can be automatically accessed for a given READ command. Burst lengths of 1, 2, 4, or 8 locations are available for both sequential and interleaved burst types, and a full-page burst is available for the sequential type. The full-page burst can be used in conjunction with the BURST TERMINATE command to generate arbitrary burst lengths; that is, a burst can be selectively terminated to provide custom-length bursts. When a READ command is issued, a block of columns equal to the burst length is effectively selected. All accesses for that burst take place within this block, meaning that the burst will wrap within the block if a boundary is reached. The block is uniquely selected by A1-A7 when the burst length is set to two, by A2-A7 when the burst length is set to four, and by A3-A7 when the burst length is set to eight. The remaining (least significant) address bit(s) are used to select the starting location within the block.
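The wrap-within-block behavior described above can be sketched in code. The following Python helper is an illustration only, not part of the embodiment (the name burst_order is hypothetical); it reproduces the access orders tabulated in Table 1, with the interleaved order obtained by XORing the burst counter with the starting offset:

```python
def burst_order(start_col, burst_len, interleaved=False):
    """Column access order within a burst (illustrative sketch).

    The block is the aligned group of burst_len columns containing
    start_col; all accesses wrap within that block.
    """
    base = start_col & ~(burst_len - 1)   # aligned base address of the block
    offset = start_col & (burst_len - 1)  # starting location within the block
    if interleaved:
        # Interleaved type: XOR the counter with the starting offset.
        return [base + (offset ^ i) for i in range(burst_len)]
    # Sequential type: increment the counter and wrap within the block.
    return [base + ((offset + i) % burst_len) for i in range(burst_len)]
```

For example, a burst of four starting at column 1 yields the sequential order 1-2-3-0 and the interleaved order 1-0-3-2.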
Full-page bursts wrap within the page if the boundary is reached. Accesses within a given burst may be programmed to be either sequential or interleaved; this is referred to as the burst type and is selected via bit M3. The ordering of accesses within a burst is determined by the burst length, the burst type and the starting column address, as shown in Table 1.

TABLE 1
BURST DEFINITION

                               Order of Accesses Within a Burst
Burst Length  Starting Column  Type = Sequential    Type = Interleaved
              Address
2             A0 = 0           0-1                  0-1
              A0 = 1           1-0                  1-0
4             A1 A0 = 00       0-1-2-3              0-1-2-3
              A1 A0 = 01       1-2-3-0              1-0-3-2
              A1 A0 = 10       2-3-0-1              2-3-0-1
              A1 A0 = 11       3-0-1-2              3-2-1-0
8             A2 A1 A0 = 000   0-1-2-3-4-5-6-7      0-1-2-3-4-5-6-7
              A2 A1 A0 = 001   1-2-3-4-5-6-7-0      1-0-3-2-5-4-7-6
              A2 A1 A0 = 010   2-3-4-5-6-7-0-1      2-3-0-1-6-7-4-5
              A2 A1 A0 = 011   3-4-5-6-7-0-1-2      3-2-1-0-7-6-5-4
              A2 A1 A0 = 100   4-5-6-7-0-1-2-3      4-5-6-7-0-1-2-3
              A2 A1 A0 = 101   5-6-7-0-1-2-3-4      5-4-7-6-1-0-3-2
              A2 A1 A0 = 110   6-7-0-1-2-3-4-5      6-7-4-5-2-3-0-1
              A2 A1 A0 = 111   7-0-1-2-3-4-5-6      7-6-5-4-3-2-1-0
Full Page     n = A0-A7        Cn, Cn+1, Cn+2,      Not supported
(256)         (location 0-255) Cn+3, Cn+4 ...
                               Cn-1, Cn ...

Column Address Strobe (CAS) latency is a delay, in clock cycles, between the registration of a READ command and the availability of the first piece of output data on the DQ connections. The latency can be set to one, two or three clock cycles.
For example, if a READ command is registered at clock edge n, and the latency is m clocks, the data will be available by clock edge n+m. The DQ connections will start driving data as a result of the clock edge one cycle earlier (n+m-1) and, provided that the relevant access times are met, the data will be valid by clock edge n+m. For example, assuming that the clock cycle time is such that all relevant access times are met, if a READ command is registered at T0, and the latency is programmed to two clocks, the DQs will start driving after T1 and the data will be valid by T2, as shown in FIG. 3. FIG. 3 illustrates example operating frequencies at which different clock latency settings can be used. The normal operating mode is selected by setting M7 and M8 to zero, and the programmed burst length applies to READ bursts.
The following truth tables provide more detail on the operation commands of an embodiment of the memory of the present invention. An explanation of the commands is provided following Truth Table 2.

TRUTH TABLE 1
Interface Commands and DQM Operation

NAME (FUNCTION)                                  CS#  RAS#  CAS#  WE#  DQM  ADDR      DQs
COMMAND INHIBIT (NOP)                            H    X     X     X    X    X         X
NO OPERATION (NOP)                               L    H     H     H    X    X         X
ACTIVE (select bank and activate row)            L    L     H     H    X    Bank/Row  X
READ (select bank, column and start READ burst)  L    H     L     H    X    Bank/Col  X
WRITE (select bank, column and start WRITE)      L    H     L     L    X    Bank/Col  Valid
BURST TERMINATE                                  L    H     H     L    X    X         Active
ACTIVE TERMINATE                                 L    L     H     L    X    X         X
LOAD COMMAND REGISTER                            L    L     L     H    X    Com Code  X
LOAD MODE REGISTER                               L    L     L     L    X    Op Code   X
Write Enable/Output Enable                       -    -     -     -    L    -         Active
Write Inhibit/Output High-Z                      -    -     -     -    H    -         High-Z

TRUTH TABLE 2
Flash Memory Command Sequences

                          1st CYCLE                   2nd CYCLE                      3rd CYCLE
Operation                 CMD  ADDR  BANK  DQ  RP#    CMD     ADDR  BANK  DQ  RP#    CMD    ADDR  BANK  DQ   RP#
READ DEVICE Config.       LCR  90H   Bank  X   H      ACTIVE  Row   Bank  X   H      READ   CA    Bank  X    H
READ Status Register      LCR  70H   X     X   H      ACTIVE  X     X     X   H      READ   X     X     X    H
CLEAR Status Register     LCR  50H   X     X   H
ERASE SETUP/Confirm       LCR  20H   Bank  X   H      ACTIVE  Row   Bank  X   H      WRITE  X     Bank  D0H  H/VHH
WRITE SETUP/WRITE         LCR  40H   Bank  X   H      ACTIVE  Row   Bank  X   H      WRITE  Col   Bank  DIN  H/VHH
Protect BLOCK/Confirm     LCR  60H   Bank  X   H      ACTIVE  Row   Bank  X   H      WRITE  X     Bank  01H  H/VHH
Protect DEVICE/Confirm    LCR  60H   Bank  X   H      ACTIVE  X     Bank  X   H      WRITE  X     Bank  F1H  VHH
Unprotect BLOCKS/Confirm  LCR  60H   Bank  X   H      ACTIVE  X     Bank  X   H      WRITE  X     Bank  D0H  H/VHH
ERASE NVmode Register     LCR  30H   Bank  X   H      ACTIVE  X     Bank  X   H      WRITE  X     Bank  C0H  H
WRITE NVmode Register     LCR  A0H   Bank  X   H      ACTIVE  X     Bank  X   H      WRITE  X     Bank  X    H

The COMMAND INHIBIT function prevents new commands from being executed by the synchronous flash memory, regardless of whether the CLK signal is enabled. The synchronous flash memory is effectively deselected, but operations already in progress are not affected.
The NO OPERATION (NOP) command is used to perform a NOP to the synchronous flash memory that is selected (CS# is LOW). This prevents unwanted commands from being registered during idle or wait states, and operations already in progress are not affected.
The mode register data is loaded via inputs A0-A11. The LOAD MODE REGISTER command can only be issued when all array banks are idle, and a subsequent executable command cannot be issued until a predetermined time delay (MRD) is met. The data in the NVMode Register 147 is automatically loaded into the Mode Register 148 upon power-up initialization and is the default data unless dynamically changed with the LOAD MODE REGISTER command.
An ACTIVE command is used to open (or activate) a row in a particular array bank for a subsequent access. The value on the BA0, BA1 inputs selects the bank, and the address provided on inputs A0-A11 selects the row. This row remains active for accesses until the next ACTIVE command, power-down or RESET.
The READ command is used to initiate a burst read access to an active row. The value on the BA0, BA1 inputs selects the bank, and the address provided on inputs A0-A7 selects the starting column location. Read data appears on the DQs subject to the logic level on the data mask (DQM) input that was present two clocks earlier.
If a given DQM signal was registered HIGH, the corresponding DQs will be High-Z (high impedance) two clocks later; if the DQM signal was registered LOW, the DQs will provide valid data. Thus, the DQM input can be used to mask output data during a read operation.A WRITE command is used to initiate a single-location write access on an active row. A WRITE command must be preceded by a WRITE SETUP command. The value on the BA0, BA1 inputs selects the bank, and the address provided on inputs A0-A7 selects a column location. Input data appearing on the DQs is written to the memory array, subject to the DQM input logic level appearing coincident with the data. If a given DQM signal is registered LOW, the corresponding data will be written to memory; if the DQM signal is registered HIGH, the corresponding data inputs will be ignored, and a WRITE will not be executed to that word/column location. A WRITE command with DQM HIGH is considered a NOP.An ACTIVE TERMINATE command is not required for synchronous flash memories, but can be provided to terminate a read in a manner similar to the SDRAM PRECHARGE command. The ACTIVE TERMINATE command can be issued to terminate a BURST READ in progress, and may or may not be bank specific.A BURST TERMINATE command is used to truncate either fixed-length or full-page bursts. The most recently registered READ command prior to the BURST TERMINATE command will be truncated. BURST TERMINATE is not bank specific.The Load Command Register operation is used to initiate flash memory control commands to the Command Execution Logic (CEL) 130. The CEL receives and interprets commands to the device. These commands control the operation of the Internal State Machine 132 and the read path (i.e., memory array 102, ID Register 136 or Status Register 134).Before any READ or WRITE commands can be issued to a bank within the synchronous flash memory, a row in that bank must be "opened." 
This is accomplished via the ACTIVE command (defined by CS#, WE#, RAS#, CAS#), which selects both the bank and the row to be activated, see FIG. 4.
After opening a row (issuing an ACTIVE command), a READ or WRITE command may be issued to that row, subject to a time period (tRCD) specification. tRCD (MIN) should be divided by the clock period and rounded up to the next whole number to determine the earliest clock edge after the ACTIVE command on which a READ or WRITE command can be entered. For example, a tRCD specification of 30 ns with a 90 MHz clock (11.11 ns period) results in 2.7 clocks, which is rounded to 3. This is reflected in FIG. 5, which covers any case where 2 < tRCD (MIN)/tCK ≤ 3. (The same procedure is used to convert other specification limits from time units to clock cycles.)
A subsequent ACTIVE command to a different row in the same bank can be issued without having to close a previous active row, provided the minimum time interval between successive ACTIVE commands to the same bank, defined by tRC, is met.
A subsequent ACTIVE command to another bank can be issued while the first bank is being accessed, which results in a reduction of total row access overhead. The minimum time interval between successive ACTIVE commands to different banks is defined by a time period tRRD.
READ bursts are initiated with a READ command (defined by CS#, WE#, RAS#, CAS#), as shown in FIG. 6. The starting column and bank addresses are provided with the READ command. During READ bursts, the valid data-out element from the starting column address will be available following the CAS latency after the READ command. Each subsequent data-out element will be valid by the next positive clock edge. Upon completion of a burst, assuming no other commands have been initiated, the DQs will go to a High-Z state. A full page burst will continue until terminated. (At the end of the page, it will wrap to column 0 and continue.)
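The tRCD-to-clock-cycle conversion described above amounts to a ceiling division. A minimal Python sketch (spec_to_clocks is a hypothetical helper name, not part of the embodiment):

```python
import math

def spec_to_clocks(t_spec_ns, t_ck_ns):
    """Clock cycles needed to satisfy a timing specification: divide
    the spec by the clock period and round up to the next whole number."""
    return math.ceil(t_spec_ns / t_ck_ns)

# A tRCD (MIN) of 30 ns with a 90 MHz clock (11.11 ns period) gives
# 30 / 11.11 = 2.7, rounded up to 3 clocks, as reflected in FIG. 5.
```

The same helper converts the other specification limits (such as tRC and tRRD) from time units to clock cycles.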
Data from any READ burst may be truncated with a subsequent READ command, and data from a fixed-length READ burst may be immediately followed by data from a subsequent READ command. In either case, a continuous flow of data can be maintained. The first data element from the new burst follows either the last element of a completed burst, or the last desired data element of a longer burst that is being truncated. The new READ command should be issued x cycles before the clock edge at which the last desired data element is valid, where x equals the CAS latency minus one. This is shown in FIG. 7 for CAS latencies of one, two and three; data element n+3 is either the last of a burst of four, or the last desired of a longer burst. The synchronous flash memory uses a pipelined architecture and therefore does not require the 2n rule associated with a prefetch architecture. A READ command can be initiated on any clock cycle following a previous READ command. Full-speed, random read accesses within a page can be performed as shown in FIG. 8, or each subsequent READ may be performed to a different bank.Data from any READ burst may be truncated with a subsequent WRITE command (WRITE commands must be preceded by WRITE SETUP), and data from a fixed-length READ burst may be immediately followed by data from a subsequent WRITE command (subject to bus turnaround limitations). The WRITE may be initiated on the clock edge immediately following the last (or last desired) data element from the READ burst, provided that I/O contention can be avoided. In a given system design, there may be the possibility that the device driving the input data would go Low-Z before the synchronous flash memory DQs go High-Z. In this case, at least a single-cycle delay should occur between the last read data and the WRITE command.The DQM input is used to avoid I/O contention as shown in FIG. 9. 
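The "x cycles before the last desired data element, where x equals the CAS latency minus one" rule above can be expressed as a one-line helper. This is an illustration only (truncating_command_edge is a hypothetical name; clock edges are simply numbered integers):

```python
def truncating_command_edge(last_data_edge, cas_latency):
    """Clock edge on which to register the truncating READ (or WRITE)
    command so that the last desired data element, valid at
    last_data_edge, still appears: x = CAS latency - 1 cycles earlier."""
    return last_data_edge - (cas_latency - 1)
```

For example, with a CAS latency of three, truncating after a data element valid at edge 7 means registering the new command at edge 5; with a CAS latency of one, the command is registered at edge 7 itself.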
The DQM signal must be asserted (HIGH) at least two clocks prior to the WRITE command (DQM latency is two clocks for output buffers) to suppress data-out from the READ. Once the WRITE command is registered, the DQs will go High-Z (or remain High-Z) regardless of the state of the DQM signal. The DQM signal must be de-asserted prior to the WRITE command (DQM latency is zero clocks for input buffers) to ensure that the written data is not masked. FIG. 9 shows the case where the clock frequency allows for bus contention to be avoided without adding a NOP cycle.
A fixed-length or full-page READ burst can be truncated with either ACTIVE TERMINATE (may or may not be bank specific) or BURST TERMINATE (not bank specific) commands. The ACTIVE TERMINATE or BURST TERMINATE command should be issued x cycles before the clock edge at which the last desired data element is valid, where x equals the CAS latency minus one. This is shown in FIG. 10 for each possible CAS latency; data element n+3 is the last desired data element of a burst of four or the last desired of a longer burst.
A single-location WRITE is initiated with a WRITE command (defined by CS#, WE#, RAS#, CAS#) as shown in FIG. 11. The starting column and bank addresses are provided with the WRITE command. Once a WRITE command is registered, a READ command can be executed as defined by Truth Tables 4 and 5. An example is shown in FIG. 12. During a WRITE, the valid data-in is registered coincident with the WRITE command.
Unlike SDRAM, synchronous flash does not require a PRECHARGE command to deactivate the open row in a particular bank or the open rows in all banks. The ACTIVE TERMINATE command is similar to the BURST TERMINATE command; however, ACTIVE TERMINATE may or may not be bank specific. Asserting input A10 HIGH during an ACTIVE TERMINATE command will terminate a BURST READ in any bank. When A10 is low during an ACTIVE TERMINATE command, BA0 and BA1 will determine which bank will undergo a terminate operation.
ACTIVE TERMINATE is considered a NOP for banks not addressed by A10, BA0, BA1.
Power-down occurs if clock enable (CKE) is registered LOW coincident with a NOP or COMMAND INHIBIT, when no accesses are in progress. Entering power-down deactivates the input and output buffers (excluding CKE) after internal state machine operations (including WRITE operations) are completed, for power savings while in standby.
The power-down state is exited by registering a NOP or COMMAND INHIBIT and CKE HIGH at the desired clock edge (meeting tCKS). See FIG. 13 for an example power-down operation.
A clock suspend mode occurs when a column access/burst is in progress and CKE is registered LOW. In the clock suspend mode, an internal clock is deactivated, "freezing" the synchronous logic. For each positive clock edge on which CKE is sampled LOW, the next internal positive clock edge is suspended. Any command or data present on the input pins at the time of a suspended internal clock edge is ignored, any data present on the DQ pins will remain driven, and burst counters are not incremented, as long as the clock is suspended (see example in FIG. 14). Clock suspend mode is exited by registering CKE HIGH; the internal clock and related operation will resume on the subsequent positive clock edge.
The burst read/single write mode is a default mode in one embodiment. All WRITE commands result in the access of a single column location (burst of one), while READ commands access columns according to the programmed burst length and sequence.
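As an illustration only, the CKE behavior described above (and summarized in Truth Table 3) can be modeled as a small decision function; cke_action and the state strings used here are hypothetical names, not part of the embodiment:

```python
def cke_action(cke_prev, cke_now, state):
    """Action taken for a CKE transition, mirroring Truth Table 3."""
    if cke_prev == "L" and cke_now == "L":
        return "Maintain " + state        # stay in POWER-DOWN or CLOCK SUSPEND
    if cke_prev == "L" and cke_now == "H":
        return "Exit " + state            # exit at the desired clock edge
    if cke_prev == "H" and cke_now == "L":
        # Entering a low-power state: which one depends on current activity.
        if state == "All Banks Idle":
            return "POWER-DOWN Entry"     # registered with COMMAND INHIBIT or NOP
        return "CLOCK SUSPEND Entry"      # a read or write access is in progress
    return "See Truth Table 4"            # CKE held HIGH: normal operation
```

For instance, sampling CKE LOW on two consecutive edges while suspended simply maintains the clock suspend state.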
The following Truth Table 3 illustrates memory operation using the CKE signal.

TRUTH TABLE 3 — CKE

  CKEn-1  CKEn  CURRENT STATE       COMMANDn                ACTIONn
  L       L     POWER-DOWN          X                       Maintain POWER-DOWN
  L       L     CLOCK SUSPEND       X                       Maintain CLOCK SUSPEND
  L       H     POWER-DOWN          COMMAND INHIBIT or NOP  Exit POWER-DOWN
  L       H     CLOCK SUSPEND       X                       Exit CLOCK SUSPEND
  H       L     All Banks Idle      COMMAND INHIBIT or NOP  POWER-DOWN Entry
  H       L     Reading or Writing  VALID                   CLOCK SUSPEND Entry
  H       H                                                 See Truth Table 4

TRUTH TABLE 4 — Current State Bank n, Command to Bank n

  CURRENT STATE  CS#  RAS#  CAS#  WE#  COMMAND/ACTION
  Any            H    X     X     X    COMMAND INHIBIT (NOP/continue previous operation)
  Any            L    H     H     H    NO OPERATION (NOP/continue previous operation)
  Idle           L    L     H     H    ACTIVE (Select and activate row)
  Idle           L    L     L     H    LOAD COMMAND REGISTER
  Idle           L    L     L     L    LOAD MODE REGISTER
  Idle           L    L     H     L    ACTIVE TERMINATE
  Row Active     L    H     L     H    READ (Select column and start READ burst)
  Row Active     L    H     L     L    WRITE (Select column and start WRITE)
  Row Active     L    L     H     L    ACTIVE TERMINATE
  Row Active     L    L     L     H    LOAD COMMAND REGISTER
  READ           L    H     L     H    READ (Select column and start new READ burst)
  READ           L    H     L     L    WRITE (Select column and start WRITE)
  READ           L    L     H     L    ACTIVE TERMINATE
  READ           L    H     H     L    BURST TERMINATE
  READ           L    L     L     H    LOAD COMMAND REGISTER
  WRITE          L    H     L     H    READ (Select column and start new READ burst)
  WRITE          L    L     L     H    LOAD COMMAND REGISTER

TRUTH TABLE 5 — Current State Bank n, Command to Bank m

  CURRENT STATE            CS#  RAS#  CAS#  WE#  COMMAND/ACTION
  Any                      H    X     X     X    COMMAND INHIBIT (NOP/continue previous operation)
  Any                      L    H     H     H    NO OPERATION (NOP/continue previous operation)
  Idle                     X    X     X     X    Any command otherwise allowed to Bank m
  Row Activating, Active,  L    L     H     H    ACTIVE (Select and activate row)
  or Active Terminate      L    H     L     H    READ (Select column and start READ burst)
                           L    H     L     L    WRITE (Select column and start WRITE)
                           L    L     H     L    ACTIVE TERMINATE
                           L    L     L     H    LOAD COMMAND REGISTER
  READ                     L    L     H     H    ACTIVE (Select and activate row)
                           L    H     L     H    READ (Select column and start new READ burst)
                           L    H     L     L    WRITE (Select column and start WRITE)
                           L    L     H     L    ACTIVE TERMINATE
                           L    L     L     H    LOAD COMMAND REGISTER
  WRITE                    L    L     H     H    ACTIVE (Select and activate row)
                           L    H     L     H    READ (Select column and start READ burst)
                           L    L     H     L    ACTIVE TERMINATE
                           L    H     H     L    BURST TERMINATE
                           L    L     L     H    LOAD COMMAND REGISTER

Function Description

The synchronous flash memory incorporates a number of features to make it ideally suited for code storage and execute-in-place applications on an SDRAM bus. The memory array is segmented into individual erase blocks. Each block may be erased without affecting data stored in other blocks. These memory blocks are read, written and erased by issuing commands to the command execution logic 130 (CEL). The CEL controls the operation of the Internal State Machine 132 (ISM), which completely controls all ERASE NVMODE REGISTER, WRITE NVMODE REGISTER, WRITE, BLOCK ERASE, BLOCK PROTECT, DEVICE PROTECT, UNPROTECT ALL BLOCKS and VERIFY operations. The ISM 132 protects each memory location from overerasure and optimizes each memory location for maximum data retention. In addition, the ISM greatly simplifies the control necessary for writing the device in-system or in an external programmer.

The synchronous flash memory is organized into 16 independently erasable memory blocks that allow portions of the memory to be erased without affecting the rest of the memory data. Any block may be hardware-protected against inadvertent erasure or writes.
A protected block requires that the RP# pin be driven to VHH (a relatively high voltage) before being modified. The 256K-word blocks at locations 0 and 15 can have additional hardware protection. Once a PROTECT BLOCK command has been executed to these blocks, an UNPROTECT ALL BLOCKS command will unlock all blocks except the blocks at locations 0 and 15, unless the RP# pin is at VHH. This provides additional security for critical code during in-system firmware updates, should an unintentional power disruption or system reset occur.

Power-up initialization, ERASE, WRITE and PROTECT timings are simplified by using an ISM to control all programming algorithms in the memory array. The ISM ensures protection against overerasure and optimizes the write margin to each cell. During WRITE operations, the ISM automatically increments and monitors WRITE attempts, verifies write margin on each memory cell and updates the ISM Status Register. When a BLOCK ERASE operation is performed, the ISM automatically overwrites the entire addressed block (eliminating overerasure), increments and monitors ERASE attempts and sets bits in the ISM Status Register.

The 8-bit ISM Status Register 134 allows an external processor 200 to monitor the status of the ISM during WRITE, ERASE and PROTECT operations. One bit of the 8-bit Status Register (SR7) is set and cleared entirely by the ISM. This bit indicates whether the ISM is busy with an ERASE, WRITE or PROTECT task. Additional error information is set in three other bits (SR3, SR4 and SR5): write and protect block error, erase and unprotect all blocks error, and device protection error. Status register bits SR0, SR1 and SR2 provide details on the ISM operation underway. The user can monitor whether a device-level or bank-level ISM operation (including which bank is under ISM control) is underway. The error bits (SR3-SR5) must be cleared by the host system.
The status register is described in further detail below with reference to Table 2.

The CEL 130 receives and interprets commands to the device. These commands control the operation of the ISM and the read path (i.e., memory array, device configuration or status register). Commands may be issued to the CEL while the ISM is active.

To allow for maximum power conservation, the synchronous flash features a very low current, deep power-down mode. To enter this mode, the RP# pin 140 (reset/power-down) is taken to VSS±0.2V. To prevent an inadvertent RESET, RP# must be held at Vss for 100 ns prior to the device entering the reset mode. With RP# held at Vss, the device will enter the deep power-down mode. After the device enters the deep power-down mode, a transition from LOW to HIGH on RP# will result in a device power-up initialize sequence as outlined herein. Transitioning RP# from LOW to HIGH after entering the reset mode but prior to entering the deep power-down mode requires a 1 µs delay prior to issuing an executable command. When the device enters the deep power-down mode, all buffers excluding the RP# buffer are disabled and the current draw is low, for example, a maximum of 50 µA at 3.3V VCC. The input to RP# must remain at Vss during deep power-down. Entering the RESET mode clears the Status Register 134 and sets the ISM 132 to the array read mode.

The synchronous flash memory array architecture is designed to allow sectors to be erased without disturbing the rest of the array. The array is divided into 16 addressable "blocks" that are independently erasable. By erasing blocks rather than the entire array, the total device endurance is enhanced, as is system flexibility. Only the ERASE and BLOCK PROTECT functions are block oriented. The 16 addressable blocks are equally divided into four banks 104, 106, 108 and 110 of four blocks each. The four banks have simultaneous read-while-write functionality.
An ISM WRITE or ERASE operation to any bank can occur simultaneously with a READ operation to any other bank. The Status Register 134 may be polled to determine which bank is under ISM operation. The synchronous flash memory has a single background operation ISM to control power-up initialization, ERASE, WRITE, and PROTECT operations. Only one ISM operation can occur at any time; however, certain other commands, including READ operations, can be performed while the ISM operation is taking place. An operational command controlled by the ISM is defined as either a bank-level operation or a device-level operation.

WRITE and ERASE are bank-level ISM operations. After an ISM bank operation has been initiated, a READ to any location in the bank may output invalid data, whereas a READ to any other bank will read the array. A READ STATUS REGISTER command will output the contents of the Status Register 134. The ISM status bit will indicate when the ISM operation is complete (SR7=1). When the ISM operation is complete, the bank will automatically enter the array read mode.

ERASE NVMODE REGISTER, WRITE NVMODE REGISTER, BLOCK PROTECT, DEVICE PROTECT, and UNPROTECT ALL BLOCKS are device-level ISM operations. Once an ISM device-level operation has been initiated, a READ to any bank will output the contents of the array. A READ STATUS REGISTER command may be issued to determine completion of the ISM operation. When SR7=1, the ISM operation will be complete and a subsequent ISM operation may be initiated.

Any block may be protected from unintentional ERASE or WRITE with a hardware circuit that requires the RP# pin be driven to VHH before a WRITE or ERASE is commenced, as explained below. Any block may be hardware-protected to provide extra security for the most sensitive portions of the firmware. During a WRITE or ERASE of a hardware-protected block, the RP# pin must be held at VHH until the WRITE or ERASE is completed.
Any WRITE or ERASE attempt on a protected block without RP#=VHH will be prevented and will result in a write or erase error. The blocks at locations 0 and 15 can have additional hardware protection to prevent an inadvertent WRITE or ERASE operation. In this embodiment, these blocks cannot be software-unlocked through an UNPROTECT ALL BLOCKS command unless RP#=VHH. The protection status of any block may be checked by reading its block protect bit with a READ STATUS REGISTER command. Further, to protect a block, a three-cycle command sequence must be issued with the block address.

The synchronous flash memory features three different types of READs. Depending on the mode, a READ operation will produce data from the memory array, the status register, or one of the device configuration registers. A READ to a device configuration register or the Status Register must be preceded by an LCR-ACTIVE cycle, and the burst length of data out will be defined by the mode register settings. A subsequent READ, or a READ not preceded by an LCR-ACTIVE cycle, will read the array. However, several differences exist and are described in the following section.

A READ command to any bank outputs the contents of the memory array. While a WRITE or ERASE ISM operation is taking place, a READ to any location in the bank under ISM control may output invalid data. Upon exiting a RESET operation, the device will automatically enter the array read mode.

Performing a READ of the Status Register 134 requires the same input sequencing as when reading the array, except that an LCR READ STATUS REGISTER (70H) cycle must precede the ACTIVE READ cycles. The burst length of the Status Register data-out is defined by the Mode Register 148. The Status Register contents are updated and latched on the next positive clock edge subject to CAS latencies.
The device will automatically enter the array read mode for subsequent READs.

Reading any of the Device Configuration Registers 136 requires the same input sequencing as when reading the Status Register, except that specific addresses must be issued. WE# must be HIGH, and DQM and CS# must be LOW. To read the manufacturer compatibility ID, addresses must be at 000000H, and to read the device ID, addresses must be at 000001H. Any of the block protect bits is read at the third address location within each erase block (xx0002H), while the device protect bit is read from location 000003H.

The DQ pins are used to input data to the array. The address pins are used either to specify an address location or to input a command to the CEL during the LOAD COMMAND REGISTER cycle. A command input issues an 8-bit command to the CEL to control the operation mode of the device. A WRITE is used to input data to the memory array. The following section describes both types of inputs.

To perform a command input, DQM must be LOW, and CS# and WE# must be LOW. Address pins or DQ pins are used to input commands. Address pins not used for input commands are "Don't Care" and must be held stable. The 8-bit command is input on DQ0-DQ7 or A0-A7 and is latched on the positive clock edge.

A WRITE to the memory array sets the desired bits to logic 0s but cannot change a given bit to a logic 1 from a logic 0. Setting any bits to a logic 1 requires that the entire block be erased. To perform a WRITE, DQM must be LOW, CS# and WE# must be LOW, and VCCP must be tied to VCC. Writing to a protected block also requires that the RP# pin be brought to VHH. A0-A11 provide the address to be written, while the data to be written to the array is input on the DQ pins. The data and addresses are latched on the rising edge of the clock.
A WRITE must be preceded by a WRITE SETUP command.

To simplify the writing of the memory blocks, the synchronous flash incorporates an ISM that controls all internal algorithms for the WRITE and ERASE cycles. An 8-bit command set is used to control the device. See Truth Tables 1 and 2 for a list of the valid commands.

The 8-bit ISM Status Register 134 (see Table 2) is polled to check for ERASE NVMODE REGISTER, WRITE NVMODE REGISTER, WRITE, ERASE, BLOCK PROTECT, DEVICE PROTECT or UNPROTECT ALL BLOCKS completion or any related errors. Completion of an ISM operation can be monitored by issuing a READ STATUS REGISTER (70H) command. The contents of the Status Register will be output to DQ0-DQ7 and updated on the next positive clock edge (subject to CAS latencies) for a fixed burst length as defined by the mode register settings. The ISM operation will be complete when SR7=1. All of the defined bits are set by the ISM, but only the ISM status bit is reset by the ISM. The erase/unprotect block, write/protect block, and device protect status bits must be cleared using a CLEAR STATUS REGISTER (50H) command. This allows the user to choose when to poll and clear the Status Register. For example, a host system may perform multiple WRITE operations before checking the Status Register instead of checking after each individual WRITE. Asserting the RP# signal or powering down the device will also clear the Status Register.

TABLE 2 — STATUS REGISTER

  SR7  ISM STATUS
       1 = Ready, 0 = Busy
       The ISMS bit displays the active status of the state machine when
       performing WRITE or BLOCK ERASE. The controlling logic polls this
       bit to determine when the erase and write status bits are valid.

  SR6  RESERVED
       Reserved for future use.

  SR5  ERASE/UNPROTECT BLOCK STATUS
       1 = BLOCK ERASE or BLOCK UNPROTECT error
       0 = Successful BLOCK ERASE or UNPROTECT
       ES is set to 1 after the maximum number of ERASE cycles is executed
       by the ISM without a successful verify. This bit is also set to 1
       if a BLOCK UNPROTECT operation is unsuccessful. ES is only cleared
       by a CLEAR STATUS REGISTER command or by a RESET.

  SR4  WRITE/PROTECT BLOCK STATUS
       1 = WRITE or BLOCK PROTECT error
       0 = Successful WRITE or BLOCK PROTECT
       WS is set to 1 after the maximum number of WRITE cycles is executed
       by the ISM without a successful verify. This bit is also set to 1
       if a BLOCK or DEVICE PROTECT operation is unsuccessful. WS is only
       cleared by a CLEAR STATUS REGISTER command or by a RESET.

  SR2  BANKA1 ISM STATUS
  SR1  BANKA0 ISM STATUS
       When SR0 = 0, the bank under ISM control can be decoded from BA0,
       BA1: [0,0] Bank0; [0,1] Bank1; [1,0] Bank2; [1,1] Bank3.

  SR3  DEVICE PROTECT STATUS
       1 = Device protected, invalid operation attempted
       0 = Device unprotected or RP# condition met
       DPS is set to 1 if an invalid WRITE, ERASE, PROTECT BLOCK, PROTECT
       DEVICE or UNPROTECT ALL BLOCKS is attempted. After one of these
       commands is issued, the condition of RP#, the block protect bit and
       the device protect bit are compared to determine if the desired
       operation is allowed. Must be cleared by CLEAR STATUS REGISTER or
       by a RESET.

  SR0  DEVICE/BANK ISM STATUS
       1 = Device-level ISM operation
       0 = Bank-level ISM operation
       DBS is set to 1 if the ISM operation is a device-level operation. A
       valid READ to any bank of the array can immediately follow the
       registration of a device-level ISM WRITE operation. When DBS is set
       to 0, the ISM operation is a bank-level operation. A READ to the
       bank under ISM control may result in invalid data. SR1 and SR2 can
       be decoded to determine which bank is under ISM control.

The device ID, manufacturer compatibility ID, device protection status and block protect status can all be read by issuing a READ DEVICE CONFIGURATION (90H) command. To read the desired register, a specific address must be asserted.
See Table 3 for more details on the various device configuration registers 136.

TABLE 3 — DEVICE CONFIGURATION

  DEVICE CONFIGURATION        ADDRESS   DATA     CONDITION
  Manufacturer Compatibility  000000H   2CH      Manufacturer compatibility read
  Device ID                   000001H   D3H      Device ID read
  Block Protect Bit           xx0002H   DQ0 = 1  Block protected
                              xx0002H   DQ0 = 0  Block unprotected
  Device Protect Bit          000003H   DQ0 = 1  Block protect modification prevented
                              000003H   DQ0 = 0  Block protect modification enabled

Commands can be issued to bring the device into different operational modes. Each mode has specific operations that can be performed while in that mode. Several modes require a sequence of commands to be written before they are reached. The following section describes the properties of each mode, and Truth Tables 1 and 2 list all command sequences required to perform the desired operation. Read-while-write functionality allows a background write or erase operation to be performed on any bank while simultaneously reading any other bank. For a write operation, the LCR-ACTIVE-WRITE command sequences in Truth Table 2 must be completed on consecutive clock cycles. However, to simplify a synchronous flash controller operation, an unlimited number of NOPs or COMMAND INHIBITs can be issued throughout the command sequence. For additional protection, these command sequences must have the same bank address for the three cycles.
If the bank address changes during the LCR-ACTIVE-WRITE command sequence, or if the command sequences are not consecutive (other than NOPs and COMMAND INHIBITs, which are permitted), the write and erase status bits (SR4 and SR5) will be set and the operation prohibited.

Upon power-up and prior to issuing any operational commands to the device, the synchronous flash is initialized. After power is applied to VCC, VCCQ and VCCP (simultaneously), and the clock is stable, RP# is transitioned from LOW to HIGH. A delay (in one embodiment a 100 µs delay) is required after RP# transitions HIGH in order to complete internal device initialization. The device is in the array read mode at the completion of device initialization, and an executable command can be issued to the device.

To read the device ID, manufacturer compatibility ID, device protect bit and each of the block protect bits, a READ DEVICE CONFIGURATION (90H) command is issued. While in this mode, specific addresses are issued to read the desired information. The manufacturer compatibility ID is read at 000000H; the device ID is read at 000001H. The manufacturer compatibility ID and device ID are output on DQ0-DQ7. The device protect bit is read at 000003H; and each of the block protect bits is read at the third address location within each block (xx0002H). The device and block protect bits are output on DQ0.

Three consecutive commands on consecutive clock edges are needed to input data to the array (NOPs and COMMAND INHIBITs are permitted between cycles). In the first cycle, a LOAD COMMAND REGISTER command is given with WRITE SETUP (40H) on A0-A7, and the bank address is issued on BA0, BA1. The next command is ACTIVE, which activates the row address and confirms the bank address. The third cycle is WRITE, during which the starting column, the bank address, and data are issued. The ISM status bit will be set on the following clock edge (subject to CAS latencies).
While the ISM executes the WRITE, the ISM status bit (SR7) will be at 0. A READ operation to the bank under ISM control may produce invalid data. When the ISM status bit (SR7) is set to a logic 1, the WRITE has been completed, and the bank will be in the array read mode and ready for an executable command. Writing to hardware-protected blocks also requires that the RP# pin be set to VHH prior to the third cycle (WRITE), and RP# must be held at VHH until the ISM WRITE operation is complete. The write and erase status bits (SR4 and SR5) will be set if the LCR-ACTIVE-WRITE command sequence is not completed on consecutive cycles or the bank address changes for any of the three cycles. After the ISM has initiated the WRITE, it cannot be aborted except by a RESET or by powering down the part. Doing either during a WRITE may corrupt the data being written.

Executing an ERASE sequence will set all bits within a block to logic 1. The command sequence necessary to execute an ERASE is similar to that of a WRITE. To provide added security against accidental block erasure, three consecutive command sequences on consecutive clock edges are required to initiate an ERASE of a block. In the first cycle, LOAD COMMAND REGISTER is given with ERASE SETUP (20H) on A0-A7, and the bank address of the block to be erased is issued on BA0, BA1. The next command is ACTIVE, where A10, A11, BA0, BA1 provide the address of the block to be erased. The third cycle is WRITE, during which ERASE CONFIRM (D0H) is given on DQ0-DQ7 and the bank address is reissued. The ISM status bit will be set on the following clock edge (subject to CAS latencies). After ERASE CONFIRM (D0H) is issued, the ISM will start the ERASE of the addressed block. Any READ operation to the bank where the addressed block resides may output invalid data. When the ERASE operation is complete, the bank will be in the array read mode and ready for an executable command.
Erasing hardware-protected blocks also requires that the RP# pin be set to VHH prior to the third cycle (WRITE), and RP# must be held at VHH until the ERASE is completed (SR7=1). If the LCR-ACTIVE-WRITE command sequence is not completed on consecutive cycles (NOPs and COMMAND INHIBITs are permitted between cycles) or the bank address changes for one or more of the command cycles, the write and erase status bits (SR4 and SR5) will be set and the operation is prohibited.

The contents of the Mode Register 148 may be copied into the NVMode Register 147 with a WRITE NVMODE REGISTER command. Prior to writing to the NVMode Register, an ERASE NVMODE REGISTER command sequence must be completed to set all bits in the NVMode Register to logic 1. The command sequence necessary to execute an ERASE NVMODE REGISTER and WRITE NVMODE REGISTER is similar to that of a WRITE. See Truth Table 2 for more information on the LCR-ACTIVE-WRITE commands necessary to complete ERASE NVMODE REGISTER and WRITE NVMODE REGISTER. After the WRITE cycle of the ERASE NVMODE REGISTER or WRITE NVMODE REGISTER command sequence has been registered, a READ command may be issued to the array. A new WRITE operation will not be permitted until the current ISM operation is complete and SR7=1.

Executing a BLOCK PROTECT sequence enables the first level of software/hardware protection for a given block. The memory includes a 16-bit register that has one bit corresponding to each of the 16 protectable blocks. The memory also has a register to provide a device bit used to protect the entire device from write and erase operations. The command sequence necessary to execute a BLOCK PROTECT is similar to that of a WRITE. To provide added security against accidental block protection, three consecutive command cycles are required to initiate a BLOCK PROTECT. In the first cycle, a LOAD COMMAND REGISTER is issued with a PROTECT SETUP (60H) command on A0-A7, and the bank address of the block to be protected is issued on BA0, BA1.
The next command is ACTIVE, which activates a row in the block to be protected and confirms the bank address. The third cycle is WRITE, during which BLOCK PROTECT CONFIRM (01H) is issued on DQ0-DQ7, and the bank address is reissued. The ISM status bit will be set on the following clock edge (subject to CAS latencies). The ISM will then begin the PROTECT operation. If the LCR-ACTIVE-WRITE is not completed on consecutive cycles (NOPs and COMMAND INHIBITs are permitted between cycles) or the bank address changes, the write and erase status bits (SR4 and SR5) will be set and the operation is prohibited. When the ISM status bit (SR7) is set to a logic 1, the PROTECT has been completed, and the bank will be in the array read mode and ready for an executable command. Once a block protect bit has been set to a 1 (protected), it can only be reset to a 0 with the UNPROTECT ALL BLOCKS command. The UNPROTECT ALL BLOCKS command sequence is similar to the BLOCK PROTECT command; however, in the third cycle, a WRITE is issued with an UNPROTECT ALL BLOCKS CONFIRM (D0H) command and addresses are "Don't Care." For additional information, refer to Truth Table 2.

The blocks at locations 0 and 15 have additional security. Once the block protect bits at locations 0 and 15 have been set to a 1 (protected), each bit can only be reset to a 0 if RP# is brought to VHH prior to the third cycle of the UNPROTECT operation, and held at VHH until the operation is complete (SR7=1). Further, if the device protect bit is set, RP# must be brought to VHH prior to the third cycle and held at VHH until the BLOCK PROTECT or UNPROTECT ALL BLOCKS operation is complete. To check a block's protect status, a READ DEVICE CONFIGURATION (90H) command may be issued.

Executing a DEVICE PROTECT sequence sets the device protect bit to a 1 and prevents a block protect bit modification. The command sequence necessary to execute a DEVICE PROTECT is similar to that of a WRITE.
Three consecutive command cycles are required to initiate a DEVICE PROTECT sequence. In the first cycle, LOAD COMMAND REGISTER is issued with a PROTECT SETUP (60H) on A0-A7, and a bank address is issued on BA0, BA1. The bank address is "Don't Care" but the same bank address must be used for all three cycles. The next command is ACTIVE. The third cycle is WRITE, during which a DEVICE PROTECT (F1H) command is issued on DQ0-DQ7, and RP# is brought to VHH. The ISM status bit will be set on the following clock edge (subject to CAS latencies). An executable command can be issued to the device. RP# must be held at VHH until the WRITE is completed (SR7=1). A new WRITE operation will not be permitted until the current ISM operation is complete. Once the device protect bit is set, it cannot be reset to a 0. With the device protect bit set to a 1, BLOCK PROTECT or BLOCK UNPROTECT is prevented unless RP# is at VHH during either operation. The device protect bit does not affect WRITE or ERASE operations. 
Refer to Table 4 for more information on block and device protect operations.

TABLE 4 — PROTECT OPERATIONS TRUTH TABLE

  FUNCTION              RP#       CS#  DQM  WE#  Address  VccP  DQ0-DQ7
  DEVICE UNPROTECTED
  PROTECT SETUP         H         L    H    L    60H      X     X
  PROTECT BLOCK         H         L    H    L    BA       H     01H
  PROTECT DEVICE        VHH       L    H    L    X        X     F1H
  UNPROTECT ALL BLOCKS  H/VHH     L    H    L    X        H     D0H
  DEVICE PROTECTED
  PROTECT SETUP         H or VHH  L    H    L    60H      X     X
  PROTECT BLOCK         VHH       L    H    L    BA       H     01H
  UNPROTECT ALL BLOCKS  VHH       L    H    L    X        H     D0H

After the ISM status bit (SR7) has been set, the device/bank (SR0), device protect (SR3), bankA0 (SR1), bankA1 (SR2), write/protect block (SR4) and erase/unprotect (SR5) status bits may be checked. If one or a combination of the SR3, SR4, SR5 status bits has been set, an error has occurred during operation. The ISM cannot reset the SR3, SR4 or SR5 bits. To clear these bits, a CLEAR STATUS REGISTER (50H) command must be given.
Table 5 lists the combinations of errors.

TABLE 5 — STATUS REGISTER ERROR DECODE

  SR5  SR4  SR3  ERROR DESCRIPTION
  0    0    0    No errors
  0    1    0    WRITE, BLOCK PROTECT or DEVICE PROTECT error
  0    1    1    Invalid BLOCK PROTECT or DEVICE PROTECT, RP# not valid (VHH)
  1    0    0    ERASE or ALL BLOCK UNPROTECT error
  1    0    1    Invalid ALL BLOCK UNPROTECT, RP# not valid (VHH)
  1    1    0    Command sequencing error

The synchronous flash memory is designed and fabricated to meet advanced code and data storage requirements. To ensure this level of reliability, VCCP must be tied to VCC during WRITE or ERASE cycles. Operation outside these limits may reduce the number of WRITE and ERASE cycles that can be performed on the device. Each block is designed and processed for a minimum of 100,000 WRITE/ERASE-cycle endurance.

The synchronous flash memory offers several power-saving features that may be utilized in the array read mode to conserve power. A deep power-down mode is enabled by bringing RP# to VSS±0.2V. Current draw (ICC) in this mode is low, such as a maximum of 50 µA. When CS# is HIGH, the device will enter the active standby mode. In this mode the current is also low, such as a maximum ICC current of 30 mA. If CS# is brought HIGH during a write, erase, or protect operation, the ISM will continue the operation, and the device will consume active ICCP power until the operation is completed.

Referring to FIG. 16, a flow chart of a self-timed write sequence according to one embodiment of the present invention is described.
The sequence includes loading the command register (code 40H), receiving an active command and a row address, and receiving a write command and a column address. The sequence then provides for status register polling to determine if the write is complete. The polling monitors status register bit 7 (SR7) to determine if it is set to a 1. An optional status check can be included. When the write is completed, the array is placed in the array read mode. Referring to FIG. 17, a flow chart of a complete write status-check sequence according to one embodiment of the present invention is provided. The sequence looks for status register bit 4 (SR4) to determine if it is set to a 0. If SR4 is a 1, there was an error in the write operation. The sequence also looks for status register bit 3 (SR3) to determine if it is set to a 0. If SR3 is a 1, there was an invalid write error during the write operation. Referring to FIG. 18, a flow chart of a self-timed block erase sequence according to one embodiment of the present invention is provided. The sequence includes loading the command register (code 20H), and receiving an active command and a row address. The memory then determines if the block is protected. If it is not protected, the memory performs a write operation (D0H) to the block and monitors the status register for completion. An optional status check can be performed and the memory is placed in an array read mode. If the block is protected, the erase is not allowed unless the RP# signal is at an elevated voltage (VHH). FIG. 19 illustrates a flow chart of a complete block erase status-check sequence according to one embodiment of the present invention. The sequence monitors the status register to determine if a command sequence error occurred (SR4 or SR5=1). If SR3 is set to a 1, an invalid erase or unprotect error occurred. Finally, a block erase or unprotect error happened if SR5 is set to a 1. FIG.
20 is a flow chart of a block protect sequence according to one embodiment of the present invention. The sequence includes loading the command register (code 60H), and receiving an active command and a row address. The memory then determines if the block is protected. If it is not protected, the memory performs a write operation (01H) to the block and monitors the status register for completion. An optional status check can be performed and the memory is placed in an array read mode. If the block is protected, the protect operation is not allowed unless the RP# signal is at an elevated voltage (VHH). Referring to FIG. 21, a flow chart of a complete block status-check sequence according to one embodiment of the present invention is provided. The sequence monitors status register bits 3, 4 and 5 to determine if errors were detected. FIG. 22 is a flow chart of a device protect sequence according to one embodiment of the present invention. The sequence includes loading the command register (code 60H), and receiving an active command and a row address. The memory then determines if RP# is at VHH. The memory performs a write operation (F1H) and monitors the status register for completion. An optional status check can be performed and the memory is placed in an array read mode. FIG. 23 is a flow chart of a block unprotect sequence according to one embodiment of the present invention. The sequence includes loading the command register (code 60H), and receiving an active command and a row address. The memory then determines if the memory device is protected. If it is not protected, the memory determines if the boot locations (blocks 0 and 15) are protected. If none of the blocks are protected, the memory performs a write operation (D0H) to the block and monitors the status register for completion. An optional status check can be performed and the memory is placed in an array read mode.
If the device is protected, the unprotect is not allowed unless the RP# signal is at an elevated voltage (VHH). Likewise, if the boot locations are protected, the memory determines if all blocks should be unprotected. FIG. 24 illustrates the timing of an initialize and load mode register operation. The mode register is programmed by providing a load mode register command and providing the operation code (opcode) on the address lines. The opcode is loaded into the mode register. As explained above, the contents of the non-volatile mode register are automatically loaded into the mode register upon power-up, and the load mode register operation may not be needed. FIG. 25 illustrates the timing of a clock suspend mode operation, and FIG. 26 illustrates the timing of another burst read operation. FIG. 27 illustrates the timing of alternating bank read accesses. Here, active commands are needed to change bank addresses. A full page burst read operation is illustrated in FIG. 28. Note that the full page burst does not self-terminate, but requires a terminate command. FIG. 29 illustrates the timing of a read operation using a data mask signal. The DQM signal is used to mask the data output so that Dout m+1 is not provided on the DQ connections. Referring to FIG. 30, the timing of a write operation followed by a read to a different bank is illustrated. In this operation, a write is performed to bank a and a subsequent read is performed to bank b. The same row is accessed in each bank. Referring to FIG. 31, the timing of a write operation followed by a read to the same bank is illustrated. In this operation, a write is performed to bank a and a subsequent read is performed to bank a. A different row is accessed for the read operation, and the memory must wait for the prior write operation to be completed. This is different from the read of FIG.
30, where the read was not delayed due to the write operation.

Zero Latency Write Operation/Zero Bus Turnaround

The synchronous flash memory provides for a latency-free write operation. This is different from an SDRAM, which requires the system to provide latency for write operations just as for a read operation. The write operation therefore does not take as many cycles away from the system bus as it does on an SDRAM, and hence can improve system read throughput; see FIG. 12, where the write data, Din, is provided on the same clock cycle as the write command and column address. The clock cycle, T1, of FIG. 12 does not need to be a NOP command (see FIG. 30). The read command can be provided on the next clock cycle following the write data. Thus, while the read operation requires that the DQ connections remain available for a predetermined number of clock cycles following the read command (latency), the DQ connections can be used immediately after the write command is provided (no latency). As such, the present invention allows for zero bus turnaround capability. This is substantially different from the SDRAM, where multiple waits are required on the system bus when going between read and write operations. The synchronous flash provides these two features, and can improve bus throughput. Referring to FIG. 32, a system 300 of the present invention includes a synchronous memory 302 that has internal write latches 304 that are used to store write data received on the DQ inputs 306. The write latches are coupled to the memory array 310. Again, the memory array can be arranged in a number of addressable blocks. Data can be written to one block while a read operation is performed on the other blocks. The memory cells of the array can be non-volatile memory cells.
Data communication connections 306 are used for bi-directional data communication with an external device, such as a processor 320 or other memory controller. A data buffer 330 can be coupled to the data communication connections to manage the bi-directional data communication. This buffer can be a traditional FIFO or pipelined input/output buffer circuit. The write latch is coupled between the data buffer and the memory array to latch data provided on the data communication connections. Finally, a control circuit 340 is provided to manage the read and write operations performed on the array. By latching the input write data, the data bus 306 (DQs) can be released and the write operation performed using the latched data. Subsequent write operations to the memory can be prohibited while the first write operation is being performed. The bus, however, is available to immediately perform a read operation on the memory. The present invention should not be confused with traditional input/output buffer architecture. That is, while prior memory devices used an input buffer on the DQ input path and an output buffer on the DQ output path, the clock latency used for read and write operations is kept the same. The present invention can include input/output buffer circuitry to provide an interface with the DQ connections and an external processor. The additional write latches allow the memory to isolate the write path/operation to one area of the memory while allowing data read operations on other memory areas. In one embodiment, a method of writing to a synchronous memory device is provided. The method comprises providing a write command and write data from a processor to the synchronous memory device on a first clock cycle. The write data is then stored in a write latch of the synchronous memory device, and a write operation is performed to copy the write data from the write latch to a memory array of the synchronous memory device.
Finally, a read command is communicated from the processor to the synchronous memory device on a second clock cycle immediately following the first clock cycle to initiate a read operation on the memory array. The present invention can also eliminate clock, or CAS, latency between read and subsequent write operations. Referring to FIG. 9, the LCR command (40H) is provided on clock cycle T1 immediately following the read column cycle (T0). As explained, the write operation command sequence includes at least three clock cycles: an LCR cycle, an active/row cycle, and a write/column cycle. Depending upon the latency of the read operation, one or more NOP clock cycles may be provided to avoid bus contention. The present invention, therefore, does not require latency between the read column command cycle and the LCR write cycle. The present invention, therefore, provides for more efficient data bus utilization by allowing for read-to-write without latency, and write-to-read without clock cycle delays.

CONCLUSION

A synchronous flash memory has been described that includes an array of non-volatile memory cells. The memory array is arranged in rows and columns, and can be further arranged in addressable blocks. Data communication connections are used for bi-directional data communication with an external device, such as a processor or other memory controller. A data buffer can be coupled to the data communication connections to manage the bi-directional data communication. A write latch is coupled between the data buffer and the memory array to latch data provided on the data communication connections. The memory has been described as allowing for zero bus turnaround following a write data cycle. That is, a read operation can be initiated immediately following a write data cycle.
One method of operating a synchronous memory device comprises receiving write data on data connections, latching the write data in a write latch, and releasing the data connections after the write data is latched. A read operation can be performed on the synchronous memory device while the write data is transferred from the write latch to memory cells. Further, the memory device does not require any clock latency during a write operation.
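The latch-then-release behavior described above can be modeled in a few lines. This is a toy sketch, not the device's implementation: the struct fields and function names are invented for illustration, and the background array copy is reduced to a pending flag.

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the zero-bus-turnaround write: data is captured into an
 * internal write latch (304 in FIG. 32) on the write cycle, the DQ bus is
 * released immediately, and the array update proceeds in the background
 * while a read can begin on the very next cycle. Names are illustrative. */
typedef struct {
    uint8_t latch;         /* internal write latch                   */
    bool    write_pending; /* copy from latch to array in progress   */
    bool    bus_busy;      /* DQ connections held by an operation    */
} sync_mem_t;

/* Write cycle: latch the data, then release the bus in the same cycle. */
void mem_write(sync_mem_t *m, uint8_t data)
{
    m->bus_busy = true;      /* DQ carries Din with the write command */
    m->latch = data;         /* capture into the write latch          */
    m->bus_busy = false;     /* bus released: zero turnaround         */
    m->write_pending = true; /* array copy continues internally       */
}

/* A read may start immediately; the pending write does not hold the bus. */
bool mem_can_read(const sync_mem_t *m)
{
    return !m->bus_busy;
}
```

The key point the model captures is that `mem_can_read()` is already true right after `mem_write()` returns, even though `write_pending` still is.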
An integrated BiCMOS semiconductor circuit has active moat areas (20) in silicon. The active moat areas (20) include electrically active components of the semiconductor circuit, which comprise active window structures for base and/or emitter windows. The integrated BiCMOS semiconductor circuit has zones where silicon is left to form dummy moat areas (26) which do not include electrically active components, and has isolation trenches (22, 24) to separate the active moat areas from each other and from the dummy moat areas. The dummy moat areas (26) comprise dummy window structures (34, 50) having geometrical dimensions and shapes similar to those of the active window structures for the base and/or emitter windows.
An integrated BiCMOS semiconductor circuit, comprising: active moat areas (20) in silicon, which active moat areas include electrically active components of the semiconductor circuit, the active components comprising active window structures for base and/or emitter windows; zones where silicon is left to form dummy moat areas (26) which do not include electrically active components; and isolation trenches (22, 24) to separate the active moat areas from each other and from the dummy moat areas; characterized in that the dummy moat areas (26) comprise dummy window structures (34, 50) having geometrical dimensions and shapes similar to those of the active window structures for the base and/or emitter windows. An integrated BiCMOS semiconductor circuit according to claim 1, wherein the total surface area of the dummy window structures in the dummy moat areas exceeds the total surface area of the active window structures in the active moat areas by at least one order of magnitude. An integrated BiCMOS semiconductor circuit according to claim 1 or 2, wherein the dummy base window structures (34) in those layers (32) in which active base windows are formed in the active moat areas (20) and the dummy emitter window structures (50) in those layers (42) in which active emitter windows are formed in the active moat areas (20), are stacked within the dummy moat areas.
FIELD OF THE INVENTION

The invention relates to integrated BiCMOS semiconductor circuits having active moat areas in silicon.

BACKGROUND OF THE INVENTION

There are integrated BiCMOS semiconductor circuits that have active moat areas in silicon. These moat areas include electrically active components of the semiconductor circuit, the active components comprising active window structures for base and/or emitter windows. The semiconductor circuit has zones where silicon is left to form dummy moat areas which do not include electrically active components. The semiconductor circuit further has isolation trenches to separate the active moat areas from each other and from the dummy moat areas. In the production of integrated BiCMOS semiconductor circuits, a plurality of silicon and oxide layers are deposited on a support wafer and patterned in consecutive steps. An example of such a stack of layers is shown in a schematic sectional view in FIG. 1 of the appended drawings. Upon patterning, stacks of layers, generally referred to as 1 in FIG. 1, form so-called active moat areas 2. These areas are islands which will in the end contain electrically active components of the semiconductor circuit. The active moat areas 2 are separated by trenches 3 formed into the layers by etching. The trenches are filled with an isolating material 4 such as oxide. Above a trench 3, a shallow depression 3a may form in the oxide layer 4. Depending on the layout of the circuit, the distance between two adjacent active moat areas 2 can be wide, resulting in a broad trench 5. Where the trenches are too wide, deep depressions 6 in the oxide layer 4 will occur. These deep depressions 6 become a problem when performing a process of chemical mechanical polishing (CMP) on a layer. To avoid the occurrence of depressions in the oxide layer 4, so-called dummy moat areas 7 are left (FIG. 2).
These areas 7 are islands which are designed not to include electrically active components but simply to avoid large and deep depressions. Incidentally, the technique of leaving dummy moat areas 7 is known in the prior art to ensure correct planarization.Anisotropic plasma etching is used for the etching of fine structures. The etching duration may be pre-determined, but if the underlying layer is thin, e.g., a thin oxide film, it is essential to stop the etching in time before the underlying silicon gets damaged, but not before the desired structure is completed. This is particularly essential when dealing with small structures. Due to inaccuracies in the thickness of the layer to be etched and in the etchant composition, the calculation of the etching duration cannot be exact. Still, the completion of the etching process can be controlled more accurately by detecting an endpoint in the process. As explained in the article entitled, "Tungsten silicide and tungsten polycide anisotropic dry etch process for highly controlled dimensions and profiles, " by R. Bashir, et al., in J. Vac. Sci. Technol., Vol. 16(4), Jul/Aug 1998, pages 2118-2120, and in U.S. Pat. No. 6,444,542B2, the endpoint of the etching process can be detected by a change in the composition of the optical radiation by optical emission spectroscopy, by the plasma characteristics, i.e., high-frequency harmonics, or the discharge current, or by a change in reflection properties of the wafer when the etching process reaches the underlying layer. Reaching an oxide layer can also be used as an endpoint check (U.S. Pat. No. 5,496,764A). But, if the surface to be etched is very small compared to the total wafer surface, detection of the endpoint of the etching process with this approach is no longer possible.In U.S. Pat. No. 6,004,829A, it is proposed to enlarge the surface to be etched by inserting additional pad areas in forming an EPROM device. 
It is, however, well known that large areas exhibit a higher etch rate than small structures. If the window structures to be etched are very small and delicate, and dummy surfaces are used for etch endpoint detection, the etch endpoint signal will occur prematurely, so that the optimum moment in time when the etching process should be terminated cannot be determined with sufficient precision.

SUMMARY OF THE INVENTION

The invention provides an integrated BiCMOS semiconductor with accurately etched very small geometries. Specifically, an integrated BiCMOS semiconductor circuit having active moat areas in silicon is provided. The active moat areas include electrically active components of the semiconductor circuit. The active components comprise active window structures for base and/or emitter windows. The circuit further has zones where silicon is left to form dummy moat areas which do not include electrically active components, and isolation trenches to separate the active moat areas from each other and from the dummy moat areas. The dummy moat areas comprise dummy window structures having geometrical dimensions and shapes similar to those of the active window structures for the base and/or emitter windows. In the production process of this integrated BiCMOS circuit, the active window structures for base and/or emitter windows in the active moat areas and the dummy window structures within the dummy moat areas having similar geometrical dimensions and shapes are formed simultaneously. The total surface area of the window structures which are exposed to the etchant is substantially increased by having both active and dummy window structures. Hence, a signal for the endpoint detection can be detected much more clearly than in a case where only small active window structures are etched.
Since the dummy window structures are of similar geometrical shape and dimension as the active window structures, the signal for the detection of the etching endpoint for the small structures is distinct and not blurred by the effect of a different etching characteristic, as it would be if coarse or large dummy structures were used. The optimum moment in time when the etching process shall be terminated is thus precisely determined by the endpoint signal. The integrated circuit according to the invention can be manufactured with high precision, avoiding over-etching and large undercutting, which would otherwise result in an increase in emitter-base leakage and an enlarged emitter size, and in the end cause a large variability in bipolar parameters. The proposed integrated circuit provides for reliable etch endpoint detection of very small structures independent of structure size. The total surface of the dummy window structures should preferably exceed that of the real window structures by at least one order of magnitude, thereby increasing the precision of the determination of the completion point of the etching process. In an embodiment of the invention, the dummy window structures in those layers in which active base windows are formed in the active moat areas and the dummy structures in those layers in which active emitter windows are formed in the active moat areas are stacked within the dummy moat areas. This provides for a very economic use of moat area. The reliable etch endpoint detection scheme can be extended to a checkerboard pattern to allow a total of four sequential end-pointed etch processes, namely emitter and base openings for NPN and PNP, without requiring additional moat area.

BRIEF DESCRIPTION OF THE DRAWINGS

Example embodiments of the invention are described with reference to the accompanying drawings, wherein: FIG. 1 is a schematic sectional view through a first integrated semiconductor circuit from the state of the art; FIG.
2 is a schematic sectional view through a second integrated semiconductor circuit from the state of the art;FIGS. 3 - 6 are schematic sectional views through an integrated semiconductor circuit according to the invention, in successive steps of a production process;FIG. 7 is a schematic sectional view through an integrated semiconductor circuit, including a plurality of dummy window structures;FIG. 8 shows the layout of the set of dummy window structures of FIG. 7; andFIGS. 9A - 9C are three graphs, illustrating signals resulting from monitoring the composition of the etching medium, on the basis of the characteristic plasma emission, recorded against time.DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTSFIGS. 3 - 6 illustrate an integrated BiCMOS semiconductor circuit 10 according to the invention in a photolithographic production process.In FIG. 3, the integrated semiconductor circuit 10 is shown in a cross-sectional view. The integrated semiconductor circuit 10 is at an intermediate process stage and has already undergone several process steps which are known to those skilled in the art, further description of which is not needed for understanding the invention.In the illustrated process stage, the integrated semiconductor circuit 10 comprises a support wafer 12 covered by a buried oxide layer (BOX) 14. The BOX 14 supports a single-crystal silicon layer 16. The silicon layer 16 is divided into islands 18, forming active moat areas 20, which will in the end contain electrically active components (not shown in the figures) of the semiconductor circuit. The islands 18 are separated by deep trenches 22 and shallow trenches 24, filled with oxide to isolate the active moat areas 20 from each other. Further islands are remaining, forming dummy moat areas 26 to ensure correct planarization in a process of chemical mechanical polishing (CMP). 
On top of the active moat areas 20 and the dummy moat areas 26, a thin gate oxide film 30 is grown and then covered by a thin polysilicon layer 32. The thin polysilicon layer 32 comprises the first part of the CMOS polysilicon gates on the chip. The creation of dummy structures in the dummy moat areas 26 is explained below. In FIG. 4, the polysilicon layer 32 is patterned and etched to provide base window structures (not shown) in the active moat areas 20. The etching must be complete and must be stopped immediately when the gate oxide 30 is reached. Therefore, according to the principles of the invention, dummy base window structures 34 are created in the dummy moat areas 26 simultaneously with the active base window structures in the active moat areas 20. These dummy base window structures 34 have geometrical dimensions and shapes that are similar to those of the active base window structures in the active moat areas 20. After the base window structure patterning, the residual thin oxide film 30 is removed within the active base window structures and the dummy base window structures 34 (FIG. 5), e.g., by wet etching. Then a base silicon/polysilicon layer is deposited. This deposit grows as a single-crystal silicon layer 36 over the exposed single-crystal silicon 16 in the active base window structures of the active moat areas 20 and in the dummy base window structures 34 of the dummy moat areas 26, while it grows as a polycrystalline silicon layer 38 over the remaining polysilicon layer 32 and the exposed shallow trenches 24. The silicon layers 36, 38 are then covered with a screen oxide 40 in preparation for implantation and the next patterning step. The screen oxide 40 is removed and an inter-poly insulator stack 42 is deposited (FIG. 6). The inter-poly insulator stack 42 comprises a thin oxide film 44, covered by a nitride film 46.
Then a photoresist layer 48 is applied and patterned to create active emitter window structures (not shown) in the inter-poly insulator stack 42. Again, it is important to detect the endpoint for this step, because a defined thickness of the oxide film 44 must remain in the active emitter window structures. Therefore, according to the principles of the invention, dummy emitter window structures 50 are created in the dummy moat areas 26 simultaneously with the active emitter window structures in the active moat areas 20. Again, these dummy emitter window structures 50 have geometrical dimensions and shapes that are similar to those of the active emitter window structures in the active moat areas 20. In FIG. 6, only one dummy emitter window 50 is drawn for the sake of a clear presentation. In practice, however, multiple dummy emitter window structures 50 are normally created, as shown by example in FIG. 7. FIG. 8 shows an example of a layout pattern 60 for a plurality of dummy emitter window structures 50. Also shown in FIG. 8 is the outline of the dummy moat area in inner dot-dashed lines. Further, the outline of the base polysilicon layer 36 is indicated in outer dot-dashed lines, since the dummy emitter window structures 50 are stacked over the dummy base window structures. Thus, the dummy window structures for endpoint detection during the etching of active base windows and during the etching of active emitter windows can be arranged within the same dummy moat areas. The dimensions a and b are determined by the minimal width of the active window structures on the chip. The length c of the dummy window structures is adjustable and depends on the size of the dummy moat. The dummy base and/or emitter window structures, e.g., in the layout illustrated in FIG. 8, are preferably applied to as many dummy moat areas 26 as are available on the wafer. The proportion of the area occupied by the emitters on BiCMOS chips is far below 1%.
The use of a significant number of dummy window structures can increase the proportion of the total surface available for etching to 3-5%. As a result, a signal from monitoring the etching process will show much more significant changes when the small structures are completed, which allows a reliable detection of the optimum etch endpoint. During an etching process according to the methods described above, the composition of the etching medium can be monitored by way of its characteristic plasma emission. FIGS. 9A-9C show schematically the composition of the etching medium, monitored as a function of its characteristic plasma emission over time t, for different configurations. The optimum etch endpoint for the small structures in the particular configuration is indicated in the figures by Topt. If, according to the prior art, no dummy windows have been applied, there will be no endpoint signal (FIG. 9A) when the etching medium reaches the oxide layer. The change in the composition of the etching medium cannot be measured, because the proportion of the area occupied by the active window structures only amounts to some parts in a thousand, as compared to the total area available. If large dummy areas without window structures are provided in the wafer, as already proposed in the literature, monitoring the etchant composition will show a signal like the one in FIG. 9B. The endpoint signal E0 here occurs too early and prior to the optimum moment in time for the termination of the etching process of the small window structures, since the etching of large areas proceeds in a different way from that of thin window structures. FIG. 9C shows that, by using dummy window structures according to the invention, the optimum moment in time at which the etching process should be terminated can be determined with precision by means of the endpoint signal E0.
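The endpoint-detection idea of FIG. 9C can be illustrated with a simple signal-processing sketch: once the dummy windows make the emission step visible, the endpoint is the first sample whose drop from a slowly tracked baseline exceeds a threshold. The threshold, the baseline-averaging scheme, and the function name are all assumptions for illustration; the patent does not specify a detection algorithm.

```c
#include <stddef.h>

/* Illustrative endpoint detector: returns the index of the first sample
 * whose drop below the running baseline exceeds drop_threshold (the
 * endpoint signal E0), or -1 if no such step occurs (the FIG. 9A case
 * where the active windows alone are too small to produce a signal). */
int find_etch_endpoint(const double *emission, size_t n, double drop_threshold)
{
    if (n < 2)
        return -1;

    double baseline = emission[0];
    for (size_t i = 1; i < n; i++) {
        if (baseline - emission[i] >= drop_threshold)
            return (int)i;  /* endpoint signal E0 detected */
        /* slowly track the baseline while the bulk etch proceeds */
        baseline = 0.9 * baseline + 0.1 * emission[i];
    }
    return -1;
}
```

With a flat trace the function returns -1, mirroring the no-signal case of FIG. 9A; with a clear step it flags the sample where the windows clear.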
A computer system and a method used to access data from a plurality of memory devices with a memory hub. The computer system includes a plurality of memory modules coupled to a memory hub controller. Each of the memory modules includes the memory hub and the plurality of memory devices. The memory hub includes a sequencer and a bypass circuit. When the memory hub is busy servicing one or more memory requests, the sequencer generates and couples the memory requests to the memory devices. When the memory hub is not busy servicing multiple memory requests, the bypass circuit generates and couples a portion of each of the memory requests to the memory devices and the sequencer generates and couples the remaining portion of each of the memory requests to the memory devices.
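The busy/idle routing described above can be sketched as a small selection function. This is a toy model under assumed names (the struct fields and `route_request` are not from the patent): when the memory device interface is idle, the bypass circuit forwards the row-address portion of a request immediately while the sequencer supplies the remainder; when a request is already being serviced, the sequencer generates the whole request.

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the hub's request routing. Names are illustrative only. */
typedef struct {
    uint32_t row;     /* row address portion of the request           */
    uint32_t column;  /* column address / remaining portion           */
} mem_request_t;

typedef struct {
    bool row_from_bypass; /* row portion handled by the bypass circuit */
    mem_request_t req;
} routed_request_t;

routed_request_t route_request(mem_request_t in, bool interface_busy)
{
    routed_request_t out;
    out.req = in;
    /* Idle: the bypass circuit couples the row portion straight through
     * for lower latency, with the sequencer supplying the remainder.
     * Busy: the sequencer alone generates and couples the request. */
    out.row_from_bypass = !interface_busy;
    return out;
}
```

The status signal of the claims (active vs. idle) maps onto the `interface_busy` flag here.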
CLAIMS 1. A memory module, comprising: a plurality of memory devices; and a memory hub, comprising: a link interface receiving memory requests for access to at least one of the memory devices; a memory device interface coupled to the memory devices, the memory device interface coupling memory requests to the memory devices and generating a status signal indicating whether or not at least one of the memory requests is being serviced; a bypass circuit coupled to the link interface and the memory device interface, the bypass circuit generating and coupling a portion of each of the memory requests from the link interface to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is not being serviced; and a sequencer coupled to the link interface and the memory device interface, the sequencer generating and coupling memory requests from the link interface to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is being serviced and generating and coupling the remaining portion of each of the memory requests not handled by the bypass circuit from the link interface to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is not being serviced. 2. The memory module of claim 1 wherein the status signal comprises: an active signal indicating that at least one of the memory requests is being serviced; and an idle signal indicating that at least one of the memory requests is not being serviced. 3.
The memory module of claim 1 wherein the memory hub further comprises a multiplexer having data inputs coupled to the sequencer and the bypass circuit, a data output coupled to the memory device interface and a control input coupled to receive the status signal from the memory device interface, the multiplexer coupling the memory requests from the sequencer responsive to the status signal indicating that at least one of the memory requests is being serviced and coupling a portion of the memory request from the bypass circuit and a portion from the sequencer responsive to the status signal indicating that at least one of the memory requests is not being serviced. 4. The memory module of claim 1 wherein the link interface further receives a link-in clock with each of the memory requests, the link-in clock being forwarded to the bypass circuit along with the memory requests and used by the bypass circuit to generate and couple memory requests to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is not being serviced. 5. The memory module of claim 3 wherein the link interface further receives a link-in clock with each of the memory requests, the link-in clock being forwarded to the bypass circuit along with the memory requests and used by the bypass circuit to generate and couple memory requests to the multiplexer responsive to the status signal indicating that at least one of the memory requests is not being serviced. 6. The memory module of claim 1 wherein the memory device interface comprises a first-in, first-out buffer that is operable to receive and store memory requests received from the link interface and to transfer the stored memory requests to at least one of the memory devices in the order in which they were received. 7.
The memory module of claim 1 wherein the link interface comprises a first-in, first-out buffer that is operable to receive and store memory requests and to transfer the stored memory requests to the memory device interface in the order in which they were received. 8. The memory module of claim 6 wherein the link interface comprises a first-in, first-out buffer that is operable to receive and store memory requests and to transfer the stored memory requests to the memory device interface in the order in which they were received. 9. The memory module of claim 1 wherein the memory device interface is coupled to the link interface, the memory device interface further receiving read data responsive to the memory requests and coupling the read data to the link interface. 10. The memory module of claim 1 wherein the link interface comprises an optical input/output port. 11. The memory module of claim 1 wherein the memory devices comprise dynamic random access memory devices. 12. The memory module of claim 1 wherein the portion of each of the memory requests generated and coupled by the bypass circuit includes the row address portion of each of the memory requests. 13.
A memory hub, comprising: a link interface receiving memory requests; a memory device interface operable to output memory requests, the memory device interface generating a status signal indicating whether or not at least one of the memory requests is being serviced; a bypass circuit coupled to the link interface and the memory device interface, the bypass circuit generating and coupling a portion of each of the memory requests from the link interface to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is not being serviced; and a sequencer coupled to the link interface and the memory device interface, the sequencer generating and coupling the memory requests from the link interface to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is being serviced and generating and coupling the remaining portion of each of the memory requests not handled by the bypass circuit from the link interface to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is not being serviced. 14. The memory hub of claim 13 wherein the status signal comprises: an active signal indicating that at least one of the memory requests is being serviced; and an idle signal indicating that at least one of the memory requests is not being serviced. 15.
The memory hub of claim 13 wherein the memory hub further comprises a multiplexer having data inputs coupled to the sequencer and the bypass circuit, a data output coupled to the memory device interface and a control input coupled to receive the status signal from the memory device interface, the multiplexer coupling the memory requests from the sequencer responsive to the status signal indicating that at least one of the memory requests is being serviced and coupling a portion of the memory request from the bypass circuit and a portion from the sequencer responsive to the status signal indicating that at least one of the memory requests is not being serviced. 16. The memory hub of claim 13 wherein the link interface further receives a link-in clock with each of the memory requests, the link-in clock being forwarded to the bypass circuit along with the memory requests and used by the bypass circuit to generate and couple memory requests to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is not being serviced. 17. The memory hub of claim 15 wherein the link interface further receives a link-in clock with each of the memory requests, the link-in clock being forwarded to the bypass circuit along with the memory requests and used by the bypass circuit to generate and couple memory requests to the multiplexer responsive to the status signal indicating that at least one of the memory requests is not being serviced. 18. The memory hub of claim 13 wherein the memory device interface comprises a first-in, first-out buffer that is operable to receive and store memory requests received from the link interface and to transfer the stored memory requests to at least one of the memory devices in the order in which they were received. 19.
The memory hub of claim 13 wherein the link interface comprises a first-in, first-out buffer that is operable to receive and store memory requests and to transfer the stored memory requests to the memory device interface in the order in which they were received. 20. The memory hub of claim 18 wherein the link interface comprises a first-in, first-out buffer that is operable to receive and store memory requests and to transfer the stored memory requests to the memory device interface in the order in which they were received. 21. The memory hub of claim 13 wherein the memory device interface is coupled to the link interface, the memory device interface further receiving read data responsive to the memory requests and coupling the read data to the link interface. 22. The memory hub of claim 13 wherein the link interface comprises an optical input/output port. 23. The memory hub of claim 13 wherein the memory devices comprise dynamic random access memory devices. 24. The memory hub of claim 13 wherein the portion of each of the memory requests generated and coupled by the bypass circuit includes the row address portion of each of the memory requests. 25.
A computer system, comprising: a central processing unit ("CPU"); a system controller coupled to the CPU, the system controller having an input port and an output port; an input device coupled to the CPU through the system controller; an output device coupled to the CPU through the system controller; a storage device coupled to the CPU through the system controller; and a plurality of memory modules, each of the memory modules comprising: a plurality of memory devices; and a memory hub, comprising: a link interface receiving memory requests for access to at least one of the memory devices; a memory device interface coupled to the memory devices, the memory device interface coupling memory requests to the memory devices and generating a status signal indicating whether or not at least one of the memory requests is being serviced; a bypass circuit coupled to the link interface and the memory device interface, the bypass circuit generating and coupling a portion of each of the memory requests from the link interface to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is not being serviced; and a sequencer coupled to the link interface and the memory device interface, the sequencer generating and coupling memory requests from the link interface to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is being serviced and generating and coupling the remaining portion of each of the memory requests not handled by the bypass circuit from the link interface to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is not being serviced; and a communications link coupling the output port of the system controller to the input port of the memory hub in each of the memory
modules, and coupling the input port of the system controller to the output port of the memory hub in each of the memory modules. 26. The computer system of claim 25 wherein the status signal comprises: an active signal indicating that at least one of the memory requests is being serviced; and an idle signal indicating that at least one of the memory requests is not being serviced. 27. The computer system of claim 25 wherein the memory hub further comprises a multiplexer having data inputs coupled to the sequencer and the bypass circuit, a data output coupled to the memory device interface and a control input coupled to receive the status signal from the memory device interface, the multiplexer coupling the memory requests from the sequencer responsive to the status signal indicating that at least one of the memory requests is being serviced and coupling a portion of the memory request from the bypass circuit and a portion from the sequencer responsive to the status signal indicating that at least one of the memory requests is not being serviced. 28. The computer system of claim 25 wherein the link interface further receives a link-in clock with each of the memory requests, the link-in clock being forwarded to the bypass circuit along with the memory requests and used by the bypass circuit to generate and couple memory requests to the memory device interface responsive to the status signal from the memory device interface indicating that at least one of the memory requests is not being serviced. 29. The computer system of claim 27 wherein the link interface further receives a link-in clock with each of the memory requests, the link-in clock being forwarded to the bypass circuit along with the memory requests and used by the bypass circuit to generate and couple memory requests to the multiplexer responsive to the status signal indicating that at least one of the memory requests is not being serviced. 30.
The computer system of claim 25 wherein the memory device interface comprises a first-in, first-out buffer that is operable to receive and store memory requests received from the link interface and to transfer the stored memory requests to at least one of the memory devices in the order in which they were received. 31. The computer system of claim 25 wherein the link interface comprises a first-in, first-out buffer that is operable to receive and store memory requests and to transfer the stored memory requests to the memory device interface in the order in which they were received. 32. The computer system of claim 30 wherein the link interface comprises a first-in, first-out buffer that is operable to receive and store memory requests and to transfer the stored memory requests to the memory device interface in the order in which they were received. 33. The computer system of claim 25 wherein the memory device interface is coupled to the link interface, the memory device interface further receiving read data responsive to the memory requests and coupling the read data to the link interface. 34. The computer system of claim 25 wherein the link interface comprises an optical input/output port. 35. The computer system of claim 25 wherein the memory devices comprise dynamic random access memory devices. 36. The computer system of claim 25 wherein the portion of each of the memory requests generated and coupled by the bypass circuit includes the row address portion of each of the memory requests. 37. The computer system of claim 25 wherein the input and output ports of the system controller comprise a combined input/output port coupled to the communications link, and wherein the input and output ports of each of the memory hubs comprise a combined input/output port coupled to the communications link. 38.
The computer system of claim 34 wherein the communications link comprises an optical communications link, wherein the input and output ports of the system controller comprise an optical input/output port coupled to the optical communications link and wherein the input and output ports of each of the memory hubs comprise a respective optical input/output port coupled to the optical communications link. 39. A method of accessing data in each of a plurality of memory devices on each of a plurality of memory modules, each of the memory modules including a memory hub, the method comprising: checking if a memory device interface located on the memory hub is servicing memory requests; if the memory device interface is servicing memory requests, sending memory requests through a sequencer located on the memory hub to the memory device interface; and if the memory device interface is not busy servicing memory requests, sending a portion of each of the memory requests through a bypass circuit located on the memory hub to the memory device interface and the remaining portion of each of the memory requests through the sequencer to the memory device interface. 40. The method of claim 39 wherein the portion of each of the memory requests sent through the bypass circuit includes the row address portion of each of the memory requests.
MEMORY HUB BYPASS CIRCUIT AND METHOD

TECHNICAL FIELD

This invention relates to a computer system, and, more particularly, to a computer system having a memory hub coupling several memory devices to a processor or other memory access device.

BACKGROUND OF THE INVENTION

Computer systems use memory devices, such as dynamic random access memory ("DRAM") devices, to store instructions and data that are accessed by a processor. These memory devices are normally used as system memory in a computer system. In a typical computer system, the processor communicates with the system memory through a processor bus and a memory controller. The processor issues a memory request, which includes a memory command, such as a read command, and an address designating the location from which data or instructions are to be read. The memory controller uses the command and address to generate appropriate command signals as well as row and column addresses, which are applied to the system memory. In response to the commands and addresses, data is transferred between the system memory and the processor. The memory controller is often part of a system controller, which also includes bus bridge circuitry for coupling the processor bus to an expansion bus, such as a PCI bus. Although the operating speed of memory devices has continuously increased, this increase in operating speed has not kept pace with increases in the operating speed of processors. Even slower has been the increase in operating speed of memory controllers coupling processors to memory devices. The relatively slow speed of memory controllers and memory devices limits the data bandwidth between the processor and the memory devices. In addition to the limited bandwidth between processors and memory devices, the performance of computer systems is also limited by latency problems that increase the time required to read data from system memory devices.
More specifically, when a memory device read command is coupled to a system memory device, such as a synchronous DRAM ("SDRAM") device, the read data are output from the SDRAM device only after a delay of several clock periods. Therefore, although SDRAM devices can synchronously output burst data at a high data rate, the delay in initially providing the data can significantly slow the operating speed of a computer system using such SDRAM devices. One approach to alleviating the memory latency problem is to use multiple memory devices coupled to the processor through a memory hub. In a memory hub architecture, a system controller or memory controller is coupled to several memory modules, each of which includes a memory hub coupled to several memory devices. The memory hub efficiently routes memory requests and responses between the controller and the memory devices. Computer systems employing this architecture can have a higher bandwidth because a processor can access one memory device while another memory device is responding to a prior memory access. For example, the processor can output write data to one of the memory devices in the system while another memory device in the system is preparing to provide read data to the processor. Although computer systems using memory hubs may provide superior performance, they nevertheless often fail to operate at optimum speed for several reasons. For example, even though memory hubs can provide computer systems with a greater memory bandwidth, they still suffer from latency problems of the type described above. More specifically, although the processor may communicate with one memory device while another memory device is preparing to transfer data, it is sometimes necessary to receive data from one memory device before the data from another memory device can be used. 
In the event data must be received from one memory device before data received from another memory device can be used, the latency problem continues to slow the operating speed of such computer systems. In addition, the memory hub is designed to handle multiple memory requests. Thus, it is only when the memory hub is busy servicing more than one memory request that the benefits of communicating with multiple memory requests are actually realized. As a result, when the memory hub is not busy, the slower and more complex logic used by the memory hub to handle multiple memory requests creates additional latency when servicing only one memory request. There is therefore a need for a memory hub that bypasses the normal logic used to handle multiple memory requests when only one memory request is being serviced.

SUMMARY OF THE INVENTION

The present invention is directed to a computer system and method of accessing a plurality of memory devices with a memory hub. The computer system includes a plurality of memory modules coupled to a memory hub controller. Each of the memory modules includes the plurality of memory devices and the memory hub. The memory hub includes a link interface, a sequencer, a bypass circuit, and a memory device interface. The link interface receives memory requests from the memory hub controller and forwards the memory requests to either the sequencer or both the sequencer and the bypass circuit based on the status of the memory device interface. The memory device interface couples memory requests to the memory devices. When the memory device interface is busy servicing one or more memory requests, the sequencer generates memory requests and couples the memory requests to the memory device interface. When the memory device interface is not busy servicing one or more memory requests, the bypass circuit generates memory requests and couples a portion of each of the memory requests to the memory device interface.
The sequencer generates and couples the remaining portion of each of the memory requests to the memory device interface. The bypass circuit allows the memory requests to more quickly access the memory devices when the memory device interface is not busy, thereby avoiding the additional latency that would otherwise be created by the sequencer. As will be apparent, the invention is capable of modifications in various obvious aspects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram of a computer system according to one example of the invention in which a memory hub is included in each of a plurality of memory modules. Figure 2 is a block diagram of a memory hub used in the computer system of Figure 1.

DETAILED DESCRIPTION OF THE INVENTION

A computer system 100 according to one example of the invention is shown in Figure 1. The computer system 100 includes a processor 104 for performing various computing functions, such as executing specific software to perform specific calculations or tasks. The processor 104 includes a processor bus 106 that normally includes an address bus, a control bus, and a data bus. The processor bus 106 is typically coupled to cache memory 108, which, as previously mentioned, is usually static random access memory ("SRAM"). Finally, the processor bus 106 is coupled to a system controller 110, which is also sometimes referred to as a "North Bridge" or "memory controller." The system controller 110 serves as a communications path to the processor 104 for a variety of other components. More specifically, the system controller 110 includes a graphics port that is typically coupled to a graphics controller 112, which is, in turn, coupled to a video terminal 114.
The system controller 110 is also coupled to one or more input devices 118, such as a keyboard or a mouse, to allow an operator to interface with the computer system 100. Typically, the computer system 100 also includes one or more output devices 120, such as a printer, coupled to the processor 104 through the system controller 110. One or more data storage devices 124 are also typically coupled to the processor 104 through the system controller 110 to allow the processor 104 to store data or retrieve data from internal or external storage media (not shown). Examples of typical storage devices 124 include hard and floppy disks, tape cassettes, and compact disk read-only memories (CD-ROMs). The system controller 110 includes a memory hub controller 128 that is coupled to several memory modules 130a, 130b,... 130n, which serve as system memory for the computer system 100. The memory modules 130 are preferably coupled to the memory hub controller 128 through a high-speed link 134, which may be an optical or electrical communication path or some other type of communications path. In the event the high-speed link 134 is implemented as an optical communication path, the optical communication path may be in the form of one or more optical fibers, for example. In such case, the memory hub controller 128 and the memory modules will include an optical input/output port or separate input and output ports coupled to the optical communication path. The memory modules 130 are shown coupled to the memory hub controller 128 in a multi-drop arrangement in which the single high-speed link 134 is coupled to all of the memory modules 130. However, it will be understood that other topologies may also be used, such as a point-to-point coupling arrangement in which a separate high-speed link (not shown) is used to couple each of the memory modules 130 to the memory hub controller 128.
A switching topology may also be used in which the memory hub controller 128 is selectively coupled to each of the memory modules 130 through a switch (not shown). Other topologies that may be used will be apparent to one skilled in the art. Each of the memory modules 130 includes a memory hub 140 for controlling access to six memory devices 148, which, in the example illustrated in Figure 1, are synchronous dynamic random access memory ("SDRAM") devices. However, a fewer or greater number of memory devices 148 may be used, and memory devices other than SDRAM devices may, of course, also be used. The memory hub 140 is coupled to each of the system memory devices 148 through a bus system 150, which normally includes a control bus, an address bus and a data bus. One example of the memory hub 140 of Figure 1 is shown in Figure 2. The memory hub 140 includes a link interface 152 that is coupled to the high-speed link 134. The nature of the link interface 152 will depend upon the characteristics of the high-speed link 134. For example, in the event the high-speed link 134 is implemented using an optical communications path, the link interface 152 will include an optical input/output port and will convert optical signals coupled through the optical communications path into electrical signals. In any case, the link interface 152 preferably includes a buffer, such as a first-in, first-out buffer 154, for receiving and storing memory requests as they are received through the high-speed link 134. The memory requests are stored in the buffer 154 until they can be processed by the memory hub 140. When the memory hub 140 is able to process a memory request, one of the memory requests stored in the buffer 154 is transferred to a memory sequencer 160. The memory sequencer 160 converts the memory requests from the format output by the memory hub controller 128 into a memory request having a format that can be used by the memory devices 148.
These re-formatted request signals will normally include memory command signals, which are derived from memory commands contained in the memory requests received by the memory hub 140, and row and column address signals, which are derived from an address contained in the memory requests received by the memory hub 140. In the event one of the memory requests is a write memory request, the re-formatted request signals will normally include write data signals which are derived from write data contained in the memory request received by the memory hub 140. For example, where the memory devices 148 are conventional DRAM devices, the memory sequencer 160 will output row address signals, a row address strobe ("RAS") signal, an active high write/active low read signal ("W/R*"), column address signals and a column address strobe ("CAS") signal. The re-formatted memory requests are preferably output from the sequencer 160 in the order they will be used by the memory devices 148. However, the sequencer 160 may output the memory requests in a manner that causes one type of request, such as read requests, to be processed before other types of requests, such as write requests. The sequencer 160 provides a relatively high bandwidth because it allows the memory hub controller 128 to send multiple memory requests to the memory module 130 containing the memory hub 140, even though previously sent memory requests have not yet been serviced. As a result, the memory requests can be sent at a rate that is faster than the rate at which the memory module 130 can service those requests. The sequencer 160 simply formats the signals of one memory request while memory devices are servicing another memory request. In addition, the sequencer 160 may reorder the memory requests, such as placing a series of read requests before previously received write requests, which reduces the memory read latency.
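The reformatting and reordering behavior of the sequencer described above can be illustrated with a small behavioral sketch. This is not the patented hardware implementation: the request format, the 13-bit/10-bit address split, and the read-before-write policy shown here are illustrative assumptions.

```python
# Behavioral sketch of a memory sequencer: convert hub-format memory
# requests into DRAM-style row/RAS and column/CAS signal groups, and
# reorder read requests ahead of previously received writes (one
# possible policy). Field names and address widths are assumptions.

def reformat(request, row_bits=13, col_bits=10):
    """Split a flat address into row/RAS and column/CAS signal groups."""
    col = request["addr"] & ((1 << col_bits) - 1)
    row = (request["addr"] >> col_bits) & ((1 << row_bits) - 1)
    signals = [("ROW", row), ("RAS", 1), ("COL", col), ("CAS", 1)]
    if request["cmd"] == "write":
        signals.append(("WDATA", request["data"]))  # write data signals
    return {"cmd": request["cmd"], "signals": signals}

def schedule(requests):
    """Reorder reads before writes, then reformat, preserving the
    first-in, first-out order within each group."""
    reads = [r for r in requests if r["cmd"] == "read"]
    writes = [r for r in requests if r["cmd"] == "write"]
    return [reformat(r) for r in reads + writes]

queue = [
    {"cmd": "write", "addr": 0x1A2B, "data": 0xFF},
    {"cmd": "read", "addr": 0xF00D},
]
ordered = schedule(queue)  # the read is now serviced first
```

The reordering step is what the text describes as placing a series of read requests before previously received writes to reduce read latency; a real sequencer would also enforce ordering constraints between reads and writes to the same address, which this sketch ignores.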
The memory sequencer 160 applies the re-formatted memory requests to a memory device interface 166. The nature of the memory device interface 166 will again depend upon the characteristics of the memory devices 148. In any case, the memory device interface 166 preferably includes a buffer, such as a FIFO buffer 168, for receiving and storing one or more memory requests as they are received from the link interface 152. The memory requests are stored in the buffer 168 until they can be processed by the memory devices 148. The memory requests are described above as being received by the memory hub 140 in a format that is different from the format in which the memory requests are applied to the memory devices 148. However, the memory hub controller 128 may instead re-format the memory requests from the processor 104 (Figure 1) to a format that can be used by the memory devices 148. In such case, it is not necessary for the sequencer 160 to re-format the memory requests. Instead, the sequencer 160 simply schedules the re-formatted memory request signals in the order needed for use by the memory devices 148. The memory request signals for one or more memory requests are then transferred to the memory device interface 166 so they can subsequently be applied to the memory devices 148. As previously explained, the sequencer 160 can provide a memory bandwidth that is significantly higher than the memory bandwidth of conventional computer systems. Although the sequencer 160 provides this advantage when the memory hub controller 128 is issuing memory commands at a rapid rate, the sequencer 160 does not provide this advantage when the memory hub controller 128 is issuing memory requests to a memory module 130 at a rate that can be serviced by the memory module 130. In fact, the sequencer 160 can actually increase the read latency of the memory module 130 when no unserviced memory requests are queued in the memory hub 140.
The increased latency results from the need to store the memory requests in the sequencer 160, re-format the memory requests, schedule resulting control signals in the sequencer 160, and begin applying those control signals to the memory devices 148. Also, the memory sequencer 160 has a relatively slow clocking structure that can delay the memory hub 140 from issuing to the memory devices memory requests received from the memory hub controller 128. The memory hub 140 shown in Figure 2 avoids the potential disadvantage of using the memory sequencer 160 by including the bypass circuit 170. The bypass circuit 170 allows the memory requests to access the memory devices 148 more quickly when the memory device interface 166 is not busy servicing at least one memory request. As explained above, when multiple memory requests are not being handled by the sequencer 160, the advantages of servicing memory requests with the sequencer 160 no longer exist. Instead, the sequencer 160 increases the memory read latency. The bypass circuit 170, however, allows the memory hub 140 to decrease the access time of each memory request by handling an initial portion of the signal sequencing normally handled by the sequencer 160, and it preferably uses a faster clocking structure than the sequencer 160. Thus, the bypass circuit 170 decreases the access time of the memory requests to the memory devices 148. The bypass circuit 170 includes conventional circuitry that converts each of the memory requests from the format output by the memory hub controller 128 into a memory request with a format that can be used by the memory devices 148. While the bypass circuit 170 may handle reformatting of the entire memory request, the bypass circuit 170 preferably handles the row address portion of the memory request. Similar to the memory sequencer 160 described above, the bypass circuit 170 receives the memory request from the link interface 152.
The bypass circuit 170 then reformats the address portion of the memory request into a row address signal. The bypass circuit 170 outputs the row address signal to the memory device interface 166 and then outputs a row address strobe (RAS) to the memory device interface 166. These signals allow the memory device interface 166 to access the addressed row of one of the memory devices 148. By the time the memory devices have processed the portion of the memory request provided by the bypass circuit 170, the sequencer 160 is ready to provide the remaining portion of the memory request. As shown in Figure 2, the bypass circuit 170 utilizes a link-in clock 176 from the memory hub controller 128 to forward the row address and RAS signals to the memory device interface 166. The link-in clock 176 is received by the link interface 152 and forwarded to the bypass circuit 170. The bypass circuit includes logic that delays and balances the link-in clock 176 with the clock forwarded from the link interface 152 with the memory requests. More specifically, the link-in clock 176 is used to forward each memory request from the memory hub controller 128 to the memory hub 140, in particular to the link interface 152. The link-in clock 176 is then forwarded to the bypass circuit 170. The memory request output by the link interface 152 to the bypass circuit 170 uses a controller clock, which is a slower clock used by the memory hub 140 to process memory requests. The bypass circuit 170 delays and balances the link-in clock 176 with the controller clock, which allows the bypass circuit 170 to use the link-in clock 176 to service the row portion of the memory request. The faster link-in clock 176 allows the bypass circuit 170 to process and forward the row address and RAS signals more quickly than the controller clock used by the sequencer 160.
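The division of labor described above — the bypass circuit peeling off only the row portion of a request while the sequencer prepares the rest — can be sketched as follows. The 10-bit column width and the signal names are illustrative assumptions, since the text does not specify an address format.

```python
# Sketch of the bypass path: only the row address and RAS strobe are
# generated early by the bypass circuit; the command, column address,
# and CAS strobe are left for the sequencer to format while the row
# access is already underway. Field names and the address split are
# illustrative assumptions.

COL_BITS = 10  # assumed column-address width

def bypass_row_portion(request):
    """Signals the bypass circuit drives as soon as the interface is idle."""
    return [("ROW", request["addr"] >> COL_BITS), ("RAS", 1)]

def sequencer_remaining_portion(request):
    """Signals the sequencer formats while the row access proceeds."""
    col = request["addr"] & ((1 << COL_BITS) - 1)
    return [("CMD", request["cmd"]), ("COL", col), ("CAS", 1)]

req = {"cmd": "read", "addr": 0xF00D}
early = bypass_row_portion(req)        # driven on the fast link-in clock
late = sequencer_remaining_portion(req)  # driven on the slower controller clock
```

The point of the split is timing: the row/RAS pair goes out on the faster link-in clock, so the DRAM row activation overlaps with the sequencer's formatting of the column and command signals.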
While the bypass circuit 170 handles the row portion of the memory request from the link interface 152, the remaining portion of the memory request, for example the command signal and column address, is formatted and forwarded by the sequencer 160. This allows the sequencer 160 to format the remaining portion of the memory request, as explained above, while the row address and RAS signals are accessing the addressed row of one of the memory devices 148. Thus, the sequencer 160 does not have to service the row portion of the memory request. This structure decreases the overall access time to the memory devices 148, thus reducing the latency of the memory hub 140, because the bypass circuit 170 forwards the row address and RAS signals to one of the memory devices more quickly than the sequencer 160. In addition, during the clock delays used by the row address and RAS signals to access one of the memory devices 148, the sequencer 160 is formatting and ordering the remaining signals of the memory request. Thus, once the remaining signals are formatted and ordered by the sequencer 160, they can be immediately coupled to the memory device 148 that has already been accessed by the row address signal. The bypass circuit 170 is utilized by the memory hub 140 when the memory device interface 166 is not busy servicing memory requests. The memory device interface 166 generates a high "ACTIVE/IDLE*" signal when the buffer 168 of the memory device interface 166 is active and contains, for example, one or more memory requests. The high ACTIVE/IDLE* signal indicates that the memory device interface is busy, so memory requests can be more efficiently handled by using the sequencer 160. When the buffer 168 contains, for example, less than one memory request, the memory device interface generates a low "ACTIVE/IDLE*" signal.
The low ACTIVE/IDLE* signal indicates that the memory device interface is not busy, so the memory hub 140 uses the bypass circuit 170 and sequencer 160 to service memory requests. The ACTIVE and IDLE* conditions generated by the memory device interface 166 are not limited to the circumstances described above. For example, the memory device interface 166 may generate an ACTIVE signal based on the buffer 168 containing a certain percentage of memory requests and likewise an IDLE* signal when the number of memory requests is under a certain percentage. The memory hub 140, shown in Figure 2, further includes a multiplexer 172, which works in conjunction with the memory device interface 166 to service the memory requests. The multiplexer 172 has inputs coupled to the bypass circuit 170 and the sequencer 160, an output coupled to the memory device interface 166, and a control input coupled to the memory device interface 166. The multiplexer 172 uses the ACTIVE/IDLE* signal from the memory device interface 166 to couple memory requests to the memory device interface 166. When the multiplexer 172 receives an ACTIVE signal, or a high ACTIVE/IDLE* signal, the multiplexer 172 couples memory requests from the sequencer 160 to the memory device interface 166. Likewise, when the multiplexer 172 receives an IDLE* signal, or a low ACTIVE/IDLE* signal, the multiplexer 172 couples a portion of each memory request from the bypass circuit 170 to the memory device interface 166 and a portion of each memory request from the sequencer 160 to the memory device interface 166. The ACTIVE/IDLE* signal generated by the memory device interface 166 is also used to determine whether memory requests should be forwarded from the link interface 152 to the sequencer 160 or to both the bypass circuit 170 and the sequencer 160. Both the sequencer 160 and the bypass circuit 170 are coupled to the memory device interface 166.
When the memory device interface 166 generates an ACTIVE signal, the sequencer 160 receives the memory requests from the link interface 152 and generates and couples memory requests to the multiplexer 172. When the memory device interface 166 generates an IDLE* signal, both the sequencer 160 and the bypass circuit 170 receive the memory requests and handle specific portions of each of the memory requests, as described above. Although the present invention has been described with reference to preferred embodiments, persons skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
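The ACTIVE/IDLE* routing described above can be modeled with a small behavioral sketch. The class and function names and the occupancy threshold are illustrative assumptions; the text describes both a one-or-more-request condition and a percentage-based variant.

```python
# Minimal behavioral sketch of the ACTIVE/IDLE* routing described above.
# The threshold, class name, and capacity are illustrative assumptions.

class MemoryDeviceInterface:
    def __init__(self, active_threshold: int = 1):
        self.buffer = []                       # pending memory requests (buffer 168)
        self.active_threshold = active_threshold  # requests that make it "busy"

    def active(self) -> bool:
        """High ACTIVE/IDLE* when the buffer holds enough pending requests."""
        return len(self.buffer) >= self.active_threshold

def route_request(iface: MemoryDeviceInterface) -> str:
    """Mux selection: busy -> sequencer only; idle -> bypass circuit handles
    the row portion while the sequencer formats the remainder."""
    if iface.active():
        return "sequencer"
    return "bypass+sequencer"

iface = MemoryDeviceInterface()
assert route_request(iface) == "bypass+sequencer"  # empty buffer -> IDLE*
iface.buffer.append("pending-read")
assert route_request(iface) == "sequencer"         # occupied buffer -> ACTIVE
```

Raising `active_threshold` models the percentage-based variant, where the interface reports ACTIVE only once the buffer reaches a chosen fill level.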
A semiconductor device (10) and method of manufacture. A liner (62) composed of a high-K material having a relative permittivity of greater than eight is formed adjacent at least the sidewalls of a gate (28). Sidewall spacers (66) are formed adjacent the gate and spaced apart from the gate by the liner. The liner can be removed using an etch process that has substantially no reaction with a gate dielectric (34) of the gate.
CLAIMS What is claimed is: 1. A method of fabricating a semiconductor device (10), comprising the steps of: providing a layer of semiconductor material (16); forming a gate (28) including a gate dielectric (34) and a gate electrode (32) on the layer of semiconductor material, the gate electrode spaced from the layer of semiconductor material by the gate dielectric and the gate defining sidewalls; forming a liner (62) from a material having a relative permittivity of greater than about 8, the liner formed adjacent at least the sidewalls of the gate; forming sidewall spacers (66) adjacent the gate and spaced apart from the gate by the liner; implanting dopant species to form deep doped regions of a source and a drain in the layer of semiconductor material; removing the spacers; and removing the liner using an etch process that has substantially no reaction with the gate dielectric. 2. The method according to claim 1, wherein the liner includes liner segments extending laterally from the gate over the layer of semiconductor material and the spacers are formed on top of at least a portion of the laterally extending segments of the liner. 3. The method according to any of claims 1-2, further comprising the step of implanting dopant species to form extension regions of the source and the drain in the layer of semiconductor material after removal of the spacers and the liner. 4. The method according to claim 3, wherein the implantation of dopant species to form the extension regions is part of a solid phase epitaxy (SPE) process. 5. The method according to any of claims 1-4, wherein the liner is about 50 Å to about 150 Å thick. 6.
A semiconductor device (10) comprising: a layer of semiconductor material (16); a gate (28) disposed on the layer of semiconductor material, the gate including a gate dielectric (34) and a gate electrode (32), the gate electrode spaced from the layer of semiconductor material by the gate dielectric and the gate defining sidewalls; and a disposable liner (62) composed of a material having a relative permittivity of greater than about 8, the liner disposed adjacent at least the sidewalls of the gate, wherein the liner is removable by an etch process that has substantially no reaction with the gate dielectric. 7. The semiconductor device according to claim 6, further comprising disposable sidewall spacers (66) disposed adjacent the gate and spaced apart from the gate by the liner. 8. The semiconductor device according to claim 7, wherein the liner includes liner segments extending laterally from the gate over the layer of semiconductor material and the spacers are formed on top of at least a portion of the laterally extending segments of the liner. 9. The semiconductor device according to any of claims 6-8, wherein the liner is composed of one or more materials selected from HfO2, ZrO2, CeO2, Al2O3, TiO2 and mixtures thereof. 10. A semiconductor device (10) comprising: a layer of semiconductor material (16); a gate (28) disposed on the layer of semiconductor material, the gate including a gate dielectric (34) and a gate electrode (32), the gate electrode spaced from the layer of semiconductor material by the gate dielectric and the gate defining sidewalls; a liner (62) disposed adjacent at least the sidewalls of the gate; and disposable sidewall spacers (66) composed of a material having a relative permittivity of greater than about 8, the spacers disposed adjacent the gate and spaced apart from the gate by the liner, wherein the spacers are removable by an etch process that has substantially no reaction with the liner.
SEMICONDUCTOR DEVICE FORMED WITH DISPOSABLE SPACER AND LINER USING HIGH-K MATERIAL AND METHOD OF FABRICATION TECHNICAL FIELD The present invention relates generally to semiconductor devices and the fabrication thereof and, more particularly, to a semiconductor device fabricated with the use of a disposable high-K material layer. BACKGROUND Fabrication of semiconductor devices, such as metal oxide semiconductor field effect transistors (MOSFET) and complementary metal oxide semiconductor (CMOS) integrated circuits, involves numerous processing steps. Each step may potentially have an adverse effect on one or more device components. In a typical MOSFET, a source and a drain are formed in an active region of a semiconductor layer by implanting N-type or P-type impurities in a layer of semiconductor material. Disposed between the source and drain is a body region. Disposed above the body region is a gate electrode. The gate electrode and the body are spaced apart by a gate dielectric layer. It is noted that MOSFETs can be formed in bulk format (for example, the active region being formed in a silicon substrate) or in a semiconductor-on-insulator (SOI) format (for example, in a silicon film that is disposed on an insulating layer that is, in turn, disposed on a silicon substrate). A pervasive trend in modern integrated circuit manufacture is to produce transistors, and the structural features thereof, that are as small as possible. Although the fabrication of smaller transistors allows more transistors to be placed on a single monolithic substrate for the formation of relatively large circuit systems in a relatively small die area, slight imperfections in the formation of the component parts of a transistor can lead to poor transistor performance and failure of the overall circuit. As an example, during the fabrication process, certain unwanted portions of various device layers are removed using wet and/or dry chemical etching techniques.
During the etching process, desired portions of the layer being etched are protected by a disposable mask layer or a previously patterned device component formed from a material that has very little reaction with the etchant. Occasionally, the desired portion of the layer being etched is partially etched, forming an undercut below the edges of the protective layer. Such an undercut can lead to a reduction in the operational performance of the device being fabricated. As an example, in some device fabrication processes, the gate dielectric can become undercut by the removal of a disposable spacer or liner. Accordingly, there exists a need in the art for semiconductor devices, such as MOSFETs, that are formed using techniques intended to minimize imperfections in the resulting device. SUMMARY OF THE INVENTION According to one aspect of the invention, a method of fabricating a semiconductor device is provided. The method includes the steps of providing a layer of semiconductor material; forming a gate including a gate dielectric and a gate electrode on the layer of semiconductor material, the gate electrode spaced from the layer of semiconductor material by the gate dielectric and the gate defining sidewalls; forming a liner from a material having a relative permittivity of greater than about 8, the liner formed adjacent at least the sidewalls of the gate; forming sidewall spacers adjacent the gate and spaced apart from the gate by the liner; implanting dopant species to form deep doped regions of a source and a drain in the layer of semiconductor material; removing the spacers; and removing the liner using an etch process that has substantially no reaction with the gate dielectric. According to another aspect of the invention, a semiconductor device is provided.
The semiconductor device includes a layer of semiconductor material; a gate disposed on the layer of semiconductor material, the gate including a gate dielectric and a gate electrode, the gate electrode spaced from the layer of semiconductor material by the gate dielectric and the gate defining sidewalls; and a disposable liner composed of a material having a relative permittivity of greater than about 8, the liner disposed adjacent at least the sidewalls of the gate, wherein the liner is removable by an etch process that has substantially no reaction with the gate dielectric. According to yet another aspect of the invention, the invention is a semiconductor device. The semiconductor device includes a layer of semiconductor material; a gate disposed on the layer of semiconductor material, the gate including a gate dielectric and a gate electrode, the gate electrode spaced from the layer of semiconductor material by the gate dielectric and the gate defining sidewalls; a liner disposed adjacent at least the sidewalls of the gate; and disposable sidewall spacers composed of a material having a relative permittivity of greater than about 8, the spacers disposed adjacent the gate and spaced apart from the gate by the liner, wherein the spacers are removable by an etch process that has substantially no reaction with the liner. BRIEF DESCRIPTION OF DRAWINGS These and further features of the present invention will be apparent with reference to the following description and drawings, wherein: FIG. 1 is a schematic block diagram of a CMOS transistor formed in accordance with the present invention; FIG. 2 is a flow chart illustrating a method of forming the CMOS transistor of FIG. 1; and FIGs. 3A through 3C illustrate the CMOS transistor of FIG. 1 in various stages of manufacture.
DISCLOSURE OF INVENTION In the detailed description that follows, identical components have been given the same reference numerals, regardless of whether they are shown in different embodiments of the present invention. To illustrate the present invention in a clear and concise manner, the drawings may not necessarily be to scale and certain features may be shown in somewhat schematic form. Referring initially to FIG. 1, a semiconductor device 10 is illustrated. The illustrated semiconductor device 10 is a metal oxide semiconductor field effect transistor (MOSFET) for use in, for example, the construction of a complementary metal oxide semiconductor (CMOS) integrated circuit. As one skilled in the art will appreciate, the fabrication techniques described herein can be used for other types of semiconductor devices (e.g., other types of transistors, memory cells, etc.) and the illustration of a MOSFET is merely exemplary. However, the semiconductor device 10 will sometimes be referred to herein as a MOSFET 12. The MOSFET 12 has an active region 14 formed in a layer of semiconductor material 16. The layer of semiconductor material 16 can be, for example, a silicon substrate for the formation of bulk type devices. Alternatively, the layer of semiconductor material 16 can be, for example, a silicon film formed on a layer of insulating material 17a (denoted by dashed lines in FIG. 1). The insulating layer 17a is, in turn, formed on a semiconductor substrate 17b so that the resultant devices are formed in semiconductor-on-insulator (SOI) format as is well known in the art. The active region 14 includes a source 18, a drain 20 and a body 22 disposed between the source 18 and the drain 20. The source 18 and the drain 20 respectively include deep doped regions 24a, 24b and extensions 26a, 26b.
A gate 28 is disposed on the layer of semiconductor material 16 over the body 22 and defines a channel 30 within the body 22 and interposed between the source 18 and the drain 20. The gate 28 includes a gate electrode 32 spaced apart from the layer of semiconductor material 16 by a gate dielectric 34. As illustrated, the extensions 26a, 26b may laterally diffuse under the gate 28 as is known in the art. Referring now to FIG. 2, a method 50 of forming the MOSFET 12 is illustrated. As will become apparent from the discussion below, the method 50 allows the formation of the MOSFET 12 where the gate dielectric 34 has substantially no undercut with respect to the gate electrode 32. This is accomplished by using a disposable high-K liner that has high etch selectivity with respect to the gate electrode 32. With additional reference to FIG. 3A, the method 50 starts in step 52 where the layer of semiconductor material 16 is provided. As indicated above, the layer of semiconductor material can be a semiconductor substrate (such as a silicon substrate) for the formation of bulk type devices or a semiconductor film (such as a silicon film) formed as part of an SOI substrate stack. If desired, isolation regions (for example, shallow trench isolation, or STI, regions) can be formed in the layer of semiconductor material 16 to define the size and placement of multiple active regions 14 within the layer of semiconductor material 16. The isolation regions are formed in step 54 of method 50, but for simplicity of the drawing figures attached hereto the isolation regions are not shown. Next, in step 56, a layer of material used to form the gate dielectric 34 is formed on the layer of semiconductor material 16. The gate dielectric 34 material is formed by growing or depositing a layer of gate dielectric material on top of the layer of semiconductor material 16.
It is noted that the layer of material for the gate dielectric 34 will usually extend over at least the entire active region 14 and will be about 10 Å to about 50 Å thick. In the illustrated embodiment, the material used for the gate dielectric 34 is a "standard-K dielectric." A standard-K dielectric refers to a dielectric material having a relative permittivity, or K, of up to about 10. Relative permittivity is the ratio of the absolute permittivity (ε) found by measuring capacitance of the material to the permittivity of free space (ε0), that is K = ε/ε0. Example standard-K dielectric materials include, for example, silicon dioxide (K of about 3.9), silicon oxynitride (K of about 6 to 9 depending on the relative content of oxygen and nitrogen) and silicon nitride (K of about 6 to 9). Next, in step 58, a layer of material used to form the gate electrode 32 is grown or deposited on the layer of material used to form the gate dielectric 34. The layer of material used to form the gate electrode 32 is about 500 Å to about 2,000 Å thick. The material used for the gate electrode 32 can be, for example, polysilicon, polysilicon-germanium, titanium nitride (e.g., TiN), tungsten (W) or tantalum nitride (e.g., TaN, Ta3N5). After the layer of material used to form the gate dielectric 34 is grown or deposited in step 56 and the layer of material used to form the gate electrode 32 is grown or deposited in step 58, the gate 28, including the gate electrode 32 and the gate dielectric 34, is patterned in step 60. Techniques for patterning the gate 28 will be known to those skilled in the art and will not be described in detail herein. It is noted, however, that the layer of material used to form the gate dielectric 34 can be patterned separately from patterning of the layer of material used to form the gate electrode 32.
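The definition K = ε/ε0 given above can be illustrated with a short worked example using the parallel-plate relation C = K ε0 A / d, so that K can be inferred from a measured capacitance. The capacitor dimensions below are illustrative values, not figures from this description.

```python
# Worked example of the definition K = eps/eps0 given above, using the
# parallel-plate relation C = K * eps0 * A / d. Dimensions are illustrative.

EPS0 = 8.854e-12  # permittivity of free space, F/m

def relative_permittivity(c_farads: float, area_m2: float, thickness_m: float) -> float:
    """K inferred from a measured capacitance of a parallel-plate film."""
    return c_farads * thickness_m / (EPS0 * area_m2)

# A 20 A (2 nm) film over an assumed 1 square-micron capacitor, within the
# 10-50 A gate dielectric thickness range stated above.
area = 1e-12          # 1 um^2 in m^2
thickness = 2e-9      # 2 nm
c = 3.9 * EPS0 * area / thickness   # capacitance a K = 3.9 (SiO2-like) film would show
assert abs(relative_permittivity(c, area, thickness) - 3.9) < 1e-9
```

The same computation applied to a high-K film of equal thickness yields a proportionally larger capacitance, which is why the standard-K/high-K distinction is stated in terms of K alone.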
In addition, patterning of the layer of material used to form the gate dielectric 34 can be conducted before formation of the layer of material used to form the gate electrode 32 in step 58. Alternatively, the layer of material used to form the gate dielectric 34 is not patterned or is patterned after implantation of ion species to dope the source 18 and the drain 20 as discussed below in more detail. With additional reference to FIG. 3B, the method 50 continues in step 64 by forming a liner 62 over the exposed areas of the active region 14, along the sidewalls of the gate 28 and on top of the gate 28. In the illustrated embodiment, the liner 62 is a layer of material formed to be about 50 Å to about 150 Å thick. The liner 62 is formed from a "high-K" material. As used herein, the term "high-K" refers to a material having a relative permittivity in one embodiment of about 10 or more and in another embodiment of about 20 or more. Exemplary high-K materials are identified below in Table 1. It is noted that Table 1 is not an exhaustive list of high-K materials. Other high-K materials include, for example, barium titanate (BaTiO3), barium strontate (BaSrO3), BST (Ba1-xSrxTiO3), barium strontium oxide (Ba1-xSrxO3), PST (PbScxTa1-xO3), and PZN (PbZnxNb1-xO3). One skilled in the art will appreciate that other high-K materials may be available.
TABLE 1

Dielectric Material: Approximate Relative Permittivity (K)
alumina (Al2O3): 9
zirconium silicate: 12
hafnium silicate: 15
lanthanum oxide (La2O3): 20-30
hafnium oxide (HfO2): 40
zirconium oxide (ZrO2): 25
cesium oxide (CeO2): 26
bismuth silicon oxide (Bi4Si2O12): 35-75
titanium dioxide (TiO2): 30
tantalum oxide (Ta2O5): 26
tungsten oxide (WO3): 42
yttrium oxide (Y2O3): 20

It is noted that the K-values for both standard-K and high-K materials may vary to some degree depending on the exact nature of the dielectric material. Thus, for example, differences in purity, crystallinity and stoichiometry may give rise to variations in the exact K-value determined for any particular dielectric material. As used herein, when a material is referred to by a specific chemical name or formula, the material may include non-stoichiometric variations of the stoichiometrically exact formula identified by the chemical name. For example, tantalum oxide, when stoichiometrically exact, has the chemical formula Ta2O5, but may include variants of stoichiometric Ta2O5, which may be referred to as TaxOy, in which either of x or y vary by a small amount. For example, in one embodiment, x may vary from about 1.5 to 2.5, and y may vary from about 4.5 to about 5.5. In another embodiment, x may vary from about 1.75 to 2.25, and y may vary from about 4 to about 6. Such variations from the exact stoichiometric formula fall within the definition of tantalum oxide.
Similar variations from exact stoichiometry for all chemical names or formulas used herein are intended to fall within the scope of the present invention. For example, again using tantalum oxide, when the formula Ta2O5 is used, TaxOy is included within the meaning. Thus, in the present disclosure, exact stoichiometry is intended only when such is explicitly so stated. As will be understood by those of skill in the art, such variations may occur naturally, or may be sought and controlled by selection and control of the conditions under which materials are formed. In step 68 and as illustrated in greater detail in FIG. 3B, the method 50 continues by forming spacers (also known in the art as sidewall spacers) 66a and 66b. More specifically, after the liner has been formed in step 64, spacers 66a and 66b are formed adjacent the portion of the liner 62 disposed along the sidewalls of the gate 28 and on top of the liner 62 extending laterally from the gate 28 for a desired distance (e.g., about 300 Å to about 1,000 Å). The spacers 66a and 66b are formed from a material such as a nitride (e.g., silicon nitride, or Si3N4). As will become more apparent below, the liner 62 and the spacers 66a, 66b are disposable. The spacers 66a and 66b and the gate 28 act as a self-aligned mask for implantation of the deep doped regions 24a and 24b in step 70. Implanting dopant species to form the deep doped regions 24a and 24b of the source 18 and the drain 20, respectively, is well known in the art and will not be described in great detail herein. Briefly, to form a P-type deep doped region, ions such as boron, gallium or indium can be implanted with an energy of about 5 keV to 30 keV and a dose of about 1x10^15 atoms/cm^2 to about 5x10^15 atoms/cm^2. N-type deep doped regions 24 can be formed by implanting ions, such as antimony, phosphorous or arsenic, at an energy of about 3 keV to about 15 keV and a dose of about 1x10^15 atoms/cm^2 to about 1x10^16 atoms/cm^2.
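The implant windows just stated can be captured as a simple range check. The species lists and numeric ranges come from the text above; the function name and data layout are assumptions made for the sketch.

```python
# Illustrative check of the deep doped region implant windows stated above.
# Range values come from the text; the function name and dict layout are
# assumptions.

DEEP_IMPLANT_WINDOWS = {
    # dopant type: ((energy range in keV), (dose range in atoms/cm^2))
    "P": ((5.0, 30.0), (1e15, 5e15)),   # boron, gallium or indium
    "N": ((3.0, 15.0), (1e15, 1e16)),   # antimony, phosphorous or arsenic
}

def implant_in_window(dopant_type: str, energy_kev: float, dose: float) -> bool:
    """True if an implant recipe falls inside the stated window."""
    (e_lo, e_hi), (d_lo, d_hi) = DEEP_IMPLANT_WINDOWS[dopant_type]
    return e_lo <= energy_kev <= e_hi and d_lo <= dose <= d_hi

assert implant_in_window("P", 10.0, 2e15)       # boron-like recipe, in range
assert not implant_in_window("N", 20.0, 2e15)   # energy above the N-type window
```

A check of this kind only restates the stated ranges; actual recipes are, as the text notes, determined by well-known practice for the device being fabricated.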
Following implantation of the deep doped source and drain regions 24a and 24b, an anneal cycle is carried out to recrystallize the layer of semiconductor material 16 at a high temperature of, for example, about 950°C to about 1,000°C. It is noted that the ions used to form the deep doped regions 24a, 24b are implanted through the liner 62 and may laterally diffuse slightly under the spacers 66a, 66b as is conventional. With additional reference to FIG. 3C, in step 72 the spacers 66a, 66b are removed by a wet chemistry etching process. For example, if the spacers are made from silicon nitride, then an H3PO4 acid etching process can be used as is known in the art. Once the spacers 66a and 66b are removed in step 72, the liner 62 is removed in step 74. The liner 62 is removed using a wet or dry etching process as is appropriate for the high-K material selected for the liner. Although any high-K material identified above can be used for formation of the liner 62, HfO2, ZrO2, CeO2, Al2O3 and TiO2 are well suited for use as the material for the liner 62. It is noted that the etching processes used for removing high-K materials generally do not react with standard-K materials used for dielectric layers. Therefore, the etching process selected for the high-K material of the liner has etch selectivity that will generally not react with the material used to form the gate dielectric 34 (e.g., an oxide such as silicon oxide or silicon dioxide). Therefore, upon removing the liner 62, there will be no or very little removal of the gate dielectric 34 material. Accordingly, there will be substantially no undercut of the gate dielectric 34 under the gate electrode 32. As a result, the channel 30 will be well defined and operation of the MOSFET 12 will generally not be degraded by the undesired removal of a portion of the gate dielectric 34. As one skilled in the art will appreciate, if removal of a conventional liner (e.g.
, made from silicon nitride or silicon oxide) was desired, the etching process would often also attack, or react with, other components of the transistor, such as the gate dielectric. Next, in step 76 the extensions 26a, 26b are implanted. The formation of shallow source 18 and drain 20 extensions, such as by using a lightly doped drain (LDD) technique, is well known in the art and will not be described in detail herein. Briefly, for a P-type extension region, ions such as boron, gallium or indium can be implanted with an energy of about 1.0 keV to about 3.0 keV and a dose of about 1x10^14 atoms/cm^2 to about 1x10^15 atoms/cm^2. For an N-type extension region, ions such as antimony, phosphorous or arsenic can be implanted at an energy of about 0.3 keV to about 1.5 keV and a dose of about 1x10^14 atoms/cm^2 to about 1x10^16 atoms/cm^2. Following dopant implantation, a thermal anneal cycle is carried out to recrystallize the layer of semiconductor material 16 at a temperature of about 600°C to about 850°C. Alternatively, the extensions 26a, 26b can be formed using a solid phase epitaxy (SPE) process, especially when a lower temperature anneal cycle (e.g., about 600°C) is desired. More specifically, SPE is used to amorphize the layer of semiconductor material 16 with an ion species, such as silicon, germanium, xenon, or the like. The energy and dosage of the ion species can be determined empirically for the device being fabricated. Next, dopant is implanted as described above to achieve the desired N-type or P-type doping and then the layer of semiconductor material 16 is recrystallized using a low temperature anneal (i.e., at a temperature of less than about 700°C). As is known in the art, the extensions 26a, 26b may diffuse slightly under the gate 28 as is conventional. In an alternative embodiment, the liner 62 and the spacers 66a, 66b are both formed from the same or different high-K materials.
In yet another embodiment of the present invention, the liner 62 is made from a standard-K material such as silicon dioxide, and the spacers 66a, 66b and/or the gate dielectric 34 is made from one or more high-K materials. In this embodiment, the etch selectivity of the spacer 66 material will minimize damage to the liner 62 material during removal, thereby minimizing the possibility of damage to the semiconductor material layer 16. Damage to the source 18 and/or the drain 20 could result in a degradation of MOSFET 12 performance. Although particular embodiments of the invention have been described in detail, it is understood that the invention is not limited correspondingly in scope, but includes all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
A method for fabricating an amorphous metal-metalloid alloy layer for use in an IC device comprises providing a substrate in a reactor that includes a dielectric layer having a trench, pulsing a metal precursor into the reactor to deposit within the trench, wherein the metal precursor is selected from the group consisting of CpTa(CO)4, PDMAT, TBTDET, TaCl5, Cp2Co, Co-amidinates, Cp2Ru, Ru-diketonates, and Ru(CO)4, purging the reactor after the metal precursor pulse, pulsing a metalloid precursor into the reactor to react with the metal precursor and form an amorphous metal-metalloid alloy layer, wherein the metalloid precursor is selected from the group consisting of BH3, BCl3, catechol borane, AlMe3, methylpyrrolidinealane, AlCl3, SiH4, SiH2Cl2, SiCl4, tetraalkylsilanes, GeH4, GeH2Cl2, GeCl4, SnCl4, trialkylantimony, SbMe3, SbEt3, arsine, and trimethylarsine, purging the reactor after the metalloid precursor pulse, and annealing the amorphous metal-metalloid layer at a temperature between 50°C and 700°C for 5 to 1200 seconds.
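The pulse/purge cycling described above can be sketched behaviorally: the four-step cycle is repeated until the film reaches a desired thickness, and the metalloid dose can be varied between cycles to grade the metalloid concentration through the film. The per-cycle growth figure, the dose schedule, and the function names are illustrative assumptions, not process data from the text.

```python
# Behavioral sketch of the pulse/purge ALD cycle described above. The
# per-cycle growth value and dose schedule are illustrative assumptions.

def ald_metal_metalloid(target_thickness_nm: float,
                        metalloid_doses: list,
                        growth_per_cycle_nm: float = 0.05) -> dict:
    """Repeat metal pulse / purge / metalloid pulse / purge until the film
    reaches the target thickness; varying the metalloid dose per cycle
    grades the metalloid concentration across the thickness."""
    steps = []
    # integer picometers avoid floating-point drift in the thickness check
    thickness_pm, target_pm = 0, round(target_thickness_nm * 1000)
    gpc_pm = round(growth_per_cycle_nm * 1000)
    cycle = 0
    while thickness_pm < target_pm:
        dose = metalloid_doses[min(cycle, len(metalloid_doses) - 1)]
        steps += [("pulse", "metal precursor"), ("purge", None),
                  ("pulse", ("metalloid precursor", dose)), ("purge", None)]
        thickness_pm += gpc_pm
        cycle += 1
    return {"cycles": cycle, "thickness_nm": thickness_pm / 1000, "steps": steps}

# A 1 nm film at an assumed 0.05 nm/cycle: 20 cycles, with the metalloid
# dose ramped down over the first cycles to grade the concentration.
result = ald_metal_metalloid(1.0, metalloid_doses=[1.0, 0.8, 0.6, 0.5])
assert result["cycles"] == 20
```

The 1 to 10 nm thickness range from the apparatus claims below fits this loop structure directly; only the target thickness changes.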
Claims 1. A method comprising: providing a substrate in a reactor; pulsing a metal-containing precursor into the reactor to deposit on the substrate; purging the reactor after the metal-containing precursor pulse; pulsing a metalloid-containing precursor into the reactor to react with the metal-containing precursor and form an amorphous metal-metalloid alloy layer; and purging the reactor after the metalloid-containing precursor pulse. 2. The method of claim 1, wherein the metal-containing precursor is selected from the group consisting of CpTa(CO)4, PDMAT, TBTDET, TaCl5, Cp2Co, Co-amidinates, Cp2Ru, Ru-diketonates, and Ru3(CO)12. 3. The method of claim 1, wherein the metalloid-containing precursor is selected from the group consisting of BH3, BCl3, catechol borane, AlMe3, methylpyrrolidinealane, AlCl3, SiH4, SiH2Cl2, SiCl4, tetraalkylsilanes, GeH4, GeH2Cl2, GeCl4, SnCl4, trialkylantimony, SbMe3, SbEt3, arsine, and trimethylarsine. 4. The method of claim 1, further comprising annealing the amorphous metal-metalloid alloy layer. 5. The method of claim 4, wherein the annealing process occurs at a temperature between 50°C and 700°C for a time duration between 5 seconds and 1200 seconds. 6. The method of claim 1, wherein the pulsing of the metal-containing precursor, the purging after the metal-containing precursor pulse, the pulsing of the metalloid-containing precursor, and the purging after the metalloid-containing precursor are repeated until the amorphous metal-metalloid alloy layer reaches a desired thickness. 7. The method of claim 6, wherein the amount of metalloid-containing precursor pulsed into the reactor in successive pulses is varied to cause the amorphous metal-metalloid alloy layer to have a variable metalloid concentration across its thickness. 8.
The method of claim 1, wherein a sufficient amount of the metalloid-containing precursor is pulsed into the reactor to fabricate an amorphous metal-metalloid alloy layer having a metalloid concentration that is between around 0.1% and around 50%.

9. The method of claim 1, further comprising: depositing a metal seed layer on the amorphous metal-metalloid alloy layer using an ALD process; and depositing a metal layer on the metal seed layer using a plating process.

10. The method of claim 9, wherein the metal comprises copper.

11. A method comprising: providing a substrate in a reactor, wherein the substrate includes a dielectric layer having a trench; pulsing a metal-containing precursor into the reactor to deposit within the trench, wherein the metal-containing precursor is selected from the group consisting of CpTa(CO)4, PDMAT, TBTDET, TaCl5, Cp2Co, Co-amidinates, Cp2Ru, Ru-diketonates, and Ru3(CO)12; purging the reactor after the metal-containing precursor pulse; pulsing a metalloid-containing precursor into the reactor to react with the metal-containing precursor and form an amorphous metal-metalloid alloy layer within the trench, wherein the metalloid-containing precursor is selected from the group consisting of BH3, BCl3, catechol borane, AlMe3, methylpyrrolidinealane, AlCl3, SiH4, SiH2Cl2, SiCl4, tetraalkylsilanes, GeH4, GeH2Cl2, GeCl4, SnCl4, trialkylantimony, SbMe3, SbEt3, arsine, and trimethylarsine; purging the reactor after the metalloid-containing precursor pulse; and annealing the amorphous metal-metalloid alloy layer at a temperature between 50°C and 700°C for a time duration between 5 seconds and 1200 seconds.

12. The method of claim 11, wherein the pulsing of the metal-containing precursor, the purging after the metal-containing precursor pulse, the pulsing of the metalloid-containing precursor, and the purging after the metalloid-containing precursor pulse are repeated until the amorphous metal-metalloid alloy layer reaches a desired thickness.

13.
The method of claim 12, wherein the amount of metalloid-containing precursor pulsed into the reactor in successive pulses is varied to cause the amorphous metal-metalloid alloy layer to have a variable metalloid concentration across its thickness.

14. The method of claim 11, wherein a sufficient amount of the metalloid-containing precursor is pulsed into the reactor to fabricate an amorphous metal-metalloid alloy layer having a metalloid concentration that is between around 0.1% and around 50%.

15. An apparatus comprising: a substrate having a dielectric layer formed thereon and a trench etched into the dielectric layer; an amorphous metal-metalloid alloy layer formed on a bottom surface and sidewalls of the trench; and a copper layer formed on the amorphous metal-metalloid alloy layer within the trench.

16. The apparatus of claim 15, wherein the substrate comprises a bulk silicon structure or a silicon-on-insulator structure.

17. The apparatus of claim 16, wherein the substrate further includes germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, gallium antimonide, or a Group III-V material.

18. The apparatus of claim 15, wherein the amorphous metal-metalloid alloy layer comprises: a metal selected from the group consisting of tantalum, titanium, ruthenium, cobalt, palladium, tungsten, and platinum; and a metalloid selected from the group consisting of boron, aluminum, silicon, germanium, arsenic, antimony, tellurium, polonium, carbon, nitrogen, and iodine.

19. The apparatus of claim 15, wherein the amorphous metal-metalloid alloy layer has a thickness between around 1 nm and around 10 nm.

20. The apparatus of claim 15, wherein the copper layer comprises a copper seed layer and a copper layer formed on the copper seed layer.
AMORPHOUS METAL-METALLOID ALLOY BARRIER LAYER FOR IC DEVICES

Background

In the manufacture of integrated circuits, copper interconnects are generally formed on a semiconductor substrate using a copper dual damascene process. Such a process begins with a trench being etched into a dielectric layer and filled with a barrier layer, an adhesion layer, and a seed layer. A physical vapor deposition (PVD) process, such as a sputtering process, may be used to deposit a tantalum nitride (TaN) barrier layer and a tantalum (Ta) or ruthenium (Ru) adhesion layer (i.e., a TaN/Ta or TaN/Ru stack) into the trench. The TaN barrier layer prevents copper from diffusing into the underlying dielectric layer. The Ta or Ru adhesion layer is required because the subsequently deposited metals do not readily nucleate on the TaN barrier layer. This may be followed by a PVD sputter process to deposit a copper seed layer into the trench. An electroplating process is then used to fill the trench with copper metal to form the interconnect. As device dimensions scale down, the aspect ratio of the trench becomes more aggressive as the trench becomes narrower. The line-of-sight PVD process gives rise to issues such as trench overhang of the barrier, adhesion, and seed layers, leading to pinched-off trench and via openings during plating and inadequate gapfill. Additionally, for very thin films (e.g., less than 5 nm thick) on patterned structures, thickness and composition control in PVD is difficult. For very thin layers, much less material is deposited onto the feature sidewalls compared to on the field regions. One approach to addressing these issues is to reduce the thickness of the TaN/Ta or TaN/Ru stack, which widens the available gap for subsequent metallization. Unfortunately, this is often limited by the non-conformal characteristic of PVD deposition techniques.
Accordingly, alternative techniques for depositing the barrier, adhesion, and seed layers are needed.

Brief Description of the Drawings

Figure 1 illustrates a metal interconnect having an amorphous barrier layer in accordance with implementations of the invention.

Figure 2 is a method of forming an amorphous barrier layer in accordance with implementations of the invention.

Detailed Description

Described herein are systems and methods of preventing metal from diffusing out of interconnects used in integrated circuit devices. In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present invention may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that the present invention may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations. Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the present invention; however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. Implementations of the invention provide amorphous barrier layers for use in integrated circuit applications.
More specifically, in a dual damascene process for fabricating metal interconnects, the amorphous alloy layer of the invention may be used in place of conventional barrier layers that generally consist of metals such as tantalum and/or metal nitrides such as tantalum nitride, tungsten nitride, or titanium nitride. In some implementations of the invention, the amorphous barrier layer may consist of a metal-metalloid alloy layer where a metal such as tantalum, titanium, ruthenium, cobalt, palladium, tungsten, or platinum is alloyed with a metalloid such as boron, aluminum, silicon, germanium, tin, arsenic, antimony, tellurium, or polonium. In further implementations, the metal may be alloyed with an element that is not technically a metalloid but that may, under some circumstances, exhibit some metalloid behavior, such as carbon, nitrogen, or iodine. Figure 1 illustrates a copper interconnect 100 formed within a trench 102 of a dielectric layer 104 upon a substrate 106. The copper interconnect 100 is located within metallization layers of an integrated circuit (IC) die and is used to interconnect transistors and other devices. The substrate 106 may be a portion of a semiconductor wafer. The dielectric layer 104 may be formed using conventional dielectric materials including, but not limited to, oxides such as silicon dioxide (SiO2) and carbon doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane (PFCB), and fluorosilicate glass (FSG). In accordance with implementations of the invention, an amorphous metal-metalloid alloy barrier layer 108 and a metal adhesion layer 110 are formed between the copper interconnect 100 and the dielectric layer 104. In some implementations, the metal-metalloid alloy barrier layer 108 may be homogeneous across its thickness; in other words, the ratio of metal-to-metalloid may be homogeneous throughout the metal-metalloid barrier layer 108.
In further implementations, the metal-metalloid barrier layer 108 may be a graded layer where the ratio of metal-to-metalloid varies across its thickness. The thickness of the metal-metalloid barrier layer 108 may range from around 1 nanometer (nm) to around 10 nm. In accordance with the invention, novel precursors are used in a plasma-enhanced atomic layer deposition (PEALD) process to form the amorphous metal-metalloid alloy barrier layer. These include precursors for metals such as tantalum, titanium, ruthenium, cobalt, palladium, tungsten, and platinum, precursors for metalloids such as boron, aluminum, silicon, germanium, tin, arsenic, antimony, tellurium, and polonium, as well as precursors for carbon, nitrogen, and iodine. Regarding the metal precursors, tantalum (Ta) precursors include, but are not limited to, CpTa(CO)4 (where Cp=cyclopentadienyl), pentakis-(dimethylamido)tantalum (PDMAT), tert-butylimido tris(diethylamido)tantalum (TBTDET), and TaCl5. Cobalt (Co) precursors include, but are not limited to, Cp2Co and Co-amidinates. Ruthenium (Ru) precursors include, but are not limited to, Cp2Ru, Ru-diketonates, and Ru3(CO)12. Regarding the metalloid precursors, boron (B) precursors include, but are not limited to, BH3, BCl3, and catechol borane. Aluminum (Al) precursors include, but are not limited to, AlMe3 (where Me=methyl), pyrrolidinealane, and AlCl3. Silicon (Si) precursors include, but are not limited to, SiH4, SiCl4, and tetraalkylsilanes. Germanium (Ge) precursors include, but are not limited to, GeH4 and GeCl4. Antimony (Sb) precursors include, but are not limited to, trialkylantimony, SbMe3, and SbEt3 (where Et=ethyl). Arsenic precursors include, but are not limited to, arsine and trimethylarsine.
Additional precursors that may be used in implementations of the invention include, but are not limited to, ethyl iodide (C2H5I), C2H3I, iodomethane (CH3I), diiodomethane (CH2I2), triiodomethane (CHI3), nitrogen (N2), ammonia (NH3), ammonium chloride (NH4Cl), methane (CH4), and ethane (C2H6). Figure 2 is a method 200 for fabricating an amorphous metal-metalloid alloy barrier layer and a metal interconnect in accordance with an implementation of the invention. The method 200 begins by providing a semiconductor substrate onto which the metal-metalloid alloy layer and the metal interconnect may be formed (process 202 of Figure 2). The semiconductor substrate may be formed using a bulk silicon or a silicon-on-insulator substructure. In other implementations, the substrate may be formed using alternate materials, which may or may not be combined with silicon, that include but are not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, gallium antimonide, or other Group III-V materials. Although a few examples of materials from which the semiconductor substrate may be formed are described here, any material that may serve as a foundation upon which a semiconductor device may be built falls within the spirit and scope of the present invention. The substrate has at least one dielectric layer deposited on its surface. The dielectric layer may be formed using materials known for their applicability in dielectric layers for integrated circuit structures, such as low-k dielectric materials. Such dielectric materials include, but are not limited to, silicon dioxide (SiO2), carbon doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. The dielectric layer may include pores or other voids to further reduce its dielectric constant.
The dielectric layer may include one or more trenches and/or vias within which the metal-metalloid alloy layer and the metal interconnect will be formed. The trenches and/or vias may be patterned using conventional wet or dry etch techniques that are known in the art. The substrate may be housed in a reactor in preparation for a chemical vapor deposition process, such as a PEALD process. The substrate may be heated within the reactor to a temperature between around 25°C and around 350°C. The pressure within the reactor may range from 0.01 Torr to 5.0 Torr. One or more atomic layer deposition (ALD) process cycles are then carried out to deposit a metal-metalloid alloy layer. The ALD process cycle begins by introducing at least one pulse of a metal precursor into the reactor (204). At least one of the metal precursors described above may be used here, including but not limited to precursors that contain tantalum, titanium, ruthenium, cobalt, palladium, tungsten, or platinum. In various implementations of the invention, the following process parameters may be used for the metal precursor pulse. The metal precursor pulse may have a duration that ranges from around 0.1 seconds to around 10 seconds with a flow rate of up to 10 standard liters per minute (SLM). The specific number of metal precursor pulses may range from 1 pulse to 200 pulses or more depending on the desired thickness of the final metal-metalloid alloy layer. The metal precursor temperature may be between around 60°C and 250°C. The vaporizer temperature may be around 60°C to around 250°C. A heated carrier gas may be employed to move the metal precursor, with a temperature that generally ranges from around 50°C to around 200°C. Carrier gases that may be used here include, but are not limited to, argon (Ar), xenon (Xe), helium (He), hydrogen (H2), nitrogen (N2), forming gas, or a mixture of these gases. The flow rate of the carrier gas may range from around 100 SCCM to around 700 SCCM.
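For readers who want the metal precursor pulse parameters above at a glance, they can be collected into a small data structure. The following Python dictionary is purely illustrative; the key names and layout are ours, not taken from any reactor-control software:

```python
# Metal precursor pulse parameter ranges, transcribed from the text above.
# Tuples give (low, high) ends of each stated range; the dict layout and
# key names are illustrative assumptions only.
metal_pulse_params = {
    "pulse_duration_s": (0.1, 10),     # per-pulse duration
    "max_flow_rate_slm": 10,           # standard liters per minute
    "num_pulses": (1, 200),            # depends on target thickness
    "precursor_temp_c": (60, 250),
    "vaporizer_temp_c": (60, 250),
    "carrier_gases": ["Ar", "Xe", "He", "H2", "N2", "forming gas"],
    "carrier_gas_temp_c": (50, 200),
    "carrier_flow_sccm": (100, 700),
}
```

Keeping such ranges in one place makes it straightforward to validate a candidate recipe against the stated process window before running it.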
The precursor delivery line into the reactor may be heated to a temperature that ranges from around 60°C to around 250°C, or alternately, to a temperature that is at least 25°C hotter than the volatile precursor flow temperature within the delivery line to avoid condensation of the precursor. Before discharge, the delivery line pressure may be set to around 0 to 5 psi, the orifice may be between 0.1 mm and 1.0 mm in diameter, and the charge pulse may be between 0.5 seconds and 5 seconds. The equilibration time with the valves closed may be 0.5 seconds to 5 seconds and the discharge pulse may be 0.5 seconds to 5 seconds. An RF energy source may be applied during the metal precursor pulse at a power that ranges from 5W to 1000W and at a frequency of 13.56 MHz, 27 MHz, or 60 MHz. It should be noted that the scope of the invention includes any possible set of process parameters that may be used to carry out the implementations of the invention described herein. After the at least one pulse of the metal precursor, the ALD process cycle purges the reactor (206). The purge gas may be an inert gas such as Ar, Xe, N2, He, or forming gas and the duration of the purge may range from 0.1 seconds to 60 seconds, depending on the PEALD reactor configurations and other deposition conditions. In most implementations of the invention, the purge may range from 0.5 seconds to 10 seconds. In accordance with implementations of the invention, the ALD process continues by introducing at least one pulse of a metalloid precursor into the reactor as a co-reactant to react with the metal precursor (208). At least one of the metalloid precursors described above may be used here, including but not limited to precursors that contain boron, aluminum, silicon, germanium, tin, arsenic, antimony, tellurium, polonium, carbon, nitrogen, or iodine. In some implementations, the metalloid precursor may be added using a physical vapor deposition process and a target containing the desired metalloid.
Process parameters similar to those provided above may be used for the co-reactant pulse, although the number of metalloid precursor pulses will generally be much lower than the number of metal precursor pulses since the metalloid is incorporated at low concentrations in accordance with implementations of the invention. For instance, in implementations of the invention, the process parameters for the metalloid precursor pulse include a pulse duration of between around 0.5 seconds and 10 seconds at a flow rate of up to 10 SLM, where the specific number of metalloid precursor pulses may range from 1 pulse to 200 pulses or more depending on the desired concentration of metalloid in the final metal-metalloid alloy layer. The number of metalloid precursor pulses is generally dependent on the number of metal precursor pulses used. For example, given the number of metal precursor pulses used above, the number of metalloid precursor pulses necessary to produce a metalloid concentration in the final alloy layer that is between around 0.1% and around 50% may be calculated and used. A carrier gas may be employed to move the metalloid precursor. Carrier gases that may be used here include, but are not limited to, Ar, Xe, He, H2, N2, forming gas, or a mixture of these gases. The flow rate of the carrier gas may range from around 100 SCCM to around 700 SCCM. The metalloid precursor temperature may be between around 60°C and 250°C. An RF energy source may be applied as described above. It should be noted that the scope of the invention includes any possible set of process parameters that may be used to carry out the implementations of the invention described herein. In some implementations of the invention, an optional plasma, such as an argon plasma, may be applied during the one or more metalloid precursor pulses (210).
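As a rough illustration of the calculation mentioned above (relating metalloid pulse count to final concentration), the sketch below assumes, purely for illustration, that each pulse contributes about the same amount of its element to the film. The function name and the simple linear model are ours, not from the source:

```python
# Back-of-envelope estimate of how many metalloid precursor pulses to use
# for a target metalloid atomic fraction, assuming (as a simplification)
# that each pulse deposits roughly one unit of its element. This is an
# illustrative model, not a process recipe.

def metalloid_pulses_needed(metal_pulses: int, target_fraction: float) -> int:
    """Pulses such that metalloid / (metal + metalloid) ≈ target_fraction."""
    if not 0.0 < target_fraction < 1.0:
        raise ValueError("target_fraction must be strictly between 0 and 1")
    return round(metal_pulses * target_fraction / (1.0 - target_fraction))
```

Under this assumption, 100 metal pulses with a 10% metalloid target would call for about 11 metalloid pulses, while a 50% target implies a 1:1 pulse ratio.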
The plasma may be used to disassociate the metalloid element from the rest of the precursor, thereby freeing the metalloid element and allowing it to become incorporated into a metal-metalloid alloy layer. In implementations where a plasma is used, process parameters that may be used for the plasma include a flow rate of around 200 SCCM to around 600 SCCM. The plasma may be pulsed into the reactor with a pulse duration of around 0.5 seconds to around 10.0 seconds, with a pulse duration of around 1 to 4 seconds often being used. The plasma power may range from around 20W to around 1000W and will generally range from around 60W to around 700W. A carrier gas such as He, Ar, or Xe may be used to introduce the plasma. A chuck upon which the semiconductor substrate is mounted may be biased and capacitively-coupled. After the at least one pulse of the metalloid precursor, the ALD process purges the reactor (212). The purge gas may be an inert gas such as Ar, Xe, N2, He, or forming gas and the duration of the purge may range from 0.1 seconds to 60 seconds, depending on the PEALD reactor configurations and other deposition conditions. In most implementations of the invention, the purge may range from 0.5 seconds to 10 seconds. Here, the purge removes excess metalloid precursor as well as by-products from the reaction between the metal precursor and the metalloid precursor. The above processes result in the formation of a metal-metalloid alloy layer on the substrate within a trench of the dielectric layer. If the metal-metalloid alloy layer has not yet reached a desired thickness, the above ALD process cycle may be repeated as necessary until the desired thickness is achieved (214). The metal-metalloid alloy layer produced is an amorphous layer that provides a barrier to metal diffusion.
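The overall pulse/purge sequence (steps 204 through 214) can be summarized as a simple control loop. The sketch below is a hypothetical illustration: the Reactor class, its method names, and the fixed growth-per-cycle value are all assumptions for exposition, not part of any real tool-control API:

```python
import math

# Hypothetical sketch of the PEALD cycle described in steps 204-214.
# The Reactor class and its methods are illustrative stand-ins; growth
# per cycle is an assumed constant rather than a measured value.

class Reactor:
    def pulse(self, precursor, duration_s): pass   # steps 204 / 208
    def purge(self, gas, duration_s): pass         # steps 206 / 212
    def plasma(self, on, power_w=0): pass          # optional step 210

def deposit_alloy(reactor, target_nm, growth_per_cycle_nm=0.05,
                  use_plasma=True):
    """Run pulse/purge cycles until the layer reaches the target thickness."""
    n_cycles = math.ceil(target_nm / growth_per_cycle_nm)    # step 214 check
    for _ in range(n_cycles):
        reactor.pulse("metal_precursor", duration_s=1.0)     # step 204
        reactor.purge(gas="Ar", duration_s=2.0)              # step 206
        if use_plasma:
            reactor.plasma(on=True, power_w=300)             # step 210
        reactor.pulse("metalloid_precursor", duration_s=1.0) # step 208
        if use_plasma:
            reactor.plasma(on=False)
        reactor.purge(gas="Ar", duration_s=2.0)              # step 212
    return n_cycles

# At an assumed 0.05 nm per cycle, a 1 nm barrier layer takes 20 cycles.
```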
For instance, if the appropriate precursors are selected from the list above to produce a tantalum carbide (TaC) layer, the TaC layer will have an intentional carbon incorporation from a covalent bond resulting in a single phase barrier. If the carbon had been incorporated as an impurity, the result would be two different materials and phases, yielding a material with poor mechanical rigidity and inferior barrier properties that would permit "grain boundary" type copper diffusion. Contrary to this, the amorphous TaC layer produced in accordance with the invention is a strong barrier to metal diffusion. The metalloid element may be precisely controlled by way of the ALD processing. For instance, the concentration of metalloid in the alloy barrier layer may be precisely controlled by varying the number of metalloid precursor pulses used in the deposition process. The concentration may be varied across the thickness of the alloy barrier layer by appropriately adjusting the number of metalloid precursor pulses in successive ALD process cycles. If the metalloid consists of aluminum, the aluminum concentration in the alloy layer may be graded to provide a barrier to electromigration. In further implementations of the invention, the metal-metalloid alloy layer may be further tailored to have a specific composition by manipulating process parameters during the deposition process.
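To make the grading idea above concrete, the number of metalloid pulses per cycle could be ramped across successive ALD cycles. The sketch below is a hypothetical illustration; the linear ramp and the function name are our assumptions, not a recipe from the source:

```python
# Illustrative generator of a graded metalloid pulse schedule: the pulses
# per ALD cycle are linearly ramped so the metalloid concentration varies
# across the film thickness (e.g., richer near one interface). The linear
# ramp is an assumption for illustration only.

def graded_pulse_schedule(n_cycles, start_pulses, end_pulses):
    """Metalloid pulse count to use in each successive ALD cycle."""
    if n_cycles < 1:
        raise ValueError("need at least one cycle")
    if n_cycles == 1:
        return [start_pulses]
    step = (end_pulses - start_pulses) / (n_cycles - 1)
    return [round(start_pulses + i * step) for i in range(n_cycles)]
```

For example, a five-cycle deposition ramped from 1 to 5 metalloid pulses per cycle would use the schedule [1, 2, 3, 4, 5], giving a film progressively richer in the metalloid toward the top surface.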
Process parameters that may be manipulated to establish a metal concentration gradient and/or a metalloid concentration gradient within the alloy layer include, but are not limited to, the specific precursors that are used in each process cycle, how long each precursor is flowed into the reactor during a process cycle, the precursor concentration and flow rate during each process cycle, the co-reactant used, how long each co-reactant is flowed into the reactor during a process cycle, the co-reactant concentration and flow rate during each process cycle, the sequence or order of the precursor and co-reactant, the plasma energy applied, the substrate temperature, the pressure within the reaction chamber, and the carrier gas composition. Furthermore, changing the parameters of each individual process cycle, or groups of successive process cycles, may also be used to tailor the metal-metalloid alloy layer. After the ALD process cycles used to form the metal-metalloid alloy layer are complete, an annealing process may be used to further incorporate the metalloid element into the alloy layer (216). In accordance with implementations of the invention, the anneal may take place at a temperature between around 50°C and around 700°C for a time duration that may last from 5 seconds to 1200 seconds. Because the amorphous metal-metalloid alloy layer may become crystalline at temperatures over 700°C, the anneal will generally occur at temperatures below 400°C. In some implementations, the anneal may take place in an oxygen free ambient atmosphere, such as forming gas or a pure inert gas. In some implementations, the annealing process may be carried out after a metal interconnect has been formed on the metal-metalloid alloy barrier layer. In accordance with some implementations of the invention, one or more ALD process cycles may be used to deposit a metal seed layer, such as a copper seed layer, atop the metal-metalloid alloy barrier layer (218).
The same reactor may be used for this ALD process. Copper metal precursors that may be used to form a conventional copper seed layer, as well as the required co-reactants and process parameters, are well known in the art. The ALD process to form the metal seed layer may be repeated as necessary to produce a metal seed layer having a sufficient thickness. Following the fabrication of the metal seed layer, the substrate may be transferred to a reactor containing a plating bath and a plating process may be carried out to deposit a metal layer, such as a copper layer, over the metal seed layer (220). The metal layer fills the trench to form the metal interconnect, generally a copper interconnect. In some implementations, the plating bath is an electroplating bath and the plating process is an electroplating process. In other implementations, the plating bath is an electroless plating bath and the plating process is an electroless plating process. In further implementations, alternate copper deposition processes may be used. Finally, a chemical mechanical polishing (CMP) process may be used to planarize the deposited copper metal and finalize the copper interconnect structure (222). Accordingly, a process has been described for fabricating an amorphous metal-metalloid alloy layer that may be used as a barrier layer for metal interconnects in integrated circuit applications. An ALD process flow is used with metal precursors and metalloid precursors to form the amorphous alloy layer of the invention. The concentration of the metalloid element in the alloy layer is relatively low. The result is a conformal barrier layer that may be used in trenches with aggressive aspect ratios to reduce trench overhang and pinch-off risks. The above description of illustrated implementations of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed.
While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. These modifications may be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific implementations disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
A semiconductor device having both functional and non-functional or dummy lines, regions and/or patterns to create a topology that causes the subsequently formed spacers to be more predictable and uniform in shape and size.
What is claimed is:

1. A semiconductor device arrangement comprising a semiconductor substrate having a mixture of operational gate arrangements and non-operational gate arrangements on a surface of the substrate, the space between said gate arrangements being substantially the same distance, each operational and non-operational gate arrangement having dielectric spacers which are of uniform shape and size, each spacer having substantially the same width and physically contacting the substrate, wherein removal of the non-operational gate arrangements provides operational gate arrangements on said substrate having uniformly sized spacers and different distances between the operational gate arrangements.

2. The semiconductor device of claim 1, wherein each operational gate arrangement and each non-operational gate arrangement comprises a thin oxide layer formed on the substrate and a polysilicon layer on the thin oxide layer.

3. The semiconductor device of claim 1, wherein the dielectric spacers comprise at least one dielectric material selected from the group consisting of silicon oxide, silicon nitride, silicon oxynitride and silicon oxime.
This application is a divisional of application Ser. No. 08/993,830 filed Dec. 18, 1997, now U.S. Pat. No. 6,103,611.

TECHNICAL FIELD

The present invention relates to semiconductor devices and manufacturing processes, and more particularly to methods and arrangements for improved spacer formation within a semiconductor device.

BACKGROUND OF THE INVENTION

A continuing trend in semiconductor technology is to build integrated circuits with more and/or faster semiconductor devices. The drive toward this ultra large scale integration has resulted in continued shrinking of device and circuit dimensions and features. In integrated circuits having field-effect transistors, for example, one very important process step is the formation of the gate, source and drain regions for each of the transistors, and in particular the dimensions of the gate, source and drain regions. In many applications, the performance characteristics (e.g., switching speed) and size of the transistor are functions of the size (e.g., width) of the transistor's gate, and the placement of the source and drain regions thereabout. Thus, for example, a narrower gate tends to produce a higher performance transistor (e.g., faster) that is inherently smaller in size (e.g., narrower width). As is often the case, however, as the devices shrink in size from one generation to the next, some of the existing fabrication techniques are not precise enough to be used in fabricating the next generation of integrated circuit devices. For example, spacers are used in conventional semiconductor devices to provide alignment of the source and drain regions to the gates in transistors. Minor differences in the shape of the spacers can alter the operational characteristics of the device. This is especially true for integrated circuits that have a plurality of similar devices that are meant to share common operating characteristics.
Accordingly, there is a continuing need for more efficient and effective fabrication processes for forming semiconductor gates, spacers and regions that are more precisely controlled.

SUMMARY OF THE INVENTION

The present invention provides methods and arrangements that increase the process control during the formation of spacers within a semiconductor device. For example, in accordance with one aspect of the present invention, the spacers are provided on a semiconductor device gate arrangement and used to form lightly doped drain (LDD) regions within a semiconductor device arrangement. In accordance with other aspects of the present invention, the spacers are provided on a polysilicon line within the semiconductor device. In accordance with one embodiment of the present invention, a method is provided for forming substantially uniformly sized spacers on transistor gate arrangements within a semiconductor device. The method includes forming a plurality of semiconductor device gate arrangements on a top surface of a substrate, such that two of the plurality of semiconductor device gate arrangements are positioned parallel to one another and separated by a defined space. The method includes forming a dielectric layer over at least a portion of each of the two semiconductor device gate arrangements and at least a portion of the defined space. Next, the method includes removing portions of the dielectric layer to form a plurality of spacers. Each of the spacers is physically connected to one of the semiconductor device gate arrangements and the substrate. Thus, because of the topology of the two semiconductor device arrangements, the spacers located within the defined space have a base width that is approximately the same.
The method further includes configuring one of the two semiconductor device gate arrangements to control an electrical current between a source region and a drain region formed in the substrate and configuring the remaining one of the two semiconductor device gate arrangements to be non-operational. Thus, the non-operational transistor arrangement is provided for the purpose of controlling the topology and in particular the aspect ratio of the defined space between the operational and non-operational transistor gate arrangements. In accordance with yet another embodiment of the present invention, a method is provided for controlling the width of a spacer in a semiconductor device arrangement. The method comprises forming an operational semiconductor device gate arrangement on a substrate at a first position, and a non-operational semiconductor device gate arrangement at a second position on a substrate. As such, the operational and non-operational semiconductor device gate arrangements are adjacent to each other but not touching and define a critical space between them. The method includes forming a dielectric layer over at least a portion of the operational and non-operational semiconductor device gate arrangements and within the critical space. The method further includes removing portions of the dielectric layer to form a first spacer that is physically connected to a sidewall of the operational semiconductor device gate arrangement and the substrate. The first spacer extends into the critical space. A second spacer is also formed and is physically connected to a sidewall of the non-operational transistor gate arrangement and the substrate. The second spacer extends into the critical space.
As a result of this arrangement, each of the first and second spacers extends into the critical space by substantially the same distance.

In accordance with yet another embodiment of the present invention, a semiconductor device is provided that includes a substrate, a first semiconductor device gate arrangement, a second semiconductor device gate arrangement, a first dielectric spacer, and a second dielectric spacer. Within the substrate there are a source region and a drain region. The first semiconductor device gate arrangement has a first height and a first width and is formed on the substrate with the first width being centered over a first location on the substrate. The first semiconductor device gate arrangement is further configured to control an electrical current between the source region and the drain region formed in the substrate. The second semiconductor device gate arrangement has a second height and a second width and is formed on the substrate with the second width being centered over a second location on the substrate. The second location is separated from the first location by an initial space. The second semiconductor device gate arrangement is configured to be non-operational. The first dielectric spacer is physically connected to the substrate and a first sidewall of the first semiconductor device gate arrangement. The first sidewall of the first semiconductor device gate arrangement is of the first height and is located within the initial space. The first dielectric spacer has a first spacer width as measured at a base of the first dielectric spacer beginning at the first sidewall of the first semiconductor device gate arrangement and extending into the initial space in a direction of the second location. The second dielectric spacer is physically connected to the substrate and a first sidewall of the second semiconductor device gate arrangement.
The first sidewall of the second semiconductor device gate arrangement is of the second height and is located within the initial space. The second dielectric spacer has a second spacer width as measured at the base of the second dielectric spacer beginning at the first sidewall of the second semiconductor device gate arrangement and extending into the initial space in the direction of the first location. Thus, based on the arrangement of the first and second semiconductor device gate arrangements, and the resulting topology, the aspect ratio of the initial space causes the first spacer width and the second spacer width to be approximately the same.

In accordance with certain embodiments of the present invention, the first semiconductor device gate arrangement includes a thin oxide layer formed on the substrate and a gate conductor including polysilicon formed on the thin oxide layer.

In accordance with yet other embodiments of the present invention, the first dielectric spacer comprises silicon oxide, silicon nitride, silicon-oxynitride, and/or silicon oxime.

In accordance with yet another aspect of the present invention, a method is provided for controlling the formation of spacers on a plurality of polysilicon lines that are formed within a semiconductor device. The method includes forming a plurality of polysilicon lines on a top surface of a substrate. The method further includes forming at least one dummy polysilicon line on the substrate, such that the dummy polysilicon line is substantially parallel to at least a portion of one of the polysilicon lines and is separated from that portion of the polysilicon line by a defined space that defines an aspect ratio. The method further includes covering the polysilicon lines and the dummy polysilicon line, along with the top surface of the substrate below the defined space, with at least one dielectric layer.
The method further includes removing portions of the dielectric layer to form a plurality of separate dielectric spacers and a plurality of separate dummy dielectric spacers. Each of the dielectric spacers is connected to a sidewall of one of the plurality of polysilicon lines and the substrate. Each of the separate dummy dielectric spacers is connected to one of the dummy polysilicon lines and the substrate. Thus, because of the aspect ratio, the width of the dielectric spacers on the sidewalls of the polysilicon lines is more precisely controlled.

The foregoing and other features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:

FIG. 1 depicts a cross-section of a portion of a prior-art semiconductor device having an operational transistor gate arrangement, spacers, and source and drain regions formed in a substrate;

FIG. 2a depicts a cross-section of a portion of a prior-art semiconductor device having a plurality of operational transistor gate arrangements formed on a substrate and covered with a dielectric layer;

FIG. 2b depicts the portion of FIG. 2a following an anisotropic etch back process in an etching tool, which formed spacers having different widths;

FIG. 3 depicts a cross-section of a part of the portion in FIG. 2b further illustrating that the non-uniform topology of the portion in FIG. 2b resulted in the formation of spacers of different sizes (widths and/or shapes);

FIG. 4a depicts a cross-section of an improved portion of a semiconductor device, as compared to the portion in FIG.
2b, having a non-operational transistor gate arrangement included amongst the plurality of operational transistor gate arrangements in accordance with certain embodiments of the present invention;

FIG. 4b depicts a cross-section of the portion in FIG. 4a following an anisotropic etching process which resulted in substantially uniformly sized spacers due to the more uniform topology, in accordance with certain embodiments of the present invention;

FIGS. 5a and 5b depict the cross-section of the portion in FIG. 4b during and following, respectively, removal of the non-operational semiconductor device gate arrangements, in accordance with certain embodiments of the present invention;

FIGS. 6a through 6c depict a portion of a cross-section of a semiconductor device having a plurality of polysilicon lines and/or operational semiconductor device gate arrangements formed on a substrate, to which have been added additional dummy polysilicon lines or non-operational semiconductor device gate arrangements, in accordance with certain embodiments of the present invention, to provide for the formation of substantially uniformly sized spacers;

FIG. 7 depicts a cross-section of the portion in FIG. 6a following formation of a dummy polysilicon line or non-operational semiconductor device gate arrangement having a wider base, in accordance with certain embodiments of the present invention, that results in substantially uniformly sized spacers as in FIG. 6c; and

FIG. 8 depicts a cross-section of the portion of the semiconductor device in FIG. 6c following formation of a second dielectric layer that has a flat top surface, in accordance with certain embodiments of the present invention.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

The process steps and structures described below do not form a complete process flow for manufacturing integrated circuits and/or semiconductor devices.
The present invention can be practiced in conjunction with integrated circuit fabrication techniques currently used in the art, and only so much of the commonly practiced process steps is included as is necessary for an understanding of the present invention. The figures representing cross-sections of portions of an integrated circuit or semiconductor device during fabrication are not drawn to scale, but instead are drawn to illustrate the features of the present invention.

In accordance with certain embodiments of the present invention, methods and arrangements are provided for improved control over the processes that are used to form spacers within semiconductor device arrangements and/or along polysilicon lines. As part of the invention, it was recognized that the topology and, in particular, the aspect ratio (e.g., height:width) of the spacing between semiconductor device gate arrangements on the substrate plays a particularly critical role in the formation of the spacers. As the design rules shrink, uniformity of the spacers is critical in cases where the spacers are used to mask/control the formation of the lightly doped drain (LDD) regions. Thus, in accordance with the present invention, the width of the spacers is better controlled during their formation by intentionally including non-operational transistor gate arrangements and/or dummy polysilicon lines to provide a controlled spacing and aspect ratio between the semiconductor device gate arrangements/polysilicon lines.

FIG. 1 depicts a portion 10 of a cross-section of a prior-art semiconductor device having a substrate 12, a thin oxide layer 14, a gate conductor 16, spacers 18, a source region 20a, and a drain region 20b. Those skilled in the art will recognize that source region 20a and drain region 20b include lightly doped regions that extend under spacers 18. Thin oxide layer 14 is formed on the top surface 13 of substrate 12. Gate conductor 16 is formed on thin oxide layer 14.
In accordance with certain embodiments of the present invention, gate conductor 16 is a polysilicon line. Gate conductor 16, along with thin oxide layer 14, forms an operational semiconductor device gate arrangement (such as a transistor gate arrangement) that can be used to control an electrical current 21, as represented by the arrow shown between source region 20a and drain region 20b. The two spacers 18 preferably have equal widths 15 at their bases as measured along top surface 13. Spacers 18 are typically used as a mask to form source region 20a and drain region 20b during a doping process, such as, for example, an ion implantation process.

A wider portion 30 of a similar prior-art semiconductor wafer is depicted in FIGS. 2a and 2b. As shown, a plurality of semiconductor device gate arrangements (such as transistor gate arrangements) have been formed on substrate 12, including gate conductors 16a, 16b, 16c, and 16d. The center points of gate conductors 16a and 16b are separated from each other by a first space 17a. Similarly, gate conductors 16b and 16c are separated by a first space 17b, as measured from the center points of their respective widths. However, a second space 19, which is larger than first spaces 17a-b, extends between the center points of gate conductors 16c and 16d. As mentioned above, in this type of prior-art semiconductor device, the topology plays a critical role in determining the width of the spacers 18 that are formed from dielectric layer 22. Dielectric layer 22 is a conformal dielectric layer or film that is deposited across the exposed surfaces of substrate 12 (on top surface 13) and over the exposed surfaces of gate conductors 16a-d. Dielectric layer 22 typically includes silicon oxide and/or silicon nitride. In accordance with conventional spacer formation techniques, portion 30 of FIG. 2a is depicted in FIG. 2b within an etching tool 24 following exposure to an anisotropic etching plasma 26.
Etching plasma 26 removes portions of dielectric layer 22, leaving behind spacers 18 and 18'. As shown, spacers 18 and 18' each physically contact the sidewalls of gate conductors 16a-d (as applicable) and the top surface 13 of substrate 12. Spacers 18 and 18' further contact thin oxide layer 14 within each of the semiconductor device gate arrangements formed with gate conductors 16a-d. As shown, the spacers 18' formed within second space 19 are differently shaped and have a wider width at their base than the spacers 18 formed, for example, in first spaces 17a and 17b. Spacers 18' are shaped differently because of the topology associated with space 19, which is more open than spaces 17a and 17b, for example. As a result, the source region 20a and drain region 20b (not shown in FIG. 2b) that would be formed using spacers 18' as a mask would tend to have different characteristics than those formed using the narrower spacers 18. Such differences can have a deleterious effect on the semiconductor device being fabricated.

By way of example, FIG. 3 depicts a portion 10' of a semiconductor device similar to portion 10 in FIG. 1. However, portion 10' in FIG. 3 has wider spacers 18', and the source and drain regions 20a' and 20b', respectively, have slightly different shapes than those in FIG. 1. As a result, the semiconductor device arrangements in FIGS. 1 and 3 will tend to operate differently from each other. Thus, what are desired are improved methods and arrangements for providing increased process control during the formation of the spacers and, in particular, for controlling the base width of the spacers to enhance uniformity within a plurality of similarly configured transistors and/or other like semiconductor devices.

FIG. 4a shows an improved portion 30' in accordance with one embodiment of the present invention. Portion 30' in FIG. 4a is similar to portion 30 in FIG.
2a, with the exception of the addition of non-operational transistor gate arrangements as represented by gate conductors 100a and 100b. Gate conductor 100a has been added between gate conductors 16c and 16d to effectively divide second space 19 into two first spaces 17c and 17d, which are each substantially equivalent to first spaces 17a and 17b. Similarly, gate conductor 100b has been added next to gate conductor 16d, leaving first space 17e therebetween. Both gate conductors 100a and 100b have been formed on a thin oxide layer 14 on substrate 12. The result of adding these additional non-operational transistor gate arrangements is that the topology of portion 30' has been altered to provide more uniformity in the spaces/aspect ratios between gate conductors.

Next, a dielectric layer 22' has been deposited over top surface 13 of substrate 12 and gate conductors 16a-d and 100a-b. Dielectric layer 22' is applied, for example, using conventional chemical vapor deposition (CVD) or other like processes (e.g., plasma-enhanced CVD), and in accordance with certain embodiments of the present invention, includes silicon oxide, silicon nitride, or silicon-oxynitride.

In FIG. 4b, portion 30' has been subjected to an anisotropic etching plasma 26 within an etching tool 24. A plasma 26 is chosen that exhibits a high selectivity between dielectric layer 22' and the underlying structure, such as, for example, the top surface of substrate 12. As a result of the etching process, portions of dielectric layer 22' are etched away, leaving behind spacers 18. As depicted, spacers 18 form along gate conductors 16a through 16d and on gate conductors 100a and 100b. For simplification of the drawings, the outermost spacers 18 on gate conductors 16a and 100b are shown as having approximately the same width as the other spacers 18, as would be the case if portion 30' were longer and had there been additional, similarly configured gate conductors.
By adding non-operational gate conductors 100a and 100b to portion 30', the spacers 18 that are formed have substantially uniformly sized widths. Thus, the source and drain regions 20a and 20b (not shown in FIG. 4b) will be more uniformly shaped and sized.

In FIG. 5a, portion 30' has been further processed to form portion 30'', in which a patterned resist mask 104 has been added to allow for the removal of gate conductors 100a and 100b. This is accomplished by exposing portion 30'' to an etching plasma 102 within etching tool 24, for example, to remove the exposed portions of gate conductors 100a and 100b, the spacers 18 attached thereto, and the thin oxide layer 14 located below gate conductors 100a and 100b. The result of the etching process in FIG. 5a is depicted in FIG. 5b, in which portion 30'' has had the non-operational transistor gate arrangements, which were added prior to the formation of spacers 18, removed. The patterned mask 104 has also been removed. It is recognized, however, that in many cases it will not be necessary to remove the non-operational transistor gate arrangements 100a, 100b and/or dummy polysilicon lines from the semiconductor device. In these cases, portion 30' remains within the completed semiconductor device and/or integrated circuit.

FIGS. 6a through 6c depict yet another section of portion 30' of a semiconductor device, in accordance with certain preferred embodiments of the present invention. In FIG. 6a, there is shown a second space 19' between polysilicon lines 16e and 16f. Applying the methods of the present invention, dummy polysilicon lines 100c and 100d have been added within second space 19' to provide a more uniform topology and controlled aspect ratios during the formation of spacers when dielectric layer 22' is etched back. FIG. 6b depicts the locations of dummy polysilicon lines 100c and 100d, as, for example, represented by their center points, and the corresponding locations of top surface 13 and substrate 12.
It is recognized, of course, that, as before, the present invention applies equally to transistor gate arrangements.

In FIG. 6c, the portion 30' has been etched back, and spacers 18 have been formed along polysilicon lines 16e through 16h and along dummy polysilicon lines 100c and 100d. Given the spacing and controlled aspect ratios provided in FIG. 6b by the addition of dummy polysilicon lines 100c and 100d, spacers 18 in FIG. 6c have substantially uniform sizes, and in particular their base widths are substantially equivalent.

In FIG. 7, another embodiment of the present invention is shown wherein a dummy polysilicon line 100e having a width 112, which is wider than the nominal widths of dummy polysilicon lines 100c and 100d in FIGS. 6a-c, has been formed between polysilicon line 16e and polysilicon line 16f. As depicted in FIG. 7, as in FIG. 6c, the spacing and controlled aspect ratios provided by adding dummy polysilicon lines allow spacers 18 to form with uniform widths. Thus, it is recognized that dummy polysilicon lines (and non-operational transistor gate arrangements) can be provided in a variety of widths, and in some cases with different shapes, provided that the resulting aspect ratios are properly maintained to allow for the formation of spacers 18.

FIG. 8 depicts an additional benefit of the present invention, in which the portion of FIG. 6c has had a second dielectric layer 200 formed thereon. For example, second dielectric layer 200 can include silicon oxide, which is used during the formation of local interconnects using conventional damascene techniques. As depicted, second dielectric layer 200 has been subjected to a chemical-mechanical polishing (CMP) process using a CMP tool 204. Thus, second dielectric layer 200 has a planarized/polished top surface 202.
By having the altered topology provided by dummy polysilicon lines 100c and 100d, the CMP process benefits from the more uniform underlying topology presented during the formation/deposition of second dielectric layer 200 and as presented to the CMP slurry during the CMP process. Thus, the results of the CMP process are expected to improve for many semiconductor devices because of the more uniform underlying topology presented. Without the uniform topology, it is possible that the CMP process would create an uneven top surface.

Although the present invention has been described and illustrated in detail, it is to be clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
A compiler transformation with loop and data partitioning is disclosed. Logic may transform target code to partition data automatically and/or autonomously based on a memory constraint associated with a resource such as a target device. Logic may identify a tag in the code to identify a task, wherein the task comprises at least one loop, the loop to process data elements in one or more arrays. Logic may automatically generate instructions to determine one or more partitions for the at least one loop to partition data elements, accessed by one or more memory access instructions for the one or more arrays within the at least one loop, based on a memory constraint, the memory constraint to identify an amount of memory available for allocation to process the task. Logic may determine one or more iteration space blocks for the parallel loops, determine memory windows for each block, copy data into and out of constrained memory, and transform array accesses.
1. An apparatus for transforming code, the apparatus comprising: a memory storing the code; and a logic circuit coupled to the memory, the logic circuit configured to: identify a flag in the code, the flag to identify a task, wherein the task comprises at least one loop in a loop nest, the loop to process data elements in one or more arrays, the loop nest comprising one or more parallel loops; and automatically generate instructions to determine, based on a memory constraint, one or more partitions for the at least one loop to partition data elements accessed by one or more memory access instructions for the one or more arrays within the at least one loop, the memory constraint to identify an amount of memory available for allocation to process the task.

2. The apparatus of claim 1, wherein the logic circuit is configured to determine one or more iteration space blocks for the parallel loops, each iteration space block to identify a subset of the data elements of the one or more arrays to be processed.

3. The apparatus of claim 2, wherein the logic circuit is configured to determine a memory window for each of the iteration space blocks, wherein the memory window comprises a portion of the amount of memory available for allocation to process the task for a span of one of the iteration space blocks, wherein the span comprises the data elements in the one or more arrays that are accessed during the duration of a single iteration space block of the task.

4. The apparatus of claim 1, wherein the logic circuit is configured to transform array accesses.

5. The apparatus of claim 1, wherein the logic circuit is configured to insert instructions to call a runtime library to compute iteration space blocks for the at least one loop.

6. The apparatus of claim 1, wherein the logic circuit is configured to insert instructions to copy data elements from a host device before execution of an iteration space block of the task and to copy data elements to the host device after completion of the iteration space block of the task, wherein the iteration space block of the task comprises a duration of the task during which data elements of the one or more arrays within a memory window associated with the iteration space block are accessed.

7. The apparatus of claim 6, wherein the logic circuit is configured to insert instructions to perform a data layout transformation while copying the data elements from the host device.

8. A method for transforming code, the method comprising: identifying, by a compiler logic circuit, a flag in the code, the flag to identify a task, wherein the task comprises at least one loop in a loop nest, the loop to process data elements in one or more arrays, the loop nest comprising one or more parallel loops; and automatically generating, by the compiler logic circuit, instructions to determine, based on a memory constraint, one or more partitions for the at least one loop to partition data elements accessed by one or more memory access instructions for the one or more arrays within the at least one loop, the memory constraint to identify an amount of memory available for allocation to process the task.

9. The method of claim 8, further comprising determining the memory constraint based on the amount of memory available to process the task at runtime.

10. The method of claim 9, wherein automatically generating instructions comprises determining one or more iteration space blocks for the parallel loops, each iteration space block to identify a subset of the data elements of the one or more arrays to be processed.

11. The method of claim 10, wherein automatically generating instructions comprises determining non-overlapping subsets of the data elements for the one or more iteration space blocks.

12. The method of claim 10, wherein automatically generating instructions comprises determining a memory window for the iteration space blocks, wherein the memory window comprises a portion of the amount of memory available for allocation to process the task for a span of one of the iteration space blocks, wherein the span comprises the data elements in the one or more arrays that are accessed during the duration of a single iteration space block of the task.

13. The method of claim 8, wherein automatically generating instructions comprises transforming array accesses.

14. A system for transforming code, the system comprising: a memory comprising dynamic random access memory, the memory storing the code; and a logic circuit coupled to the memory, the logic circuit to identify a flag in the code, the flag to identify a task, wherein the task comprises at least one loop in a loop nest, the loop to process data elements in one or more arrays, the loop nest comprising one or more parallel loops; and the logic circuit configured to automatically generate instructions to determine, based on a memory constraint, one or more partitions for the at least one loop to partition data elements accessed by one or more memory access instructions for the one or more arrays within the at least one loop, the memory constraint to identify an amount of memory available for allocation to process the task.

15. The system of claim 14, wherein the logic circuit is configured to determine non-overlapping spans for memory windows.

16. The system of claim 14, wherein the logic circuit is configured to insert instructions to copy data elements from a host device before execution of an iteration space block of the task and to copy data elements to the host device after completion of the iteration space block of the task, wherein the iteration space block of the task comprises a duration of the task during which data elements of the one or more arrays within a memory window associated with the iteration space block are accessed.

17. The system of claim 16, wherein the logic circuit is configured to insert instructions to perform a data layout transformation while copying the data elements from the host device.

18. The system of claim 17, wherein the data layout transformation comprises data transfer compression to store the data elements densely.

19. The system of claim 17, wherein the data layout transformation comprises data transposition to reduce the stride of memory accesses.

20. A non-transitory machine-readable medium comprising instructions that, when executed by a processor, cause the processor to perform operations, the operations comprising: identifying a flag in the code, the flag to identify a task, wherein the task comprises at least one loop in a loop nest, the loop to process data elements in one or more arrays, the loop nest comprising one or more parallel loops; and automatically generating instructions to determine, based on a memory constraint, one or more partitions for the at least one loop to partition data elements accessed by one or more memory access instructions for the one or more arrays within the at least one loop, the memory constraint to identify an amount of memory available for allocation to process the task.

21. The machine-readable medium of claim 20, wherein automatically generating instructions comprises determining one or more iteration space blocks for the parallel loops, each iteration space block to identify a subset of the data elements of the one or more arrays to be processed.

22. The machine-readable medium of claim 21, wherein automatically generating instructions comprises determining a memory window for the iteration space blocks, wherein the memory window comprises a portion of the amount of memory available for allocation to process the task for a span of one of the iteration space blocks, wherein the span comprises the data elements in the one or more arrays that are accessed during the duration of a single iteration space block of the task.

23. The machine-readable medium of claim 20, wherein automatically generating instructions comprises transforming array accesses.

24. The machine-readable medium of claim 20, wherein automatically generating instructions comprises inserting instructions to copy data elements from a host device before execution of an iteration space block of the task and to copy data elements to the host device after completion of the iteration space block of the task, wherein the iteration space block of the task comprises a duration of the task during which data elements of the one or more arrays within a memory window associated with the iteration space block are accessed.

25. The machine-readable medium of claim 24, wherein automatically generating instructions comprises inserting instructions to perform a data layout transformation while copying the data elements from the host device.
Compiler Transformations Using Loop and Data Partitioning

TECHNICAL FIELD

The embodiments described herein are in the field of compilers. More specifically, embodiments relate to methods and arrangements for determining code and/or data layout transformations during precompilation and/or just-in-time compilation to partition data accessed by tasks, to facilitate use of memory-limited resources.

BACKGROUND

A compiler transforms source code written in one language (such as C, C++, or Fortran) into compiled code expressed in another language (such as assembly language, machine language, or higher-level code). The compiled code can be executed by specific hardware. Compilers typically transform source code in stages, such as analysis and synthesis. The analysis phase can generate an intermediate representation of the source code to make the resulting code easier to synthesize. The synthesis phase can perform tasks such as code optimization and code generation: code optimization is used to increase the speed and/or efficiency of the compiled code, and code generation is used to generate the compiled code.

There are various high-level and low-level strategies for optimizing the target code. High-level optimizations may involve machine-independent programming operations. Low-level optimizations may involve machine-dependent transformations, such as optimizations involving task offloading. However, existing solutions are not efficient. For example, in the case where the data processing code runs on a separate device (or devices) that has its own small memory, such that the arrays being processed cannot fit in that memory as a whole, some existing solutions are inefficient.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A depicts an embodiment of a system including a host device, a memory, and a target device;

FIGS. 1B-1C depict an embodiment of target code transformed for offload to a target device, such as the target device illustrated in FIG.
1A;

FIGS. 1D-1F depict embodiments of one-dimensional and two-dimensional data partitioning performed by a compiler logic circuit, such as the compiler logic circuit shown in FIG. 1A;

FIG. 1G depicts another embodiment of target code transformed for offload to a target device, such as the target device illustrated in FIG. 1A;

FIGS. 1H-1I depict an embodiment of a data layout transformation including data transfer compression by a compiler logic circuit, such as the compiler logic circuit shown in FIG. 1A;

FIGS. 1J-1K depict an embodiment of pseudo-code for code transformation by a compiler logic circuit, such as the compiler logic circuit shown in FIG. 1A;

FIG. 2 depicts an embodiment of a compiler logic circuit, such as the compiler logic circuit shown in FIG. 1A;

FIGS. 3A-3C depict a flowchart of an embodiment of transforming code;

FIG. 4 depicts another embodiment of a system including a host device, a memory, and a target device; and

FIGS. 5-6 illustrate embodiments of a storage medium and a computing platform.

DETAILED DESCRIPTION

The following is a detailed description of the embodiments depicted in the drawings. This detailed description covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.

Some platforms, such as field-programmable gate array (FPGA)-based systems, have a very limited amount of memory that can be accessed by all computing cores of the platform, typically no more than 256K (kilobytes). Other platforms have a limited amount of memory with some specific features, such as shared local memory in Intel HD Graphics, which can be up to 128K. For various reasons, it may be desirable to offload tasks to a device with limited memory.
The following discussion refers to a memory-constrained device as the target device, to an offloaded task as the target code, and to the transformed target code as the offload task code.

To execute the target code efficiently on the target device, the host device transfers all data required to perform the task to the available memory in the target device, such as a 128K or 256K memory. Where the data does not fit in that memory, the host device does not offload the target code, or the host device returns an error. Alternatively, a programmer can estimate the memory usage of the target code and manually modify the target code to fit the limitations of the target device. Furthermore, the amount of memory available on the target device to store data may vary between platforms, and may change at runtime based on, for example, variables associated with the execution of a task that are known only during execution of that task.

The programmer can identify the target code by including a flag or mark in the code. In some embodiments, an application programming interface (API) may define the flag or mark (e.g., #pragma omp target) as an instruction for offloading a task to a target device. Many of the embodiments described herein relate to the OpenMP (Open Multi-Processing) application programming interface (OpenMP Architecture Review Board, Version 4.5, November 2015). OpenMP supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran on most platforms, instruction set architectures, and operating systems, including Solaris, AIX, HP-UX, Linux, macOS, and Windows. Other embodiments may use other flags or marks and/or other APIs to identify the target code to the compiler.

Generally speaking, methods and arrangements for transforming code are contemplated.
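As a minimal sketch of the kind of flagged task discussed above (the function name, array, and loop body are invented for illustration and do not appear in the figures), a loop marked for offload with an OpenMP flag might look like:

```c
#include <stddef.h>

/* Hypothetical flagged task: the pragma marks the loop as a target region
 * to be offloaded, and the map clause names the data the device needs.
 * A compiler without OpenMP support simply ignores the pragma and runs
 * the loop on the host. */
void scale_task(int *arr, size_t n, int factor)
{
    #pragma omp target parallel for map(tofrom: arr[0:n])
    for (size_t i = 0; i < n; i++)
        arr[i] *= factor;
}
```

Either way the loop computes the same result; the pragma only changes where and how the iterations execute.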
Embodiments may identify target code to be offloaded to a target device, and may automatically and/or autonomously generate instructions for determining one or more partitions, or chunks, for at least one parallel loop in the target code based on memory constraints of the target device, to partition the data elements accessed via one or more memory access instructions within the at least one loop. The memory constraints identify the amount of memory available for allocation to process the target code.

Many embodiments determine code and/or data layout transformations during ahead-of-time compilation and/or just-in-time compilation to partition the data elements accessed by one or more memory access instructions for one or more arrays in the target code, to facilitate use of memory-constrained resources such as local or remote target devices. Several embodiments perform compiler transformations coupled with a runtime library to implement automatic, multi-dimensional loop and data partitioning, operatively coupled with affine data transformations, to (1) implement automatic offloading of parallel loops to memory-constrained target devices, and (2) utilize limited, faster local storage to speed up memory-constrained workloads on the device. In some embodiments, the compiler can automatically generate all the code required to transform the data and to copy the data to and from the target device and/or the faster local storage.

In several embodiments, the target device may perform parallel computing via multiple processors or processor cores. For example, the host device may include an Atom-based processor platform coupled with an FPGA-based target device, and may offload tasks for parallel processing on the FPGA-based target device.

Many embodiments begin with the execution or compilation of user code by a host compiler or device.
For embodiments in which the user code is compiled ahead of time, the compiler may automatically and/or autonomously perform code transformations, and partition the data accessed by the target code, based on estimates of the memory usage of the target code and of the memory available for the target code. Some embodiments, including a just-in-time compiler, may partition the data accessed by the offloaded task based on the amount of memory actually available for use by the target code and the amount of memory the target code will access.

For the discussion herein, an iteration space chunk (also referred to as an iteration space block, a chunk, or an iteration space partition) is a portion of one or more loops or loop nests that is processed concurrently, in parallel, during execution of the transformed target code, or offload task code. A memory window is an allocation of the restricted memory that stores all the data accessed by an iteration space chunk for an entire duration, where a duration is the execution window of one iteration space chunk of the parallel loop nest in the target code. An array span is the area inside an array that is accessed by a particular parallel iteration or chunk of parallel iterations. In other words, a span is all elements from the lowest index to the highest index accessed during a particular duration or chunk of durations. The span may be multi-dimensional, in which case the elements that make up the span of a multi-dimensional array are those elements whose indices lie, on each dimension, between the lowest and highest array indexes computed from the lowest and highest induction variable values of the iteration space chunk and the array index expressions involving those induction variables.

Several embodiments also automatically and/or autonomously employ data layout transformations to use the available memory more efficiently.
In many embodiments, the data layout transformation includes instructions that are executed at run time (during execution of the target code) to use the available memory more efficiently.

For the following discussion and code examples, a vector entity composed of multiple components is written in the same way as a scalar value, but using a bold italic font and using a subscript (if present) to denote the number of components. A binary arithmetic operation (+ - * /) on two vector values produces a vector, where the operation is applied to each component individually, and the dp operation is a dot product yielding a scalar. Examples of such vector entities are: a vector of loop index variables, a vector of the coefficients at the loop index variables in an array access expression, and the set of array index functions, one per dimension, of a multidimensional array access.

The representation of a loop nest on index variables i1..iN:

for (int i1 = 0; i1 < up1; i1++) {
...
for (int iN = 0; iN < upN; iN++) {...}
...
}

collapses into

for (int iN : 0N..UPN) {...}

where iN is the vector <i1, ..., iN> of length N of the loop nest index variables, 0N is a vector of zeros, and UPN is the vector of the upper bounds of the loop nest. Where it is clear from the context, N can be omitted. The multidimensional array access expression arr[c1*i1 + d1]..[cN*iN + dN] can be collapsed into arr[[cN*iN + dN]].

In addition, the array span over a parallel iteration chunk is the smallest parallelepiped in the n-dimensional array index space such that the (n-dimensional) array index computed for any point of the iteration space within the chunk belongs to the span.

Embodiments may be designed to solve different technical issues related to memory-constrained resources, such as executing code whose accessed data must fit in the limited amount of memory available for processing tasks.
Other technical issues may include: identifying the task to be offloaded; generating instructions for the target code so that the data it accesses fits in the limited amount of memory available to the processing task; arranging for the data accessed by the target code to be copied into the limited amount of memory available for execution of the target code; determining the amount of memory available for execution of the target code; avoiding copying data that will not be accessed by the target code into the memory-limited resource; and/or similar technical issues.

Different technical issues, such as those discussed above, may be solved by one or more different embodiments. For example, some embodiments that address issues associated with memory-constrained resources may do so through one or more different technical means, such as: identifying a mark in code to identify a task, where the task includes at least one loop for processing data elements in one or more arrays; automatically generating instructions for determining one or more partitions for the at least one loop to partition the data elements based on memory constraints, where the data elements are accessed by one or more memory access instructions for the one or more arrays within the at least one loop, and the memory constraints identify the amount of memory available for allocation to process the task; determining the memory constraints based on the amount of memory available to the processing task at run time; determining the memory constraints based on an estimate of the amount of memory available to the processing task; generating instructions to determine one or more partitions for the outer loops of the task, where the one or more outer loops include parallel loops; determining one or more iteration space chunks for the parallel loops to be partitioned, each iteration space chunk identifying a subset of the data elements to be processed as a partition of the one or more arrays; determining non-overlapping subsets of the data elements for the one or more iteration space chunks; determining a memory window for each of the iteration space chunks, where the memory window comprises a portion of the amount of memory allocatable to the task for the spans of all accessed arrays, the span covering all data elements in the one or more arrays accessed over the duration of the iteration space chunk; determining non-overlapping spans for the memory windows; determining one or more partitions for the serial loops of the outer loops of the task; determining one or more partitions of the outer loops of the task; inserting instructions to call a runtime library to calculate the iteration space chunks of the one or more outer loops; inserting instructions to call the runtime library to calculate the memory windows for the one or more outer loops; partitioning one or more nested parallel outer loops; inserting instructions to copy data elements from the host device before execution of an iteration space chunk of the task and to copy the data elements back to the host device after completion of that iteration space chunk, where the iteration space chunk of the task comprises a duration of the task during which the memory access instructions access data elements in the memory window associated with the iteration space chunk; inserting instructions to perform data layout transformations while copying data elements from the host device, with data transfer compression used to selectively copy only the data accessed during execution of an iteration space chunk; transposing data elements to reduce the stride of memory accesses; collapsing at least one loop nest to reduce the number of serial loops in the loop nest; and so on.

Several embodiments include systems with multiple processor cores, such as a central server, access point, and/or station (STA), such as a modem, router, switch, server, workstation, netbook, mobile device (laptop computers, smartphones, tablets,
etc.), sensors, meters, controls, instruments, monitors, home or office appliances, Internet of Things (IoT) devices (watches, glasses, headphones, etc.), and the like. Some embodiments may provide, for example, indoor and/or outdoor "smart" grid and sensor services. In various embodiments, these devices relate to specific applications, such as healthcare, home, commercial office and retail, security, industrial automation and monitoring applications, and transportation applications (automobiles, self-driving vehicles, airplanes, etc.), and the like.

Turning now to the drawings, FIG. 1A illustrates an embodiment of a system 1000. The system 1000 is a host device, such as a computer, having host processor(s) 1020, a memory 1030, and a target device 1060. One or more buses and/or point-to-point communication links may interconnect the host processor(s) 1020, the memory 1030, and the target device 1060. In some embodiments, the system 1000 may comprise a distributed computing system, supercomputer, high-performance computing system, computing cluster, mainframe computer, mini-computer, client-server system, personal computer (PC), workstation, server, portable computer, laptop computer, tablet, handheld device such as a personal digital assistant (PDA), or other device for processing, displaying, or transmitting information. Further embodiments implement larger server configurations. Note that the term "processor" refers to a processor with a single core or to a processor package with multiple processor cores.

As shown in FIG. 1A, the host processor(s) 1020 may include one or more processors and may be coupled with the memory 1030 to execute a compiler logic circuit 1022. The compiler logic circuit 1022 may be circuitry of a processor that performs the functions of a compiler via a state machine, hard-coded logic, and/or execution of compiler code.
In some embodiments, the compiler logic circuit 1022 performs ahead-of-time compilation, and in other embodiments, the compiler logic circuit 1022 performs just-in-time compilation 1024.

The host processor(s) 1020 may execute the compiled code 1040, which is an executable version of the user code 1032 in the memory 1030. The user code 1032 may include one or more instances of target code 1034, which is marked with a tag such as #pragma omp target to identify it as code to be offloaded to the target device 1060. In some embodiments, the target code 1034 may also include a tag such as #pragma omp parallel for to indicate a loop whose iterations are executed in parallel.

The host processor(s) 1020 may utilize the compiler logic circuit 1022 to compile the user code 1032 to create compiled code for execution by the host processor(s) 1020 and the target device 1060. Compilation may involve an analysis phase in which the code is analyzed and transformed into intermediate code that assigns user data 1036 to registers of the host processor(s) 1020 for execution, and identifies the target code 1034 in the user code 1032 to be compiled for execution on the target device 1060. The analysis phase may associate one or more arrays of data in each loop or loop nest of the target code 1034 with the target data 1038 in the memory 1030. The compiler logic circuit 1022 may insert copy instructions into the target code 1034 to copy the target data 1038 to the limited memory 1070 of the target device 1060.

In this embodiment, the compiler logic circuit 1022 may automatically and/or autonomously generate instructions for transforming the data layout of the target code 1034 and/or the target data 1038 as the target data 1038 is copied to and from the task data 1072 in the limited memory 1070 of the target device 1060.
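The effect of the inserted copy instructions can be sketched as a host-side simulation (a minimal sketch: the 8-element window, the function name, and the increment computation are assumptions made for illustration, not the compiler's actual output):

```c
#include <string.h>

#define ARR_N 16
#define WIN_W 8   /* assumed size of the limited-memory window */

/* Simulates the offload pattern: copy one window of the target data into a
 * small local buffer, process it there, and copy the results back, chunk
 * by chunk, so that no more than WIN_W elements are resident at a time. */
void process_with_window(int arr[ARR_N])
{
    int loc[WIN_W];                       /* stands in for limited device memory */
    for (int chunk = 0; chunk < ARR_N / WIN_W; chunk++) {
        int s = chunk * WIN_W;            /* index offset of this chunk */
        memcpy(loc, &arr[s], sizeof loc); /* copy-in: arr[s : s+W] -> loc */
        for (int i = 0; i < WIN_W; i++)
            loc[i] += 1;                  /* the offloaded computation */
        memcpy(&arr[s], loc, sizeof loc); /* copy-out: loc -> arr[s : s+W] */
    }
}
```

The result is identical to processing arr in place; only the working-set size changes.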
In other embodiments, the compiler logic circuit 1022 may generate instructions to transform the data layout of the target code 1034 and/or the target data 1038 based on preferences, default settings, or input from a user.

In many embodiments, the compiler logic circuit 1022 may be coupled with the runtime library 1050 to implement automatic n-dimensional loop and data partitioning, where n is the depth of the parallel portion of the loop nest. In such embodiments, the compiler logic circuit 1022 may insert instructions that invoke the runtime library 1050 to calculate partition parameters, including the iteration space chunk for each loop in the outermost nest and the memory window sizes. The iteration space chunk and the array index expressions for each outer loop determine the data elements that the array access expressions access in each outer loop iteration of the loop nest. Note that the examples described herein focus on the data accessed by a loop nest rather than on the specific processing the loop nest performs, because embodiments partition accesses to data elements such that all data elements required by a chunk of the offload task code 1082 fit in the memory available for the task data 1072 in the limited memory 1070. In other embodiments, the compiler logic circuit 1022 may generate or insert code for performing the runtime library functions.

After inserting one or more calls to the runtime library 1050 to determine the partition parameters, the compiler logic circuit 1022 may chunk the loop nest, or partition the loop nest, by adding one additional loop for each parallel loop in the outermost loop nest of depth n, creating, for example, 2*n loops instead of n loops. The result is two loop nests of depth n, one (inner) loop nest nested inside the other (outer) loop nest.
The compiler logic circuit 1022 then selects the loop strides for the outer loop nest and the loop bounds for the inner loop nest so that the span of data accessed by the inner loop nest fits within the restricted memory.

After chunking the loops to create 2*n loops, a local n-dimensional "window" is allocated in the restricted memory for each accessed array. For example, the one-dimensional (1-D) partition illustrated in FIG. 1B creates a memory window for the task data 1072 in the restricted memory 1070; it is called 1-D because the compiler logic circuit 1022 modifies only the outermost loop of the loop nest. For a two-dimensional (2-D) partition, such as the embodiment illustrated in FIG. 1C, the compiler logic circuit 1022 may create a two-dimensional memory window allocated in the restricted memory 1070. Note that many embodiments generate partitions for any number of dimensions and are not limited to 1-D or 2-D partitions.

After creating the memory window, the compiler logic circuit 1022 may insert instructions in the task offload 1026 to copy the data elements from the original array arr into the local array loc in its corresponding memory window before the computation, and to copy the processed data elements back to the original array arr in the target data 1038 in host memory for each iteration of the new outer loop. In several embodiments, while copying data elements from the original array, the compiler logic circuit 1022 may optionally transform the data layout to reduce the amount of data copied (the number of data elements) and/or to improve the loop nest's memory access pattern. For example, if an array index within a loop nest accesses only odd indexes, the loop nest uses only half of, say, 16 data elements. The compiler logic circuit 1022 may insert instructions for copying only the odd-indexed elements of the array arr into the local array loc so as to fit within a memory constraint of 8 data elements.
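The odd-index case just mentioned can be sketched as a gather/scatter copy (a hedged illustration; the array sizes, the doubling computation, and the function name are assumptions, not the compiler's generated code):

```c
#define SRC_N 16
#define LOC_N (SRC_N / 2)

/* Only the odd indexes of a 16-element array are accessed, so only those
 * 8 elements are gathered into the local window, processed there, and
 * scattered back; the even-indexed elements are never copied or touched. */
void process_odd_indexes(int arr[SRC_N])
{
    int loc[LOC_N];                 /* fits the assumed 8-element constraint */
    for (int k = 0; k < LOC_N; k++)
        loc[k] = arr[2 * k + 1];    /* gather: compressed copy-in */
    for (int k = 0; k < LOC_N; k++)
        loc[k] *= 2;                /* the offloaded computation */
    for (int k = 0; k < LOC_N; k++)
        arr[2 * k + 1] = loc[k];    /* scatter: compressed copy-out */
}
```

Inside the task, the array access arr[2*k + 1] becomes the dense access loc[k], which is the index rewrite the transformation performs.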
Note that the task offload code 1026 represents a copy of the target code 1034 that is being modified, or transformed, by the compiler logic circuit 1022.

Once the compiler logic circuit 1022 inserts the copy instructions, the compiler logic circuit may transform the array accesses within the inner loop nest to change the base address from the original array to its corresponding window and to update the array index expressions.

FIG. 1B depicts an embodiment of transforming target code 1110 into task offload code 1120 for offload to a target device, such as the target code 1034, task offload 1026, and target device 1060 illustrated in FIG. 1A. The target code 1110 depicts a parallel loop nest including the tag "#pragma omp target parallel for" identifying a loop for parallel processing. The outer loop is the first loop, "for (i = 0; i < 4; i++) ...". The second (serial, or nested) loop is "for (j = 0; j < 4; j++)", and the array of elements accessed by the loop nest is the array arr, accessed as "arr[4*i + j]". The examples use the C++ programming language, but other embodiments can use any programming language.

For the purposes of this embodiment, assume that the actual or estimated number of data elements that fit in the task data 1072 of the restricted memory 1070 (the memory constraint) is 8 data elements, and that the array arr includes 16 data elements. The compiler logic circuit 1022 may therefore insert runtime library calls to calculate partition parameters such as the actual or estimated memory constraint that causes the outer loop of the loop nest to be chunked into more than one chunk, the number of iteration space chunks in the new outer loop for ii (par_is_chunkp), the span W of the local array loc (also called loc_size_xxxp), and the index offset s (also called adjxxxp).
Note that xxx represents an identifier for a specific array and may comprise numbers, characters, or an alphanumeric string.

Based on the analysis of the target code 1110 and the actual or estimated array arr data, the compiler logic circuit 1022 may determine to divide the outer loop into X chunks, thereby dividing the data accessed by the loop by X. In this example, the compiler logic circuit 1022 divides the outer loop into 2 chunks, which effectively divides the data accesses by 2. Based on the analysis of the target code 1110, the compiler logic circuit 1022 may determine the number of dimensions of the partition and whether to further partition one or more loops nested within the loop nest or to perform a data layout transformation. In this embodiment, the compiler logic circuit 1022 divides the loop nest "for (i = 0; i < 4; i++) ..." in the target code 1110 into, in the task offload 1120, an iteration space chunking comprising a first loop "for (ii = 0; ii < 2; ii++) ..." and a second loop "for (i = ii * 2; i < (ii + 1) * 2; i++) ...". The compiler logic circuit 1022 may also insert code for obtaining, from the runtime library, the value 2 for the loop nest's par_is_chunkp in the target code 1110.

The target code 1110 initializes the array arr to 16 elements with the statement "int arr[16]". The statement "const int W = 8" initializes the constant W to a memory constraint equal to 8 data elements, because in this embodiment the compiler logic circuit 1022 is partitioning one loop of the nest. The statement "int loc[W]" allocates a memory window as an array loc of eight data elements in the task data 1072 of the restricted memory 1070. The array arr is an array of data elements located in the memory 1030 of the host device, and the array loc is the array in the restricted memory 1070 that receives the target data 1038 from the memory 1030.

Note that FIGS. 1B and 1C are *not* meant to resemble actual code; they illustrate the concept of data chunking.
In practice, for example, 'const int W = 8' *may* look like 'const int W = get_span_for_array(arr, ...);'. In addition, the example sets W = 8 not only because 8 is the limited memory size, but also because the runtime library calculates 8 as the window size for arr. In other embodiments, it may be any number that fits the constraint, such as 6.

The statement "loc[:] = arr[s:W]" copies the W data elements starting at array element s from the target data 1038 into the loc array. The array element s has the value 0 at ii = 0 and the value 8 at ii = 1.

The establishment of this new outer loop over the iteration space chunks causes the target device 1060 to execute the new inner loop nest "for (i = ii * 2; i < (ii + 1) * 2; i++)" twice. The statement "int s = ii * W" sits in the outer for loop "ii = 0; ii < 2; ii++", which starts with ii equal to zero, increments ii to one, and ends after ii equals one. Therefore, s = 0 when processing the first chunk and s = 8 when processing the second chunk. In other words, the compiler logic circuit 1022 divides the original loop nest into two chunks by creating a new outer loop that uses the span of W data elements and the index offset s to chunk the loop nest: the first chunk covers i = 0 to i = 1 and the second chunk covers i = 2 to i = 3.

The compiler logic circuit 1022 generates new or modified instructions for loading data into the array loc by generating a second instruction that steps the original outer loop through only half of the array indexes (or 1/n, where n = 2), and by generating the new array access "loc[4*i + j - s]", which rewrites the indexing of the data loaded from the target data 1038 in the memory 1030 into the task data 1072 in the restricted memory 1070, thereby transforming the array access. Note that n does not correspond to the number of loops obtained.
n is the number of iteration space chunks of the original i loop, which is 4/(chunk_size).

After dividing the outer loop by 2, each iteration of the parallel for loop in the task offload 1120 loads the 8 data elements starting at index s from the array arr in the memory 1030 into the local array loc in the restricted memory 1070 for the loop nest to process. The code transformation illustrated in FIG. 1B is called a one-dimensional (1-D) partition because the compiler logic circuit 1022 divides the iteration space of a single outermost parallel loop into more than one partition, or chunk.

FIG. 1C depicts another embodiment of transforming target code 1210 into task offload code 1220 for offload to a target device, such as the target code 1034, task offload code 1026, and target device 1060 illustrated in FIG. 1A. In this embodiment, the compiler logic circuit determines to partition two parallel loops (the outer parallel loop and the nested parallel loop), which is a 2-D partition. The value of par_is_chunkp (both Ei and Ej) is 2, the value of loc_size_arrayip (Li) is 1*Ei, the value of loc_size_arrayjp (Lj) is 1*Ej, the value of adjarrayip (si) is ii*Ei*1, and the value of adjarrayjp (sj) is jj*Ej*1. Note that arrayi and arrayj replace xxx for the first and second parallel loops of the nest, respectively.

The compiler logic circuit 1022 may determine, based on the analysis of the target code 1210, that a 1-D partition used for the code transformation of the target code 1210 would not satisfy the memory constraints. For example, the span 1300 for each index i of the outer loop is illustrated in FIG. 1D. FIG. 1D illustrates the span 1300 of data elements accessed for each index i (i = 0 to i = 3) of the outer parallel loop in the target code 1210 under a 1-D partition. Even though the loop for i = 0 accesses only 4 data elements (represented by x), the values i = 0, 4, 8, and 12 span 13 data elements in the array.
These 13 data elements exceed the memory constraint of 8 data elements assumed for illustrative purposes, even if the compiler logic circuit 1022 partitions the outer loop into 4 chunks, one chunk for each index value i. The compiler logic circuit 1022 may therefore determine to apply a 2-D partition. Note that the compiler logic circuit 1022 may also, or alternatively, apply a data layout transformation to copy only the data elements actually accessed, which would reduce the span of the loc array for each iteration i to 4 data elements; this is discussed in a later example.

For 2-D partitioning, the compiler logic circuit 1022 may delinearize the array expression arr[4*j + i] into arr[i][j]. In other words, the compiler logic circuit 1022 can change the array access from a linear access, which progresses through the rows and columns of the data (or vice versa), to an access based on a vector of the column value [i] and the row value [j]. However, delinearization is not always possible; its success depends on the actual linear array index expression and on the bounds of the loop indexes participating in the expression. But it is possible in many practical situations.

As illustrated in FIG. 1C, the compiler logic circuit 1022 may insert instructions for partitioning each parallel loop of the nest by par_is_chunkp and for determining the iteration space chunks 1400, or "blocks". Note that the compiler logic circuit 1022 may determine chunks for the target code 1210 such that the chunks do not overlap. Similarly, the compiler logic circuit 1022 may determine a memory window in the restricted memory 1070 for a data space partition, such as the 2-D data space partition 1500 illustrated in FIG. 1E, and the compiler logic circuit 1022 may verify that the spans of the chunks have non-overlapping memory windows.
In other words, the compiler logic circuit 1022 can verify that the four chunks do not access overlapping memory windows, or load data from overlapping memory windows into loc for processing by more than one of the chunks, because doing so could produce invalid results.

After determining the iteration space and memory windows, the compiler logic circuit 1022 may insert instructions that call the runtime library to determine partition parameters such as Ei, Ej, Li, Lj, si, and sj. The compiler logic circuit 1022 may generate instructions comprising two new outer loops enclosing a modified version of the two original parallel loops, representing the division of the original 2-D iteration space into 2-D (Ei, Ej) chunks of Ei iterations along the i dimension and Ej iterations along the j dimension; 4/Ei and 4/Ej represent the number of chunks along each dimension. The compiler logic circuit 1022 may use the statement "int loc[Li][Lj]" to allocate the array window in the restricted memory 1070, use the statement "loc[:][:] = arr[si:Li][sj:Lj]" to copy data elements from the original array in the memory 1030 into the restricted memory 1070 in the target device 1060, and use the statement "arr[si:Li][sj:Lj] = loc[:][:]" to store them back to the memory 1030 after the computation. Note that the compiler logic circuit 1022 can also apply data layout transformations via the copy instructions or code. Note also that 'loc[:][:] = arr[si:Li][sj:Lj]' is a shortened notation for 'for (int i = 0; i < Li; i++) for (int j = 0; j < Lj; j++) loc[i][j] = arr[si + i][sj + j];'.

After inserting the code for copying the original array to the local array, the compiler logic circuit 1022 may generate instructions for transforming the array accesses.
For example, the compiler logic circuit 1022 may transform the outer parallel loop from "for (i = 0; i < 4; i++)" to "for (i = ii * Ei; i < (ii + 1) * Ei; i++)" and transform the nested parallel loop from "for (j = 0; j < 4; j++)" to "for (j = jj * Ej; j < (jj + 1) * Ej; j++)". The compiler logic circuit 1022 may also initialize si equal to "ii * Ei * 1" and sj equal to "jj * Ej * 1". In addition, the compiler logic circuit 1022 may add offsets to the local window array access, "loc[i - si][j - sj]", to address the local arrays in the restricted memory.

FIG. 1G depicts an embodiment of transforming target code 1610 into task offload code 1620 for offload to a target device, such as the target code 1034, task offload 1026, and target device 1060 illustrated in FIG. 1A. FIG. 1G shows the target code 1610, which the compiler logic circuit 1022 transforms into the task offload code 1620 using a three-dimensional (3-D) partition and a data layout transformation in the form of data compression.

The pseudo-code of the offload code 1620 is based on bsize0(1,2) and nblocks0(1,2), the chunk sizes and the numbers of chunks (partitions) of loops 0, 1, and 2. The compiler logic circuit 1022 may calculate the values of bsize0(1,2) and nblocks0(1,2) based on various factors, such as the coefficients in the array index expressions, and by using an algorithm such as the embodiment of the find-block-size pseudo-code 1900 illustrated in FIG. 1K.

As can be seen, the copy-in sequence labeled "vectorizable inward copy sequence" in the task offload code 1620 is an element-by-element copy of the accessed elements from arr to loc. Although it may not seem very efficient at first, in many cases this compression of data elements during the copy from arr to loc can be beneficial compared to an uncompressed copy.
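Such an element-by-element compressed copy-in can be sketched for one slice of the access pattern arr[17*i2 + 5*i1 + i0] with i2 fixed, i.e. arr[5*i1 + i0] (the bounds i1 in [0,3) and i0 in [0,2), and the function name, are assumptions made for illustration):

```c
#define N1 3
#define N0 2

/* Compressed copy-in: only the 6 elements the loop nest actually touches
 * (host indexes 0, 1, 5, 6, 10, 11) are gathered into the dense local
 * window, instead of block-copying the whole 12-element range 0..11. */
void compressed_copy_in(const int *arr, int loc[N1][N0])
{
    for (int i1 = 0; i1 < N1; i1++)
        for (int i0 = 0; i0 < N0; i0++)
            loc[i1][i0] = arr[5 * i1 + i0]; /* gather accessed elements only */
}
```

The inner loop is a short contiguous run, which is what makes this copy nest amenable to vectorization.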
For example, an uncompressed copy copies data in large blocks, but it copies a large amount of unused data, which wastes memory bandwidth. A compressed copy loop nest, on the other hand, is vectorizable and can show good performance on many platforms. In general, the compiler logic circuit 1022 can use a cost model to determine whether compressed copying is efficient. For example, if the length of arr is N and the number of data elements accessed is n, a higher n/N ratio is likely to mean less gain from compression, and vice versa. For clarity, the above description does not have a nested serial loop nest, and the examples in the following sections do not have a nested serial loop nest. However, data compression in the presence of a nested serial loop nest can also be performed by using the techniques presented in the embodiments described herein. The main differences from the case without a nested serial loop nest are: (1) gaps in array accesses caused by the serial part of the array index expression (a linear combination of serial loop indexes) will not be compressed, so the overall copy may be less efficient; and (2) the local memory window of an array will grow by the span of the array across the entire serial nested iteration space, which will affect the calculation of bsize0(1,2) so that the memory window still fits into the device memory. The block size calculation algorithm described in conjunction with FIG.
1K will be changed so that, when sorting the coefficients at the loop indexes in an array index expression, a coefficient at a parallel loop index cannot be reordered with a coefficient at a serial loop index; if this limitation makes it impossible to sort the coefficients as needed, data compression may or may not be possible. In addition, in order to maintain data consistency, copying data elements to the target device and copying the copied data elements back should maintain a one-to-one mapping between the elements accessed on the host device and the corresponding elements on the target device. FIGS. 1H-1I depict an embodiment of a data layout transformation including data transfer compression by a compiler logic circuit, such as the compiler logic circuit 1022 shown in FIG. 1A. FIG. 1H illustrates an embodiment 1700 of a compressed copy from an array (arr) in the memory 1030 to a local array (loc) in the restricted memory 1070. Each of the boxes 1-19 is copied from arr to loc for processing the blocks identified as block iteration 0 and block iteration 1. Solid black boxes (such as 0, 1, 5, 6, 10, and 11) illustrate the boxes that are actually accessed by the loop nests of these blocks. In other words, according to the object code 1610 illustrated in FIG. 1G, the index of the accessed array is arr[17 * i2 + 5 * i1 + i0]. The object code 1610 accesses only about half of the data elements. As discussed in conjunction with FIG. 1G, in this embodiment, if the compiler logic circuit 1022 does not perform a data layout transformation that includes data compression, the compiler logic circuit 1022 inserts code into a task offload (not shown) so that block iteration 0 makes a block copy of the entire section of arr from data element 0 to data element 5 and block iteration 1 makes a block copy of the entire section of arr from data element 6 to data element 11. FIG.
1I illustrates an embodiment 1800 of a compressed copy from the array (arr) in the memory 1030 to a local array (loc) in the restricted memory 1070 for the task offload code 1620 discussed in conjunction with FIG. 1G. The compiler logic circuit 1022 may generate vectorizable instructions for copying data elements from arr to loc (and from loc back to arr) using data compression to advantageously reduce the memory bandwidth associated with copying the data elements. The compiler logic circuit 1022 copies each of the black boxes 0, 1, 5, 6, 10, 11, 17, and 18 from arr to loc for processing the blocks identified as block iteration 0 and block iteration 1. FIGS. 1J-1K depict an embodiment of a data layout transformation including data transfer compression by a compiler logic circuit, such as the compiler logic circuit shown in FIG. 1A. FIG. 1J illustrates an embodiment 1800 of pseudo code for transforming code automatically and/or autonomously with a runtime library such as the runtime library 1050 illustrated in FIG. 1A. The general form of the object code 1034 is shown below. It is a 'serial' loop nest (S loops) nested within a 'parallel' loop nest (P0 loops), for a total of P0 + S loops. Within the serial loop nest, there is a P0 + 1-dimensional access to the array arr. The index expression of each parallel dimension is a linear function of the corresponding loop index; the final dimension is the collapse of all the 'serial' dimensions into one dimension, and its index expression is a linear function of all the 'serial' loop indexes. The compiler logic circuit 1022 may employ existing techniques to first convert a multidimensional array access into a 1-D array access (if needed, e.g., if the number of dimensions is not P0 + 1), and then 'de-linearize' it into that form. The compiler logic circuit 1022 may decide not to use all of the P0 available parallel loops for block formation, so the number of blocked (or partitioned) parallel loops is limited to P (where P <= P0).
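The compressed copy illustrated in FIGS. 1H-1I, in which only the elements actually accessed through arr[17 * i2 + 5 * i1 + i0] are transferred, can be sketched as follows. This is an illustrative model only; the inner bounds i1 in 0..2 and i0 in 0..1 per block iteration i2 are assumptions chosen to reproduce the black boxes 0, 1, 5, 6, 10, 11, 17, and 18 from the figures.

```python
# Sketch of the compressed copy discussed above (assumptions: access
# arr[17*i2 + 5*i1 + i0] from object code 1610, with i1 in 0..2 and
# i0 in 0..1 inside each block iteration i2).
def accessed_indices(i2):
    return [17 * i2 + 5 * i1 + i0 for i1 in range(3) for i0 in range(2)]

def process_block_compressed(arr, i2, update):
    idx = accessed_indices(i2)
    loc = [arr[k] for k in idx]      # vectorizable element-wise copy-in
    loc = [update(v) for v in loc]   # compute on the local window
    for n, k in enumerate(idx):      # element-wise copy-out
        arr[k] = loc[n]
    return len(loc)                  # number of elements actually transferred
```

Only six elements per block are transferred instead of a whole contiguous section of arr, which is the bandwidth saving the compressed copy aims for.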
For example, in some cases, de-linearization is not possible, and the array access remains 1-D in the form A[dp(cS'+1, iS'+1) + d], where S' is S + P0 - 1 and P is 1. The general form of the object code 1034 is:

for (int iP : 0P..PUPP - 1P) {
  ...
  for (int jS : 0S..SUPS - 1S) {
    ...
    Ak[aP * iP + ckP][dp(bkS, jS) + dk]
    ...
    Bn[xP * iP + ynP][dp(znS, jS) + wn]
    ...
  }
  ...
}

In the general form of the object code 1034:
k, n: the numbers of the accesses to the arrays A and B; different accesses can reside at different positions within the loop nest, in which case the serial loop indexes will differ; if an array index expression does not depend on a specific loop index, the corresponding coefficient of the array index expression is considered to be 0.
iP: the vector of parallel loop indexes (i1, ..., iP).
jS: the vector of serial loop indexes (j1, ..., jS).
PUPP: the vector of the upper bounds of the parallel loops (PUP1, ..., PUPP).
SUPS: the vector of the upper bounds of the serial loops (SUP1, ..., SUPS).
For the array A:
aP: the vector of coefficients (a1, ..., aP) at the parallel indexes, which is the same for all accesses.
bkS: the vector of coefficients at the serial indexes (bk1, ..., bkS).
ckP: the vector of addends in the parallel index functions (ck1, ..., ckP).
dk: the addend of the array index function in the last, serial dimension.
For the array B:
xP: the vector of coefficients (x1, ..., xP) at the parallel indexes, which is the same for all accesses.
znS: the vector of coefficients at the serial indexes (zn1, ..., znS).
ynP: the vector of addends in the parallel index functions (yn1, ..., ynP).
wn: the addend of the array index function in the last, serial dimension.
The compiler logic circuit 1022 can transform the general form of the object code through the following operations:
(1) Insert a call to the runtime library 1050 to calculate the partition parameters based on the current values of the PUPs and other related parameters.
(2) Partition the loop nest by, for example, wrapping the loop
nest in another loop nest of depth P and inserting instructions for computing the P-dimensional parallel iteration space blocks.
(3) For each accessed array, create a P-dimensional memory window allocated in the restricted memory 1070.
(4) Insert data copy instructions between the original array and its memory window in the restricted memory 1070:
- from the original array in the memory 1030 to its memory window in the local array in the restricted memory 1070, before the inner parallel loop nest begins; and
- from the memory window in the local array in the restricted memory 1070 to the original array in the memory 1030, after the inner parallel loop nest ends.
If a data layout transformation is considered profitable, the compiler logic circuit 1022 may perform the data layout transformation in the copy instructions.
(5) Transform the array accesses:
- change the base address from the original array to its corresponding memory window in the local array in the restricted memory 1070;
- update the array index expressions; and
- if the data layout transformation is enabled, the array index expression update involves changing the addend part and also changing the coefficients at the loop indexes.
Referring again to FIG. 1A, the memory 1030 may be a main memory for the platform (e.g., a dynamic random access memory (DRAM) such as a double data rate type 3 (DDR3) or type 4 (DDR4) synchronous DRAM (SDRAM)), a shared memory, a hard drive, a solid-state drive, and/or the like, and may be locally attached and/or remote. In the present embodiment, the memory 1030 is locally attached to the host processor(s) 1020. The memory 1030 may include user code 1032. The compiler logic circuit 1022 may compile the user code 1032 for execution on the host processor(s) 1020, thereby generating the compiled code 1040. The user code 1032 may include object code 1034 for execution on the target device 1060.
The compiler logic circuit 1022 may analyze the object code and compile the code using a code transformation and, optionally, a data layout transformation to generate a task offload 1026. Thereafter, the compiler logic circuit 1022 may compile the offload task into a different language, such as machine code or assembly language, before offloading the code to the target device 1060 as the task offload code 1082. The memory 1030 may be a memory in a host device of the system 1000 and may include user data 1036, and the user data may include target data 1038. The user data 1036 may include data elements accessed by the user code 1032 during execution. The target data 1038 may include data elements, such as an array arr, accessed by the object code during execution. The memory 1030 may further include a runtime library 1050. The runtime library 1050 may include support for: (1) storing the representation of the array index functions and constructing the blocked (partitioned) loop-nest iteration space in the memory 1030.
The runtime library 1050 may also (2) calculate block characteristics based on the information from (1) and the amount of available restricted memory, and provide them to the code generated by the compiler logic circuit 1022 upon request, such as:
- par_is_chunkP: the n-dimensional blocks of the parallel iteration space, chosen so that the spans of all arrays on a block fit into the restricted memory.
- loc_size_XXXP: the span of an array XXX on par_is_chunkP.
- loc_size_XXXS: the span of the array XXX on the serial part of the loop nest; if this span is greater than the available memory, chunking may fail.
- adjXXXP: the vector of adjustments of the array index expressions in the parallel dimensions of the array XXX.
- adjXXXS: the adjustment of the array index expression in the serial dimension of the array XXX.
The runtime library 1050 can calculate the adjustments adjXXXP and adjXXXS so that the adjusted array access expressions, applied to the local window of the original array, fit into [0..size], where size corresponds to the dimension of the local window along the corresponding edge. In several embodiments, the runtime library 1050 calculates these parameters as follows. During a single iteration of the parallel part of the loop nest, the serial loop indexes go from 0S to their upper bounds SUPS - 1S. Thus, the smallest local window of an array that is accessed only once is the span of the array index function over the entire serial iteration space.
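Computing the span of a linear index function over the entire serial iteration space can be sketched as follows. This is an illustrative model only; the representation of each access as a coefficient vector b plus addend d, with zero lower bounds and upper bounds SUP, follows the description, while the Python data structures are assumptions of the sketch.

```python
# Sketch of the serial-window calculation (assumptions: the serial part of
# each index function is sum(b[s] * j[s]) + d, with lower bounds zero and
# upper bounds SUP[s]).
def serial_min_max(b, SUP, d):
    # negative coefficients reach their minimum at j = SUP - 1,
    # non-negative coefficients reach their maximum there
    lo = sum(bs * (ub - 1) for bs, ub in zip(b, SUP) if bs < 0) + d
    hi = sum(bs * (ub - 1) for bs, ub in zip(b, SUP) if bs >= 0) + d
    return lo, hi

def serial_window(accesses, SUP):
    # accesses: list of (b, d) pairs, one per access to the same array
    mins, maxs = zip(*(serial_min_max(b, SUP, d) for b, d in accesses))
    minfA, maxfA = min(mins), max(maxs)
    loc_size_AS = maxfA - minfA   # window size in the serial dimension
    adjAS = minfA                 # adjustment of the index expression
    return loc_size_AS, adjAS
```

With the adjustment adjAS subtracted from the index expression, every adjusted access lands in [0, loc_size_AS], i.e., inside the local window.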
The serial part of the k-th array index function of the array A can be expressed as:
fAk(jS) = dp(bkS+, jS+) + dp(bkS-, jS-) + dk;
where S+ represents the subset of 1..S for which the corresponding coefficients at the loop indexes in the array index expression are non-negative:
bkS+ >= 0
and S- represents the complementary subset, such that
bkS- < 0.
Taking into account that all lower bounds are zero, the minimum and maximum values of the function over the serial iteration space are:
minjS fAk = dp(bkS-, SUPS- - 1S-) + dk
maxjS fAk = dp(bkS+, SUPS+ - 1S+) + dk
The global minimum and global maximum over all index functions that access the array A are:
minfA = mink fAk
maxfA = maxk fAk
The size of the local window of A in the serial dimension is calculated as
loc_size_AS = maxfA - minfA
and the adjustment of the array index expression in the serial dimension is calculated as
adjAS = minfA.
The volume of A's local window is:
VA = loc_size_A1 * loc_size_A2 * ... * loc_size_AP * loc_size_AS
where loc_size_Am is the combined span of the index functions of the m-th dimension over all accesses to the array A on par_is_chunkm. All of these functions have the form am * im + ckm, so, assuming am >= 0, the combined span is:
loc_size_Am = (am * (im + par_is_chunkm) + maxk ckm) - (am * im + mink ckm) = am * par_is_chunkm + (maxk ckm - mink ckm)
Embodiments of the runtime library 1050 may select any natural integer values of par_is_chunki, provided that:
VA + VB <= available_memory
In one embodiment, the runtime library 1050 may select all of the par_is_chunki equal to one. The adjustment of the array index expression in a parallel dimension is selected so that the value of the expression is in the interval [0, loc_size_Ai]; the adjustment is:
adjAi = -(mink ckm);
also, if am < 0, the formula may be slightly more complicated. Referring again to FIG.
1A, the target device 1060 may include a device such as a field-programmable gate array (FPGA), the target device 1060 having a plurality of target processors 1080 that can access the restricted memory 1070. The target device 1060 may receive the task offload 1082 from the host processor(s) 1020 and execute the task offload 1082. Executing the task offload 1082 may include copying a portion of the target data 1038 from the array in the memory 1030 to a memory window in the local array to process the target data 1072 of the user code 1032. FIG. 1K depicts an embodiment of pseudo code 1900, which may reside in the runtime library 1050 and is used to determine the block sizes for a data layout transformation by a compiler logic circuit, such as the compiler logic circuit 1022 shown in FIG. 1A. Note that the "//" mark indicates a comment describing the pseudo code 1900. The prerequisites for implementing this pseudo code 1900 are described in terms of a linearized form of memory access. Assume that each memory access is expressed as a base address plus an offset, where the offset is a linear expression of the form a0 * i0 + a1 * i1 + ... + aN-1 * iN-1, where ik is a loop nest induction variable, ak is a coefficient, and N is the loop nest depth. For simplicity, the coefficients are considered non-negative; the algorithm can be extended to handle negative coefficients. Assume the loop nest is perfect. The algorithm calculates a valid range of block sizes for each dimension, and the compiler is free to choose a block size within that range based on other heuristics. The pseudo code 1900 starts with a loop that, for each memory access in the loop nest, performs the following actions. The runtime library 1050 may determine a set of coefficients according to an address expression for a data element in the memory 1030. The set of coefficients may form tuples, such as [a0: 0], [a1: 1], ..., [an: n]. Each coefficient can contain at least its value and the number of the dimension to which it belongs.
For example, in a loop nest with a depth of 2, with induction variables i and j targeting dimensions 0 and 1, respectively, the memory access C[a * i + b * j] yields a set of two coefficients:
1: value a, dimension 0, or simply [a: 0] or [a: i]
2: value b, dimension 1, or simply [b: 1] or [b: j]
After the coefficients are gathered, the runtime library 1050 can reorder them in ascending order so that the value of the coefficient at position n in the sorted sequence is less than or equal to the value of the coefficient at position n + 1. For example, if a0 > a2 > a1, the sorted sequence of the set of coefficients is [a1: 1], [a2: 2], [a0: 0]. Once the coefficients are sorted, the runtime library 1050 can initialize all of the block ranges to the largest interval. An interval is a pair of numbers, a lower bound and an upper bound, such that lower bound <= upper bound. For the purposes of this embodiment, only intervals with a lower bound >= 0 are considered. For each coefficient other than the last coefficient, the upper bound is set equal to the maximum upper bound, the current coefficient dimension is set to the dimension index of the current coefficient, and the current coefficient value is set to the value of the current coefficient. If the current coefficient value is not zero, the next coefficient value is set to the value of the next coefficient, and the upper bound is set to the value of the next coefficient divided by the value of the current coefficient. Thereafter, the runtime library 1050 may determine the intersection of the current interval with the ranges for the same dimension calculated from the other memory accesses.
The intersection of intervals x and y is a new interval consisting of:
lower bound = maximum(lower bound of x, lower bound of y), and
upper bound = minimum(upper bound of x, upper bound of y).
Here are a few additional examples of block size calculations. For example 1, assume that the loop nest has a depth of 4 and contains the following address expressions:
#1: A[17 * i3 + 2 * i2 + 5 * i1 + 9 * i0]
#2: A[20 * i3 - 3 * i2 + 6 * i1 + 8 * i0]
For memory access #1:
sorted_coeffs = {[2: i2], [5: i1], [9: i0], [17: i3]}
block_range[2] = [1; 5/2] = [1; 2];
block_range[1] = [1; 9/5] = [1; 1];
block_range[0] = [1; 17/9] = [1; 1];
block_range[3] = [1; MAX];
For memory access #2:
sorted_coeffs = {[3: i2], [6: i1], [8: i0], [20: i3]}
block_range[2] = [1; 6/3] = [1; 2], intersected with the previous [1; 2] = [1; 2];
block_range[1] = [1; 8/6] = [1; 1], intersected with the previous [1; 1] = [1; 1];
block_range[0] = [1; 20/8] = [1; 2], intersected with the previous [1; 1] = [1; 1];
block_range[3] = [1; MAX], intersected with the previous [1; MAX] = [1; MAX];
For example 2, assume that the loop nest has a depth of 2 and contains the following address expressions:
#1: A[100 * i0 + i1]
#2: B[100 * i0]
For memory access #1:
sorted_coeffs = {[1: i1], [100: i0]}
block_range[1] = [1; 100/1] = [1; 100];
block_range[0] = [1; MAX];
For memory access #2:
sorted_coeffs = {[0: i1], [100: i0]}
block_range[1] = [1; MAX], intersected with the previous [1; 100] = [1; 100];
block_range[0] = [1; MAX], intersected with the previous [1; MAX] = [1; MAX];
In several embodiments, the compiler logic may have restrictions related to the input code. In one embodiment, the input to the compiler logic circuit is a loop tree with any number of accesses to any number of arrays at any non-parallel level in the loop tree.
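The block-range calculation walked through in examples 1 and 2 above can be sketched as follows. This is an illustrative model, not the pseudo code 1900 itself; the representation of each access as a list of coefficients indexed by loop dimension, and the use of infinity for MAX, are assumptions of the sketch.

```python
# Sketch of the block-size-range calculation from examples 1 and 2 above
# (assumptions: each access is a list of coefficients indexed by loop
# dimension; MAX stands in for an unbounded upper limit; negative
# coefficients are handled by taking absolute values).
MAX = float("inf")

def block_ranges(accesses, depth):
    ranges = {d: (1, MAX) for d in range(depth)}       # largest interval
    for coeffs in accesses:
        # sort dimensions by ascending coefficient magnitude
        srt = sorted(range(depth), key=lambda d: abs(coeffs[d]))
        for pos, d in enumerate(srt):
            cur = abs(coeffs[d])
            upper = MAX
            if pos + 1 < depth and cur != 0:
                # upper bound: next coefficient divided by current one
                upper = abs(coeffs[srt[pos + 1]]) // cur
            lo, hi = ranges[d]                         # intersect with prior
            ranges[d] = (max(lo, 1), min(hi, upper))
    return ranges
```

Running this on the two accesses of example 1 reproduces block_range[2] = [1; 2], block_range[1] = [1; 1], block_range[0] = [1; 1], and block_range[3] = [1; MAX].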
The P outer loops of the tree should form a perfect parallel loop nest:
- no other code can be interleaved between the loops, including the case where there are two loops at the same level; and
- the loop-nest iterations can be performed in any order; that is, the iteration for the values <x1, ..., xP> of the iP loop index parameters can be performed before or after another iteration <y1, ..., yP> without affecting the result of the calculation.
In some of these embodiments, the loops inside a parallel loop nest may have any nesting structure. The following further restrictions apply:
1. All array index functions of the loop indexes must be linear.
2. (1) The loop boundaries and (2) the array access parameters must be loop tree invariants.
3. Given an array arr and a set of K accesses to the array in the form arrk[[akP * iP + ckP]][...] (k = 1..K), nested within the outermost parallel loop nest of depth P, the following condition must be met: for any x and y, and for all j = (1..P), axj = ayj. That is, the coefficients at the parallel indexes should be the same in all accesses to the same array.
4. The spans of any accessed array on any two multidimensional parallel iterations should not overlap. In the 2-D partitioning example in FIG. 1C, there are no nested serial loops and the span is only one point in each 2-D iteration. No two parallel iterations access the same memory location, so 2-D partitioning can be applied. However, when 1-D partitioning is applied to the 2-D partitioning example in FIG. 1C, the 1-D spans of the array arr on the outermost parallel loop overlap. So even if the available device memory were 13 elements (sufficient to cover the entire span), the 1-D partition would still be impossible due to the overlap.
This is because, when there is overlap, copying the local window back to the array in the memory 1030 will overwrite the elements calculated by different parallel iterations. If the parallel loop nest is imperfect, or restriction 3 does not hold for all parallel loops, the transformation replaces P with the maximum number L < P such that all restrictions hold for the L outermost parallel loops. In addition, whenever possible, the compiler logic circuits and runtime libraries can attempt multi-dimensional partitioning; otherwise, they can revert to 1-D partitioning, such as the embodiment shown in FIG. 1B. For clarity, the illustrated embodiments show two arrays, but partitioning can be done with any number of arrays and any number of accesses to each array. Note that the expression arr[s:W] refers to a section of arr having a start index s and a length W. Similarly, loc[:] refers to the section of the loc array that covers all of its elements. Many embodiments describe sample code for OpenMP, but OpenMP currently does not provide a clear syntax for expressing multi-dimensional parallel loop nests, and the embodiments described herein require this clear syntax to apply multi-dimensional partitioning. So, currently, a perfect loop nest of loops marked with "#pragma omp parallel for" is used, without adding any additional clauses to the inner parallel loops. This is a legal solution to the lack of an explicit syntax: since it is assumed that nested OpenMP parallelism is disabled for the target device, treating those nested parallel loops as multidimensional parallel loops will not break the OpenMP semantics. FIG. 2 depicts an embodiment of a compiler logic circuit 2000, such as the compiler logic circuit 1022 shown in FIG. 1A.
The compiler logic circuit 2000 may include a production compiler for compiling a binary for distribution, a compiler for compiling a binary for a user or a developer, or a just-in-time compiler for compiling a binary while executing it. The compiler logic circuit 2000 may comprise circuitry; a combination of code and circuitry for executing the code; or a combination of code and a processor for executing the code. For example, the compiler logic circuit 2000 may include a state machine and/or an application-specific integrated circuit (ASIC) to perform some or all of the functions of the compiler logic circuit 2000. The compiler logic circuit 2000 may compile source code in phases including an intermediate phase and a machine code generation phase. In addition, the compiler logic circuit 2000 may automatically and/or autonomously identify the object code to be offloaded to the target device and generate instructions that transform the code to fit the memory constraints of the target device. Specifically, the compiler logic circuit 2000 can transform the object code to partition the data accesses so that the data accessed by each partition or block of the object code will fit within the memory constraints. For example, if the target device is a multi-processor accelerator and includes a limited memory that is shared by all of the processors, or a specialized limited memory that is particularly fast for the purpose of processing the object code, the compiler logic circuit 2000 may include a task identifier for identifying the object code, and may include a code conversion logic circuit 2020 for converting the object code to meet the memory constraint requirements for offloading tasks to the target device. The compiler logic circuit 2000 may include a task identifier 2010, a runtime library 2012, and a code conversion logic circuit 2020. In many embodiments, the object code resides in user code that will execute on the host device.
The object code may include a mark or flag identifying the object code within the user code as the code to be offloaded to the target device. The runtime library 2012 may include logic circuitry for performing calculations on the object code to determine partition parameters for partitioning accesses to the data into iteration space blocks. For example, the runtime library 2012 may include code for computing an n-dimensional block of the parallel iteration space such that the spans of all arrays on the block do not exceed the memory constraints. The runtime library 2012 may include code for calculating the span of an array on an n-dimensional block of the parallel iteration space. The runtime library 2012 may include code for calculating the span of an array over a serial, or sequential, portion of a loop nest. The runtime library 2012 may include code for computing an adjustment for an array index expression in a parallel dimension of an array. And the runtime library 2012 may include code for calculating an adjustment for an array index expression in a serial dimension of the array. The code conversion logic circuit 2020 may include a loop partition logic circuit 2030 and a code generation logic circuit 2050. The loop partition logic circuit 2030 can partition parallel loops and sequential loops. A sequential loop can be a loop that is nested within a parallel loop nest. In many embodiments, the loop partition logic circuit 2030 may include a runtime library caller 2032, a P-dimensional loop partitioner 2034, and a memory window determiner 2040. The runtime library caller 2032 may insert code (or instructions) for calling a runtime library function. The runtime library functions may reside in the runtime library 2012 and/or be added to the object code during compilation.
In many embodiments, the runtime library caller 2032 may call a function included in the runtime library 2012 for calculating the partition parameters. The P-dimensional loop partitioner 2034 may partition a nest of one or more parallel loops into P-dimensional iteration space blocks. In some embodiments, the P-dimensional loop partitioner 2034 may partition one or more outer loops of a parallel loop nest into 1-dimensional iteration space blocks, or may partition one or more outer loops and a loop nested within each outer loop into 2-D iteration space blocks. The P-dimensional loop partitioner 2034 determines the number of blocks based on an analysis of the object code and the memory constraints of the target device. For example, if it is determined that de-linearization is possible for a parallel loop nest, the P-dimensional loop partitioner 2034 may perform 1-D partitioning for the parallel loop nest. The memory window determiner 2040 may determine a data space memory window covering the span of each of the iteration space blocks. In other words, the memory window determiner 2040 may map the data elements of each iteration space block across all of the nested parallel loops to determine non-overlapping allocations in the limited memory of the target device. If the spans of the iteration space blocks overlap, the memory window determiner 2040 may instruct the code generation logic circuit 2050 to perform a data layout transformation so that the spans of the iteration space blocks do not overlap. The code generation logic circuit 2050 may generate instructions (or code) for partitioning a P-dimensional parallel loop nest in the object code. The code generation logic circuit 2050 may include a copy logic circuit 2060 and an array access transformation 2080.
The copy logic circuit 2060 may generate copy instructions for insertion into the offload task; the copy instructions copy data elements from the memory in the host device to the limited memory in the target device before processing an iteration space block of the object code, and copy the data elements back to the host device after processing the iteration space block of the object code. In addition, the copy logic circuit 2060 may include a data layout transformation logic circuit 2040 that may transform the data layout of the data elements copied from the host device to the target device for processing the iteration space blocks of the object code. The data layout transformation logic circuit 2040 may include a data transfer compression 2042, a transpose 2044, and a loop collapse 2046. In other embodiments, additional loop optimizations may also benefit from data layout transformations, such as data padding used to achieve higher vectorization. The data transfer compression 2042 may compress data during transmission from the host device to the target device. Transmitting blocks of sparse data puts additional pressure on the data transmission subsystem by transmitting redundant data and making more calls to the data transmission API. After execution on the target device is completed, the data is copied from the local memory back to the original output array. The data transfer compression 2042 may modify the emitted code for corresponding execution on the target device to reflect the changes in the accesses to the elements.
For example, the data transfer compression 2042 may copy only the data elements accessed by an iteration space block of the object code, rather than copying all of the data elements within a contiguous block of host memory, such as the contiguous blocks of data elements in the array arr discussed in conjunction with FIGS. 1A-1K. The following are the original source or object code, the task offload code with a 2-D partition, and the task offload code with data compression: Memory allocation on the target device is reduced by a factor of:
(9 * bsize1 * bsize0 + 10 * bsize1 * 2 * bsize0) / (2 * bsize0 * bsize1) = (9 + 20) / 2.
This is a factor of about 15. The amount of data transmitted between the host device and the target device is also reduced by approximately the same factor. The transpose 2044 can transpose rows and columns of data while copying data elements from the host device to the target device. For example, some target data may represent a table, and the data elements of the table may reside in an array as a series of rows of the table or as a series of columns of the table. Some loop nests in user code contain both unit-stride and non-unit-stride accesses within the same loop nest. In some examples, if a copy instruction transposes the rows and columns of the data elements from a table based on the stride of the accesses to the data elements, the object code accesses the data more efficiently. To illustrate, the target data can reside in an array as a series of rows, meaning that each data element of the first row resides in the array, followed by each element of the second row, and so on through the entire table.
If the object code accesses each data element of the first column in series and then accesses each data element of the second column in series, the data element accesses for the column data stride over the data elements of each row; such accesses can therefore be improved to have a unit stride (i.e., adjacent memory entries) instead of a stride across all of the data elements of each row. The following code transformations are examples of transpose: The loop collapse 2046 may apply a loop collapse technique to expose additional opportunities for vectorization and to reduce the loop maintenance overhead in deep loop nests. In other words, nested loops can be collapsed into a single loop to reduce the number of nested loops in the object code offloaded to the target device. In a further embodiment, the loop collapse 2046 may apply a loop collapse technique to reduce the loop nesting depth without vectorization. The following code transformation is an example of a loop collapse: The array access transformation 2080 may change the base address from the original array to its corresponding memory window in the local data in the restricted memory. In addition, the array access transformation 2080 can update the array index expressions if necessary. For example, some embodiments modify the array index expressions to account for data compression during the transmission of data from the host device to the target device. FIGS. 3A-3C depict flowcharts of embodiments of transforming code for offloading to a target device with a dedicated restricted memory, such as a fast memory on a graphics accelerator card, a restricted memory shared among multiple processors on an FPGA board, or another restricted memory. FIG. 3A illustrates a flowchart 3000 of transforming an object code to be offloaded to a target device. The flowchart 3000 begins with a compiler logic circuit identifying a flag in code to identify a task, where the task includes at least one loop for processing data elements in one or more arrays (element 3005).
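The transpose-during-copy discussed above, in which a row-major table is copied into a column-major local window so that a column scan becomes unit-stride, can be sketched as follows. This is an illustrative model only, not the transpose 2044's actual generated code; the flat row-major host array with R rows and C columns is an assumption of the sketch.

```python
# Sketch of a transpose during copy-in (assumptions: the table is stored
# row-major in a flat host array of R rows and C columns; the local window
# is filled column-major so that a column scan becomes unit-stride).
def copy_in_transposed(arr, R, C):
    loc = [0] * (R * C)
    for r in range(R):
        for c in range(C):
            loc[c * R + r] = arr[r * C + c]  # column-major local layout
    return loc

def read_column(loc, R, col):
    # after the transpose, a column is a contiguous (unit-stride) slice
    return loc[col * R:(col + 1) * R]
```

After the copy-in, the column traversal that originally strode over a whole row per element touches adjacent memory entries instead.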
In many embodiments, a compiler logic circuit (such as compiler logic circuit 2000 in FIG. 2 and compiler logic circuit 1022 in FIG. 1A) may receive user code for compilation, the user code including target code to be offloaded to another device or processor, referred to as the target device. The target device can offer advantages for processing the target code, such as fast but limited memory, parallel processing, and so on. In several embodiments, the memory on the target device has a memory constraint that prevents some target code from being offloaded without modification. The target code may include loops in loop nests, and the loop nests may include one or more parallel loops. The memory requirements of the data accessed by a loop may exceed the memory constraint; in other words, the target code cannot be offloaded to the target device unless the target code is modified to reduce its memory requirements. The flag identified by the compiler logic circuit may mark a line in the target code, identifying the beginning and/or end of the code to be offloaded, so that the compiler logic circuit can transform and offload the target code. After identifying the flag(s), the compiler logic circuit may automatically generate instructions for determining one or more partitions for the at least one loop to partition data elements based on the memory constraint, the data elements being accessed by one or more memory access instructions for one or more arrays within the at least one loop, the memory constraint identifying the amount of memory available for allocation to process the task (element 3010). Generally speaking, the compiler logic circuit can generate instructions for creating task offload code. The task offload code can perform the same computation as the original target code, but can partition the target code into iteration space blocks that use an amount of memory determined by the memory constraint.
In other words, the compiler logic circuit may transform the code such that one or more parallel loop nests that process arrays of data are partitioned into iteration space blocks of those parallel loop nests, the partitions being based on the amount of data accessed by each iteration space block. As a result, the target device can process one block of target code at a time without exceeding the memory constraint. In a further embodiment, the compiler logic circuit also changes or transforms the data layout of the data elements of the target code to increase the efficiency of memory bandwidth usage and of the limited memory. FIG. 3B illustrates a flowchart 3100 for determining a block size for a data layout transformation. Flowchart 3100 begins by determining a set of coefficients based on an address expression (element 3105). The compiler logic circuit (such as compiler logic circuit 2000 in FIG. 2 and compiler logic circuit 1022 in FIG. 1A) may generate instructions that evaluate address expressions to discover the data elements to be copied to the target device for execution of the task offload code. In some embodiments, the code may reside in a runtime library that is accessed by, or inserted into, the target code to generate the task offload code. The expression may include a linear function of the array indices with scalar coefficients, such as A[17*i3 + 2*i2 + 5*i1 + 9*i0]. The scalar coefficients are 17, 2, 5, and 9. The compiler logic circuit can access the coefficients from memory (element 3110) and reorder the elements in ascending order of the values of the coefficients. For example, the sorted coefficients for A[17*i3 + 2*i2 + 5*i1 + 9*i0] are {[2: i2], [5: i1], [9: i0], [17: i3]}. After reordering the elements in ascending order, the compiler logic circuit can initialize the block ranges to their maximum intervals (element 3120).
For example, for A[17*i3 + 2*i2 + 5*i1 + 9*i0], the maximum interval of block range[2] = 5/2 = [1; 2], the maximum interval of block range[1] = 9/5 = [1; 1], the maximum interval of block range[0] = 17/9 = [1; 1], and the maximum interval of block range[3] = [1; MAX]. Once the block ranges are initialized to their maximum intervals, the compiler logic circuit can determine a valid range of block sizes for each dimension (element 3130). The compiler logic circuit can intersect the range of the current interval with the ranges calculated for the same dimension from other memory accesses. The intersection of intervals x and y is a new interval with: lower bound = maximum(lower bound of x, lower bound of y), and upper bound = minimum(upper bound of x, upper bound of y). For illustration, a second memory access in the above example may be A[20*i3 - 3*i2 + 6*i1 + 8*i0]. The sorted coefficients for the second access are {[3: i2], [6: i1], [8: i0], [20: i3]}. For this access, the maximum interval of block range[2] = 6/3 = [1; 2], the maximum interval of block range[1] = 8/6 = [1; 1], the maximum interval of block range[0] = 20/8 = [1; 1], and the maximum interval of block range[3] = [1; MAX]. Intersecting the maximum intervals for each block range, the intersection for block range[2] has a lower bound that is the maximum of the lower bounds [1, 1], and an upper bound that is the minimum of the upper bounds [2, 2]; the intersection is therefore [1, 2]. Similarly, the intersection for block range[1] is [1, 1], the intersection for block range[0] is [1, 1], and the intersection for block range[3] is set to [1, MAX]. FIG. 3C illustrates a flowchart 3200 for transforming code with a compiler logic circuit, such as the compiler logic circuit 2000 illustrated in FIG. 2 and the compiler logic circuit 1022 in FIG. 1A. The flowchart 3200 begins by inserting a call to a runtime library function for computing partition parameters (element 3205).
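The block-range computation walked through above can be sketched as follows. This is a minimal sketch under an assumed rule: for coefficients sorted in ascending order, each dimension's upper bound is the ratio of the next larger coefficient to its own (integer division), and the dimension of the largest coefficient is unbounded ([1; MAX]). Note this rule reproduces the first access's intervals exactly; for the second access it yields [1; 2] where the text reports [1; 1] for block range[0] (20/8), but the intersected result is [1, 1] either way, matching the text:

```python
# Sketch of per-access block ranges and their intersection, assuming the
# integer-division rule described in the lead-in above.
MAX = float('inf')

def max_intervals(coeffs):
    """coeffs maps dimension -> |coefficient| for one memory access."""
    ordered = sorted(coeffs.items(), key=lambda kv: kv[1])  # ascending
    ranges = {}
    for (dim, c), (_, nxt) in zip(ordered, ordered[1:]):
        ranges[dim] = (1, nxt // c)      # bounded by the next coefficient
    ranges[ordered[-1][0]] = (1, MAX)    # largest coefficient: unbounded
    return ranges

def intersect(x, y):
    # Intersection of intervals: max of lower bounds, min of upper bounds.
    return (max(x[0], y[0]), min(x[1], y[1]))

# A[17*i3 + 2*i2 + 5*i1 + 9*i0] and A[20*i3 - 3*i2 + 6*i1 + 8*i0]
a = max_intervals({3: 17, 2: 2, 1: 5, 0: 9})
b = max_intervals({3: 20, 2: 3, 1: 6, 0: 8})
blocks = {d: intersect(a[d], b[d]) for d in a}
assert blocks[2] == (1, 2) and blocks[1] == (1, 1)
assert blocks[0] == (1, 1) and blocks[3] == (1, MAX)
```

The valid block-size range per dimension is then the intersection across every memory access in the loop nest.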
The compiler logic circuit may transform the user code into intermediate code during an analysis phase, and then begin to transform the target code within the user code that is identified for offloading to the target device. Inserting one or more calls to the runtime library causes the target code to call the runtime library to execute functions that the host device or runtime environment can execute more efficiently than the target device. In some embodiments, the target device can execute assembly or machine language; in other embodiments, the target device may execute a higher-level language. After inserting the call to the runtime library, the compiler logic circuit may insert code to determine blocks of the parallel iteration space (which may also be referred to as chunks or partitions) of dimension P (element 3210). The runtime library function may determine par_is_chunkp for one or more parallel loop nests, where par_is_chunkp is an n-dimensional iteration space block of the parallel iteration space. The compiler logic circuit can insert iteration space code (such as code for establishing constants and variables) to create one or more new outer loops as needed to create the iteration space partitions. Thereafter, the compiler logic circuit may insert code for allocating memory windows in the restricted memory and, for multi-dimensional partitioning, may insert code for allocating p-dimensional windows in the restricted memory (element 3215).
In many embodiments, the compiler logic circuit can insert code to initialize a one-dimensional or multi-dimensional local array (such as loc) in the restricted memory with the amount of memory that has been calculated for each iteration space block of the offloaded code. The compiler logic circuit can also insert code for each iteration space block of the target code to copy data elements from the host device memory to the target device before processing the data elements, and code to copy the data elements back to the host device after processing each iteration space block of the outer loop nest (element 3220). In several embodiments, the compiler logic circuit may insert a copy instruction for copying data elements from a host array in the memory of the host device to the local array in the limited memory of the target device before computation by an iteration space block, and insert a copy instruction for copying the data elements from the local array back to the host array after the computation by the iteration space block is complete. If the compiler logic circuit decides to perform a data layout transformation (element 3225), the compiler logic circuit can determine the data elements accessed within each nested loop (element 3230) and insert code to transform the data layout (element 3245). For example, the compiler logic circuit may copy only the data elements accessed by the iteration space block of the target code to compress the data, transpose the data layout, and/or collapse some of the nested loops. If the compiler logic circuit determines that no data layout transformation is to be performed (element 3225), the compiler logic circuit may modify the array index expressions in the target code to process the data elements of the iteration space block instead of all of the data elements (element 3230).
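The chunked copy-in/compute/copy-out pattern described above can be sketched in miniature. All names here (host, loc, CHUNK) are illustrative; the local array stands in for the target device's restricted memory, and the rewritten index i - base corresponds to the offset the array access transformation adds to the index expressions:

```python
# Minimal sketch of the chunked-offload pattern: the new outer loop walks
# iteration-space blocks, copies each block into a small "local" array
# (standing in for restricted memory), computes with offset-adjusted
# indices, then copies results back to the host array.
CHUNK = 4               # block size chosen to satisfy the memory constraint
host = list(range(10))  # host-device array

for base in range(0, len(host), CHUNK):   # one iteration-space block
    hi = min(base + CHUNK, len(host))
    loc = host[base:hi]                   # copy-in to restricted memory
    for i in range(base, hi):             # original loop indices
        loc[i - base] *= 2                # index rewritten with an offset
    host[base:hi] = loc                   # copy-out back to the host

print(host)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

At no point does the "device" hold more than CHUNK elements, which is the point of partitioning by the memory constraint.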
For example, the compiler logic circuit can add offsets to the array element expressions in array access instructions. After modifying the array index expressions, the compiler logic circuit can determine whether there is any further target code to be processed (element 3235). If there is more target code to be processed, the flowchart 3200 returns to element 3205. Otherwise, the compiler logic circuit may finish the compilation of the user code (element 3250). Completing the compilation may involve transforming the target code into a form executable for offloading to the target device, and transforming the user code into a form executable on the host device. FIG. 4 illustrates an embodiment of a system 4000 such as the system 1000 in FIG. 1A. System 4000 is a computer system with multiple processor cores, such as a distributed computing system, supercomputer, high-performance computing system, computing cluster, mainframe computer, minicomputer, client-server system, personal computer (PC), workstation, server, portable computer, laptop computer, tablet computer, handheld device such as a personal digital assistant (PDA), or other device for processing, displaying, or transmitting information. Similar embodiments may include, for example, entertainment devices such as portable music players or portable video players, smartphones or other cellular phones, telephones, digital video cameras, digital still cameras, external storage devices, and the like. Further embodiments implement larger-scale server configurations. In other embodiments, the system 4000 may have a single processor with one core or more than one processor core. Note that the term "processor" refers to a processor with a single core or a processor package with multiple processor cores. As shown in FIG. 4, system 4000 includes a motherboard 4005 for mounting platform components.
The motherboard 4005 is a point-to-point interconnect platform including a first processor 4010 and a second processor 4030 coupled via a point-to-point interconnect 4056 such as an Ultra Path Interconnect (UPI). In other embodiments, the system 4000 may use another bus architecture, such as a multi-drop bus. Furthermore, each of the processors 4010 and 4030 may be a processor package with multiple processor cores; the processors 4010 and 4030 include processor core(s) 4020 and 4040, respectively. While the system 4000 is an example of a two-socket (2S) platform, other embodiments may include more than two sockets or one socket. For example, some embodiments may include a four-socket (4S) platform or an eight-socket (8S) platform. Each socket is a mount for a processor and may have a socket identifier. Note that the term platform refers to a motherboard with certain components mounted, such as the processor 4010 and the chipset 4060. Some platforms may include additional components, and some platforms may include only sockets to mount the processors and/or the chipset. The first processor 4010 includes an integrated memory controller (IMC) 4014 and point-to-point (P-P) interfaces 4018 and 4052. Similarly, the second processor 4030 includes an IMC 4034 and P-P interfaces 4038 and 4054. The IMCs 4014 and 4034 couple the processors 4010 and 4030, respectively, to corresponding memories: a memory 4012 and a memory 4032. The memories 4012 and 4032 may be portions of the main memory for the platform (such as the main memory 478 in FIG. 4), such as double data rate type 3 (DDR3) or type 4 (DDR4) synchronous dynamic random access memory (SDRAM). In the present embodiment, the memories 4012 and 4032 locally attach to the respective processors 4010 and 4030.
In other embodiments, the main memory may couple with the processors via a bus and a shared memory hub. The processors 4010 and 4030 comprise caches coupled with processor core(s) 4020 and 4040, respectively. The first processor 4010 couples to the chipset 4060 via P-P interconnects 4052 and 4062, and the second processor 4030 couples to the chipset 4060 via P-P interconnects 4054 and 4064. Direct Media Interfaces (DMIs) 4057 and 4058 may couple the P-P interconnects 4052 and 4062 and the P-P interconnects 4054 and 4064, respectively. The DMI may be a high-speed interconnect, such as DMI 3.0, that facilitates, for example, eight giga-transfers per second (GT/s). In other embodiments, the processors 4010 and 4030 may interconnect via a bus. The chipset 4060 may comprise a controller hub such as a platform controller hub (PCH). The chipset 4060 may include a system clock to perform clocking functions and include interfaces for an I/O bus such as a universal serial bus (USB), peripheral component interconnect (PCI), serial peripheral interconnect (SPI), integrated interconnect (I2C), and the like, to facilitate connection of peripheral devices on the platform. In other embodiments, the chipset 4060 may comprise more than one controller hub, such as a chipset with a memory controller hub, a graphics controller hub, and an input/output (I/O) controller hub. In the present embodiment, the chipset 4060 couples with a trusted platform module (TPM) 4072 and the UEFI, BIOS, and flash memory component 4074 via an interface (I/F) 4070. The TPM 4072 is a dedicated microcontroller designed to secure hardware by integrating cryptographic keys into devices. The UEFI, BIOS, and flash component 4074 can provide pre-boot code. Furthermore, the chipset 4060 includes an I/F 4066 to couple the chipset 4060 with a high-performance graphics engine, a graphics card 4065, and an accelerator card 4067.
The I/F 4066 may be, for example, a peripheral component interconnect express (PCIe) interface. The graphics card 4065 and the accelerator card 4067 may comprise a target device such as the target device 1060 illustrated in FIG. 1A. Referring again to FIG. 4, various I/O devices 4092 couple to the bus 4081, along with a bus bridge 4080 that couples the bus 4081 to a second bus 4091, and an I/F 4068 that connects the bus 4081 with the chipset 4060. In one embodiment, the second bus 4091 may be a low pin count (LPC) bus. Various devices may couple to the second bus 4091 including, for example, a keyboard 4082, a mouse 4084, communication devices 4086, and a data storage unit 4088 that may store code such as the compiler code 4098. The compiler code 4098 may include code to implement the compiler logic circuit 1022 illustrated in FIG. 1A, and may also include code to implement the compiler logic circuits 4022 and 4042 in the processor cores 4020 and 4040, respectively. The compiler code 4098 can compile target code located in a memory such as the memory 4012, the memory 4032, the registers 4016, the registers 4036, the data storage 4088, the I/O devices 4092, and/or any other data storage accessible by the system 4000. Furthermore, an audio I/O 4090 may couple to the second bus 4091. Many of the I/O devices 4092, the communication devices 4086, and the data storage unit 4088 may reside on the motherboard 4005, while the keyboard 4082 and the mouse 4084 may be add-on peripherals. In other embodiments, some or all of the I/O devices 4092, communication devices 4086, and data storage unit 4088 are add-on peripherals and do not reside on the motherboard 4005. FIG. 5 illustrates an example of a storage medium 5000 to store code such as the compiler code 4098 illustrated in FIG. 4. The storage medium 5000 may comprise an article of manufacture.
In some examples, the storage medium 5000 may include any non-transitory computer-readable medium or machine-readable medium, such as an optical, magnetic, or semiconductor storage. The storage medium 5000 may store various types of computer-executable instructions, such as instructions to implement the logic flows and/or techniques described herein. Examples of a computer-readable or machine-readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or rewriteable memory, and so forth. Examples of computer-executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context. FIG. 6 illustrates an example computing platform 6000 such as the system 1000 illustrated in FIG. 1A and the system 4000 illustrated in FIG. 4. In some examples, as shown in FIG. 6, the computing platform 6000 may include a processing component 6010, other platform components 6025, or a communications interface 6030. According to some examples, the computing platform 6000 may be implemented in a computing device such as a server in a system such as a data center or server farm that supports a manager or controller for managing configurable computing resources as mentioned above. Furthermore, the communications interface 6030 may include a wake-up radio (WUR) and may be capable of waking up a master radio of the computing platform 6000. According to some examples, the processing component 6010 may execute processing operations or logic for the apparatus 6015 described herein. The processing component 6010 may include various hardware elements, software elements, or a combination of both.
Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements, which may reside in the storage medium 6020, may include software components, programs, applications, computer programs, application programs, device drivers, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. However, determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given example. In some examples, other platform components 6025 may include common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components (e.g., digital displays), power supplies, and so forth.
Examples of memory units may include without limitation various types of computer-readable and machine-readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), double data rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory), solid state drives (SSDs), and any other type of storage media suitable for storing information. In some examples, the communications interface 6030 may include logic and/or features to support a communication interface. For these examples, the communications interface 6030 may include one or more communication interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links. Direct communications may occur via use of communication protocols or standards described in one or more industry standards (including progenies and variants), such as those associated with the PCI Express specification. Network communications may occur via use of communication protocols or standards such as those described in one or more Ethernet standards promulgated by the Institute of Electrical and Electronics Engineers (IEEE). For example, one such Ethernet standard may include IEEE 802.3-2012, Carrier sense Multiple access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, published in December 2012 (hereinafter "IEEE 802.3").
Network communication may also occur according to one or more OpenFlow specifications, such as the OpenFlow Hardware Abstraction API Specification. Network communications may also occur according to the Infiniband Architecture Specification, Volume 1, Release 1.3, published in March 2015 ("the Infiniband Architecture specification"). The computing platform 6000 may be part of a computing device that may be, for example, a server, a server array or server farm, a web server, a network server, an Internet server, a workstation, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, or a combination thereof. Accordingly, functions and/or specific configurations of the computing platform 6000 described herein may be included or omitted in various embodiments of the computing platform 6000, as suitably desired. The components and features of the computing platform 6000 may be implemented using any combination of discrete circuitry, ASICs, logic gates, and/or single chip architectures. Further, the features of the computing platform 6000 may be implemented using microcontrollers, programmable logic arrays, and/or microprocessors, or any combination of the foregoing, where suitably appropriate. It is noted that hardware, firmware, and/or software elements may be collectively or individually referred to herein as "logic". It should be appreciated that the exemplary computing platform 6000 shown in the block diagram of FIG. 6 may represent one functionally descriptive example of many potential implementations.
Accordingly, division, omission, or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software, and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments. One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which, when read by a machine, computing device, or system, causes the machine, computing device, or system to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores", may be stored on a tangible machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given implementation. Some examples may include an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or rewriteable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that, when executed by a machine, computing device, or system, cause the machine, computing device, or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like.
The instructions may be implemented according to a predefined computer language, manner, or syntax, for instructing a machine, computing device, or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language. Some examples may be described using the expression "in one example" or "an example" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the example is included in at least one example. The appearances of the phrase "in one example" in various places in the specification are not necessarily all referring to the same example. Some examples may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled", however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single example for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example.
In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein", respectively. Moreover, the terms "first", "second", "third", and so forth are used merely as labels, and are not intended to impose numerical requirements on their objects. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. The term "code" covers a broad range of software components and constructs, including applications, drivers, processes, routines, methods, modules, firmware, microcode, and subprograms. Thus, the term "code" may be used to refer to any collection of instructions which, when executed by a processing system, perform a desired operation or operations. The logic circuitry, devices, and interfaces herein described may perform functions implemented in hardware and also implemented with code executed on one or more processors. Logic circuitry refers to the hardware or the hardware and code that implements one or more logical functions. Circuitry is hardware and may refer to one or more circuits. Each circuit may perform a particular function.
A circuit of the circuitry may comprise discrete electrical components interconnected with one or more conductors, an integrated circuit, a chip package, a chip set, memory, or the like. Integrated circuits include circuits created on a substrate such as a silicon wafer and may comprise components. Integrated circuits, processor packages, chip packages, and chipsets may comprise one or more processors. Processors may receive signals such as instructions and/or data at the input(s) and process the signals to generate at least one output. While executing code, the code changes the physical states and characteristics of the transistors that make up a processor pipeline. The physical states of the transistors translate into logical bits of ones and zeros stored in registers within the processor. The processor can transfer the physical states of the transistors into registers and transfer the physical states of the transistors to another storage medium. A processor may comprise circuits to perform one or more sub-functions implemented to perform the overall function of the processor. One example of a processor is a state machine or an application-specific integrated circuit (ASIC) that includes at least one input and at least one output. A state machine may manipulate the at least one input to generate the at least one output by performing a predetermined series of serial and/or parallel manipulations or transformations on the at least one input. The logic as described above may be part of the design for an integrated circuit chip. The chip design is created in a graphical computer programming language, and stored in a computer storage medium or data storage medium (such as a disk, tape, physical hard drive, or virtual hard drive such as in a storage access network).
If the designer does not fabricate chips or the photolithographic masks used to fabricate chips, the designer transmits the resulting design, by physical means (e.g., by providing a copy of the storage medium storing the design) or electronically (e.g., through the Internet), to such entities, directly or indirectly. The stored design is then converted into the appropriate format (e.g., GDSII) for fabrication.

The resulting integrated circuit chips can be distributed by the fabricator in raw wafer form (that is, as a single wafer that has multiple unpackaged chips), as a bare die, or in a packaged form. In the latter case, the chip is mounted in a single-chip package (such as a plastic carrier, with leads that are affixed to a motherboard or other higher-level carrier) or in a multichip package (such as a ceramic carrier that has either or both surface interconnections and buried interconnections). In any case, the chip is then integrated with other chips, discrete circuit elements, and/or other signal processing devices as part of either (a) an intermediate product, such as a processor board, a server platform, or a motherboard, or (b) an end product.

Several embodiments have one or more potentially advantageous effects. For instance, automatically or autonomously transforming target code to execute in a memory-constrained environment advantageously facilitates use of memory-constrained resources such as the target devices described herein. Determining iteration space blocks (also referred to as blocks or partitions) advantageously partitions the data usage by each iteration space block of the target code so that the data elements for the target code can fit within the constrained memory. Generating instructions to determine the one or more partitions advantageously facilitates use of the target code on multiple, different target devices that have different memory constraints. Transforming the data layout advantageously improves the efficiency of access to data elements and reduces memory bandwidth requirements.
Compressing the data layout advantageously improves the efficiency of access to data elements and reduces memory bandwidth requirements. Transposing the data layout advantageously improves the efficiency of access to data elements and reduces memory bandwidth requirements. Collapsing loops advantageously improves the efficiency of access to data elements and reduces memory bandwidth requirements.

Examples of Further Embodiments

The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments.

Example 1 is an apparatus to transform code. The apparatus comprises: memory comprising the code; and a logic circuit coupled with the memory, the logic circuit to identify a flag in the code to identify a task, wherein the task comprises at least one loop of a loop nest, the loop to process data elements in one or more arrays, the loop nest comprising one or more parallel loops; and the logic circuit to automatically generate instructions to determine, based on a memory constraint, one or more partitions for the at least one loop to partition data elements accessed by one or more memory access instructions for the one or more arrays within the at least one loop, the memory constraint to identify an amount of memory available to allocate for processing the task. In Example 2, the apparatus of Example 1, wherein the logic circuit is configured to determine the memory constraint based on an amount of memory available at runtime for processing the task. In Example 3, the apparatus of Example 1, wherein the logic circuit is configured to determine the memory constraint based on an estimate of the amount of memory available for processing the task. In Example 4, the apparatus of Example 1, wherein the logic circuit is configured to generate instructions to determine one or more partitions for one or more outer loops of the task, wherein the one or more outer loops comprise parallel loops.
In Example 5, the apparatus of Example 1, wherein the logic circuit is configured to determine one or more iteration space blocks for a parallel loop, each iteration space block to identify a subset of the data elements of the one or more arrays to be processed as a partition. In Example 6, the apparatus of Example 5, wherein the logic circuit is configured to determine non-overlapping subsets of the data elements for the one or more iteration space blocks. In Example 7, the apparatus of Example 5, wherein the logic circuit is configured to determine a memory window for each of the iteration space blocks, wherein the memory window comprises a portion of the amount of memory available to allocate for processing the task for a span of one iteration space block, wherein the span is the data elements in the one or more arrays that are accessed during the duration of a single iteration space block of the task. In Example 8, the apparatus of Example 1, wherein the logic circuit is configured to determine non-overlapping spans for the memory windows. In Example 9, the apparatus of Example 1, wherein the logic circuit is configured to transform array accesses. In Example 10, the apparatus of Example 1, wherein the logic circuit is configured to insert instructions to call a runtime library to compute the iteration space blocks for the at least one loop.

In Example 11, the apparatus of Example 1, wherein the logic circuit is configured to insert instructions to copy data elements from a host device before an iteration space block of the task executes and to copy data elements to the host device after the iteration space block of the task completes, wherein the iteration space block of the task comprises a duration of the task during which portions of the one or more arrays access data elements within the memory window associated with the iteration space block.
In Example 12, the apparatus of Example 11, wherein the logic circuit is configured to insert instructions to perform a data layout transformation while copying the data elements from the host device. In Example 13, the apparatus of Example 11, wherein the data layout transformation comprises data transmission compression to densely store the data elements. In Example 14, the apparatus of Example 11, wherein the data layout transformation comprises a data transposition to reduce a stride of memory accesses. In Example 15, the apparatus of Example 11, wherein the data layout transformation comprises loop collapsing to reduce the number of serial loops in the loop nest, wherein the at least one loop comprises the loop nest.

Example 16 is a method to transform code. The method comprises: identifying, by a compiler logic circuit, a flag in the code to identify a task, wherein the task comprises at least one loop of a loop nest, the loop to process data elements in one or more arrays, the loop nest comprising one or more parallel loops; and automatically generating, by the compiler logic circuit, instructions to determine, based on a memory constraint, one or more partitions for the at least one loop to partition data elements accessed by one or more memory access instructions for the one or more arrays within the at least one loop, the memory constraint to identify an amount of memory available to allocate for processing the task. In Example 17, the method of Example 16, further comprising determining the memory constraint based on an amount of memory available at runtime for processing the task. In Example 18, the method of Example 16, further comprising determining the memory constraint based on an estimate of the amount of memory available for processing the task.
In Example 19, the method of Example 16, wherein automatically generating the instructions comprises generating instructions to determine one or more partitions for one or more outer loops of the task, wherein the one or more outer loops comprise parallel loops. In Example 20, the method of Example 19, wherein automatically generating the instructions comprises determining one or more iteration space blocks for a parallel loop, each iteration space block to identify a subset of the data elements of the one or more arrays to be processed as a partition. In Example 21, the method of Example 20, wherein automatically generating the instructions comprises determining non-overlapping subsets of the data elements for the one or more iteration space blocks. In Example 22, the method of Example 20, wherein automatically generating the instructions comprises determining a memory window for an iteration space block, wherein the memory window comprises a portion of the amount of memory available to allocate for processing the task for a span of the iteration space block, wherein the span is the data elements in the one or more arrays that are accessed during the duration of a single iteration space block of the task.

In Example 23, the method of Example 22, wherein automatically generating the instructions comprises determining non-overlapping spans for the memory windows. In Example 24, the method of Example 16, wherein automatically generating the instructions comprises transforming array accesses. In Example 25, the method of Example 16, wherein automatically generating the instructions comprises inserting instructions to call a runtime library to compute the iteration space blocks for the at least one loop.
In Example 26, the method of Example 16, wherein automatically generating the instructions comprises inserting instructions to copy data elements from a host device before an iteration space block of the task executes and to copy data elements to the host device after the iteration space block of the task completes, wherein the iteration space block of the task comprises a duration of the task during which portions of the one or more arrays access data elements within the memory window associated with the iteration space block. In Example 27, the method of Example 26, wherein automatically generating the instructions comprises inserting instructions to perform a data layout transformation while copying the data elements from the host device. In Example 28, the method of Example 26, wherein the data layout transformation comprises data transfer compression to selectively copy data accessed during execution of the iteration space block of the task. In Example 29, the method of Example 26, wherein the data layout transformation comprises transposing the data elements to reduce a stride of memory accesses. In Example 30, the method of Example 26, wherein the data layout transformation comprises collapsing the at least one loop to reduce the number of serial loops in the loop nest, wherein the at least one loop comprises the loop nest.

Example 31 is a system to transform code.
The system comprises: memory comprising a dynamic random access memory and the code; and a logic circuit coupled with the memory, the logic circuit to identify a flag in the code to identify a task, wherein the task comprises at least one loop of a loop nest, the loop to process data elements in one or more arrays, the loop nest comprising one or more parallel loops; and the logic circuit to automatically generate instructions to determine, based on a memory constraint, one or more partitions for the at least one loop to partition data elements accessed by one or more memory access instructions for the one or more arrays within the at least one loop, the memory constraint to identify an amount of memory available to allocate for processing the task. In Example 32, the system of Example 31, further comprising a target device coupled with the logic circuit, the target device to execute the instructions to process the task. In Example 33, the system of Example 31, wherein the logic circuit is configured to determine the memory constraint based on an amount of memory available at runtime for processing the task. In Example 34, the system of Example 31, wherein the logic circuit is configured to determine the memory constraint based on an estimate of the amount of memory available for processing the task.

In Example 35, the system of Example 31, wherein the logic circuit is configured to determine one or more partitions for one or more outer loops of the task, wherein the one or more outer loops comprise parallel loops. In Example 36, the system of Example 35, wherein the logic circuit is configured to determine one or more iteration space blocks for a parallel loop, each iteration space block to identify a subset of the data elements of the one or more arrays to be processed as a partition. In Example 37, the system of Example 36, wherein the logic circuit is configured to determine non-overlapping subsets of the data elements for the one or more iteration space blocks.
In Example 38, the system of Example 36, wherein the logic circuit is configured to determine a memory window for each of the iteration space blocks, wherein the memory window comprises a portion of the amount of memory available to allocate for processing the task for a span of one iteration space block, wherein the span is the data elements in the one or more arrays that are accessed during the duration of a single iteration space block of the task. In Example 39, the system of Example 31, wherein the logic circuit is configured to determine non-overlapping spans for the memory windows. In Example 40, the system of Example 31, wherein the logic circuit is configured to transform array accesses. In Example 41, the system of Example 31, wherein the logic circuit is configured to insert instructions to call a runtime library to compute the iteration space blocks for the at least one loop. In Example 42, the system of Example 31, wherein the logic circuit is configured to insert instructions to copy data elements from a host device before an iteration space block of the task executes and to copy data elements to the host device after the iteration space block of the task completes, wherein the iteration space block of the task comprises a duration of the task during which portions of the one or more arrays access data elements within the memory window associated with the iteration space block. In Example 43, the system of Example 42, wherein the logic circuit is configured to insert instructions to perform a data layout transformation while copying the data elements from the host device. In Example 44, the system of Example 42, wherein the data layout transformation comprises data transfer compression to densely store the data elements. In Example 45, the system of Example 42, wherein the data layout transformation comprises a data transposition to reduce a stride of memory accesses.
In Example 46, the system of Example 42, wherein the data layout transformation comprises loop collapsing to reduce the number of serial loops in the loop nest.

Example 47 is a non-transitory machine-readable medium containing instructions that, when executed by a processor, cause the processor to perform operations comprising: identifying a flag in code to identify a task, wherein the task comprises at least one loop of a loop nest, the loop to process data elements in one or more arrays, the loop nest comprising one or more parallel loops; and automatically generating instructions to determine, based on a memory constraint, one or more partitions for the at least one loop to partition data elements accessed by one or more memory access instructions for the one or more arrays within the at least one loop, the memory constraint to identify an amount of memory available to allocate for processing the task. In Example 48, the machine-readable medium of Example 47, wherein the operations further comprise determining the memory constraint based on an amount of memory available at runtime for processing the task. In Example 49, the machine-readable medium of Example 47, wherein the operations further comprise determining the memory constraint based on an estimate of the amount of memory available for processing the task. In Example 50, the machine-readable medium of Example 47, wherein automatically generating the instructions comprises generating instructions to determine one or more partitions for one or more outer loops of the task, wherein the one or more outer loops comprise parallel loops. In Example 51, the machine-readable medium of Example 47, wherein automatically generating the instructions comprises determining one or more iteration space blocks for a parallel loop, each iteration space block to identify a subset of the data elements of the one or more arrays to be processed as a partition.
In Example 52, the machine-readable medium of Example 47, wherein automatically generating the instructions comprises determining non-overlapping subsets of the data elements for the one or more iteration space blocks. In Example 53, the machine-readable medium of Example 52, wherein automatically generating the instructions comprises determining a memory window for an iteration space block, wherein the memory window comprises a portion of the amount of memory available to allocate for processing the task for a span of the iteration space block, wherein the span is the data elements in the one or more arrays that are accessed during the duration of the iteration space block. In Example 54, the machine-readable medium of Example 53, wherein automatically generating the instructions comprises determining non-overlapping spans for the memory windows. In Example 55, the machine-readable medium of Example 47, wherein automatically generating the instructions comprises transforming array accesses. In Example 56, the machine-readable medium of Example 47, wherein automatically generating the instructions comprises inserting instructions to call a runtime library to compute the iteration space blocks for the at least one loop.

In Example 57, the machine-readable medium of Example 47, wherein automatically generating the instructions comprises inserting instructions to copy data elements from a host device before an iteration space block of the task executes and to copy data elements to the host device after the iteration space block of the task completes, wherein the iteration space block of the task comprises a duration of the task during which portions of the one or more arrays access data elements within the memory window associated with the iteration space block. In Example 58, the machine-readable medium of Example 57, wherein automatically generating the instructions comprises inserting instructions to perform a data layout transformation while copying the data elements from the host device.
In Example 59, the machine-readable medium of Example 57, wherein the data layout transformation comprises data transfer compression to selectively copy data accessed during execution of the iteration space block of the task. In Example 60, the machine-readable medium of Example 57, wherein the data layout transformation comprises transposing the data elements to reduce a stride of memory accesses. In Example 61, the machine-readable medium of Example 57, wherein the data layout transformation comprises collapsing the at least one loop to reduce the number of serial loops in the loop nest, wherein the at least one loop comprises the loop nest.

Example 62 is an apparatus to transform code. The apparatus comprises: means for identifying a flag in code to identify a task, wherein the task comprises at least one loop of a loop nest, the loop to process data elements in one or more arrays, the loop nest comprising one or more parallel loops; and means for automatically generating instructions to determine, based on a memory constraint, one or more partitions for the at least one loop to partition data elements accessed by one or more memory access instructions for the one or more arrays within the at least one loop, the memory constraint to identify an amount of memory available to allocate for processing the task. In Example 63, the apparatus of Example 62, further comprising means for determining the memory constraint based on an amount of memory available at runtime for processing the task. In Example 64, the apparatus of Example 62, further comprising means for determining the memory constraint based on an estimate of the amount of memory available for processing the task. In Example 65, the apparatus of Example 62, wherein the means for automatically generating the instructions comprises means for generating instructions to determine one or more partitions for one or more outer loops of the task, wherein the one or more outer loops comprise parallel loops.
In Example 66, the apparatus of Example 65, wherein the means for automatically generating the instructions comprises means for determining one or more iteration space blocks for a parallel loop, each iteration space block to identify a subset of the data elements of the one or more arrays to be processed as a partition. In Example 67, the apparatus of Example 66, wherein the means for automatically generating the instructions comprises means for determining non-overlapping subsets of the data elements for the one or more iteration space blocks. In Example 68, the apparatus of Example 66, wherein the means for automatically generating the instructions comprises means for determining a memory window for an iteration space block, wherein the memory window comprises a portion of the amount of memory available to allocate for processing the task for a span of one iteration space block, wherein the span is the data elements in the one or more arrays that are accessed during the duration of a single iteration space block of the task. In Example 69, the apparatus of Example 62, wherein the means for automatically generating the instructions comprises means for determining non-overlapping spans for the memory windows. In Example 70, the apparatus of Example 69, wherein the means for automatically generating the instructions comprises means for transforming array accesses.

In Example 71, the apparatus of Example 69, wherein the means for automatically generating the instructions comprises means for inserting instructions to call a runtime library to compute the iteration space blocks for the at least one loop.
In Example 72, the apparatus of Example 69, wherein the means for automatically generating the instructions comprises means for inserting instructions to copy data elements from a host device before an iteration space block of the task executes and to copy data elements to the host device after the iteration space block of the task completes, wherein the iteration space block of the task comprises a duration of the task during which portions of the one or more arrays access data elements within the memory window associated with the iteration space block. In Example 73, the apparatus of Example 72, wherein the means for automatically generating the instructions comprises means for inserting instructions to perform a data layout transformation while copying the data elements from the host device. In Example 74, the apparatus of Example 72, wherein the data layout transformation comprises data transfer compression to selectively copy data accessed during execution of the iteration space block of the task. In Example 75, the apparatus of Example 72, wherein the data layout transformation comprises transposing the data elements to reduce a stride of memory accesses. In Example 76, the apparatus of Example 72, wherein the data layout transformation comprises collapsing the at least one loop to reduce the number of serial loops in the loop nest, wherein the at least one loop comprises the loop nest.
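The iteration-space blocking and data-layout transposition described in the examples above can be illustrated with a minimal sketch. This is not the claimed implementation (the embodiments generate such logic automatically in a compiler or runtime library); the function names, the byte budget, and the per-iteration footprint here are hypothetical values chosen only for illustration.

```python
def compute_blocks(n_iterations, bytes_per_iteration, memory_budget):
    """Partition a parallel loop's iteration space so that each block's
    span (the data touched during the block) fits the memory budget.
    Returns non-overlapping (start, stop) pairs -- the "iteration space
    blocks" of the examples."""
    per_block = max(1, memory_budget // bytes_per_iteration)
    return [(start, min(start + per_block, n_iterations))
            for start in range(0, n_iterations, per_block)]

def transpose_copy(matrix):
    """Data layout transformation performed while copying from the host:
    transpose rows/columns so the innermost loop walks unit-stride data,
    reducing the stride of memory accesses."""
    rows, cols = len(matrix), len(matrix[0])
    return [[matrix[r][c] for r in range(rows)] for c in range(cols)]

# Hypothetical task: 10 iterations, 100 bytes touched per iteration,
# and a 256-byte memory window available on the target device.
blocks = compute_blocks(10, 100, 256)
# Each block covers at most 2 iterations, so each block's working set
# (at most 200 bytes) fits within the 256-byte window.
```

In a full implementation the copy-in/copy-out of each block's span (Examples 11, 26, 42, 57, 72) would bracket the execution of each `(start, stop)` range, with the transposition or compression applied during the copy.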
Some embodiments include an integrated assembly having digit lines which extend along a first direction, and which are spaced from one another by intervening regions. Each of the intervening regions has a first width along a cross-section. Pillars extend upwardly from the digit lines; and the pillars include transistor channel regions extending vertically between upper and lower source/drain regions. Storage elements are coupled with the upper source/drain regions. Wordlines extend along a second direction which crosses the first direction. The wordlines include gate regions adjacent the channel regions. Shield lines are within the intervening regions and extend along the first direction. The shield lines may be coupled with at least one reference voltage node. Some embodiments include methods of forming integrated assemblies.
CLAIMS

I/we claim:

1. An integrated assembly, comprising:

digit lines extending along a first direction; the digit lines being spaced from one another by intervening regions; each of the digit lines having a first width along a cross-section orthogonal to the first direction; each of the intervening regions also having the first width along the cross-section; each of the digit lines having a top surface at a first height;

vertically-extending pillars over the digit lines; each of the vertically-extending pillars comprising a transistor channel region and an upper source/drain region; lower source/drain regions being under the channel regions and being coupled with the digit lines; the transistor channel regions extending vertically between the lower source/drain regions and the upper source/drain regions; each of the vertically-extending pillars having the first width along the cross-section; the intervening regions extending upwardly to between the vertically-extending pillars and comprising the first width from top surfaces of the upper source/drain regions to bottom surfaces of the digit lines;

storage elements coupled with the upper source/drain regions;

wordlines extending along a second direction which crosses the first direction; the wordlines including gate regions adjacent the channel regions; and

shield lines within the intervening regions and extending along the first direction; each of the shield lines having a top surface at a second height which is greater than or equal to the first height.

2. The integrated assembly of claim 1 wherein the storage elements are capacitors.

3. The integrated assembly of claim 1 wherein the vertically-extending pillars comprise one or more semiconductor materials.

4.
The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein one of the columns is an edge column; the edge column having one of the intervening regions extending along one side, and having an edge region extending along a second side in opposing relation to said one side; the shield lines within the intervening regions being first shield lines and being configured as vertically-extending plates; one of the shield lines being within the edge region and being a second shield line; the second shield line being configured different than the first shield lines and comprising an elbow region connecting a vertically-extending region to a horizontally-extending region.

5. The integrated assembly of claim 1 wherein each of the shield lines has a second width along the cross-section; and wherein the second width is less than or equal to about one-half of the first width.

6. The integrated assembly of claim 5 wherein the second width is less than or equal to about one-third of the first width.

7. The integrated assembly of claim 1 wherein each of the lower source/drain regions has a top surface at a third height, and wherein the second height is greater than or equal to the third height.

8. The integrated assembly of claim 7 wherein each of the wordlines has a bottom surface at a fourth height, and wherein the second height is less than the fourth height.

9. The integrated assembly of claim 1 wherein the digit lines comprise first conductive material, the shield lines comprise second conductive material and the wordlines comprise third conductive material; and wherein at least one of the first, second and third conductive materials is different from at least one other of the first, second and third conductive materials.

10.
The integrated assembly of claim 1 wherein the digit lines comprise first conductive material, the shield lines comprise second conductive material and the wordlines comprise third conductive material; wherein the first, second and third conductive materials are a same composition; and wherein said same composition comprises metal.

11. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; and further comprising a metal-containing reference structure under the memory array; each of the shield lines having a bottom surface directly adjacent to an upper surface of the metal-containing reference structure.

12. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein each of the shield lines has an end along a peripheral edge of the memory array; and further comprising:

a reference structure offset from the memory array; and

interconnects extending from the ends of the shield lines to the reference structure.

13. The integrated assembly of claim 12 wherein the reference structure is a metal-containing plate.

14. The integrated assembly of claim 12 wherein the reference structure is vertically offset from the memory array.

15. The integrated assembly of claim 12 wherein the reference structure is laterally offset from the memory array.

16. The integrated assembly of claim 12 wherein at least a portion of the reference structure is laterally offset from the memory array and is also vertically offset from the memory array.

17. The integrated assembly of claim 12 wherein the memory array is within a memory deck of a vertically-stacked arrangement of decks.

18.
The integrated assembly of claim 17 wherein the vertically-stacked arrangement of decks includes a lower deck under the memory deck; the lower deck comprising control circuitry which is coupled with circuitry of the memory deck.

19. The integrated assembly of claim 18 wherein the reference structure is along the lower deck.

20. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein each of the shield lines has a first end and has a second end in opposing relation to the first end; and further comprising:

a first reference structure laterally offset from a first side of the memory array;

a second reference structure laterally offset from a second side of the memory array;

first interconnects extending from the first ends of the shield lines to the first reference structure; and

second interconnects extending from the second ends of the shield lines to the second reference structure.

21. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein each of the shield lines has a first end and has a second end in opposing relation to the first end; and further comprising:

a first reference structure laterally offset from a first side of the memory array;

a second reference structure laterally offset from a second side of the memory array;

first interconnects extending from the first ends of a first set of the shield lines to the first reference structure; and

second interconnects extending from the second ends of a second set of the shield lines to the second reference structure; the second set comprising different shield lines than the first set.

22.
The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; and further comprising:

a reference structure peripherally surrounding the memory array; and

interconnects extending from the shield lines to the reference structure.

23. The integrated assembly of claim 22 wherein the reference structure is vertically offset from the memory array.

24. A method of forming an integrated assembly, comprising:

forming a support structure comprising insulative material over a reference structure; the reference structure comprising metal and being configured as a horizontally-extending expanse;

forming a stack over the support structure; the stack comprising semiconductor material over digit line material;

patterning the stack into rails extending along a first direction; the rails being spaced from one another by first trenches; the patterning punching through the insulative material to leave an upper surface of the reference structure exposed along bottoms of the first trenches; each of the rails having a top surface, and having sidewall surfaces extending downwardly from the top surface; the patterning of the stack into the rails forming the digit line material into digit lines which extend along the first direction;

forming insulative shells that cover the top surfaces and the sidewall surfaces of the rails; the insulative shells narrowing the first trenches; the upper surface of the reference structure being exposed along bottoms of the narrowed first trenches;

forming conductive shield lines within the narrowed first trenches and directly against the exposed upper surface of the reference structure at the bottoms of the narrowed first trenches;

forming second trenches which extend along a second direction; the second direction crossing the first direction; the second trenches patterning upper
regions of the rails into pillars and not patterning lower regions of the rails; the lower regions of the rails including the digit lines;
forming wordlines within the second trenches;
doping bottom sections of the semiconductor material to form lower source/drain regions; the lower source/drain regions being coupled with the digit lines;
doping top sections of the semiconductor material to form upper source/drain regions; channel regions being vertically between the lower source/drain regions and the upper source/drain regions; the wordlines being adjacent the channel regions; and
forming storage elements coupled with the upper source/drain regions.

25. The method of claim 24 wherein the bottom sections of the semiconductor material are doped prior to forming the wordlines; and wherein the top sections of the semiconductor material are doped after forming the wordlines.

26. The method of claim 24 further comprising:
forming conductive shield material within the narrowed first trenches; the conductive shield material substantially filling the narrowed first trenches; and
reducing a height of the conductive shield material so that the conductive shield material vertically overlaps the digit lines and only lower segments of the semiconductor material of the rails; the conductive shield material having the reduced height being the conductive shield lines.

27. The method of claim 26 wherein the lower segments of the semiconductor material which are vertically-overlapped by the shield material include an entirety of the lower source/drain regions.

28. The method of claim 26 wherein the height of the conductive shield material is reduced prior to forming the wordlines.

29. The method of claim 26 wherein the height of the conductive shield material is reduced after forming the wordlines.

30. The method of claim 24 wherein the narrowed trenches have a uniform width from a top of the semiconductor material to a bottom of the digit line material.

31.
The method of claim 24 further comprising forming electrical connections from the reference structure to circuitry configured to hold the reference structure at a reference voltage.

32. A method of forming an integrated assembly, comprising:
forming a stack comprising semiconductor material over digit line material;
patterning the stack into rails extending along a first direction; the rails being spaced from one another by first trenches; the rails having top surfaces, and having sidewall surfaces extending downwardly from the top surfaces; the patterning of the stack into the rails forming the digit line material into digit lines which extend along the first direction;
forming an insulative material that covers the top surfaces and the sidewall surfaces of the rails; the insulative material narrowing the first trenches;
forming conductive shield lines within the narrowed first trenches;
forming second trenches which extend along a second direction; the second direction crossing the first direction; the second trenches patterning upper regions of the rails into pillars and not patterning lower regions of the rails; the lower regions of the rails including the digit lines;
forming wordlines within the second trenches;
doping bottom sections of the semiconductor material to form lower source/drain regions; the lower source/drain regions being coupled with the digit lines;
doping top sections of the semiconductor material to form upper source/drain regions; channel regions being vertically between the lower source/drain regions and the upper source/drain regions; the wordlines being adjacent the channel regions;
forming storage elements coupled with the upper source/drain regions;
wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein each of the conductive shield lines has a first end along a first peripheral
edge of the memory array and has a second end along a second peripheral edge of the memory array in opposing relation to the first peripheral edge of the memory array; and
electrically connecting at least one of the first and second ends of each of the conductive shield lines with a reference voltage source.

33. The method of claim 32 wherein the conductive shield lines comprise conductively-doped silicon.

34. The method of claim 32 wherein the bottom sections of the semiconductor material are doped prior to forming the wordlines; and wherein the top sections of the semiconductor material are doped after forming the wordlines.

35. The method of claim 32 further comprising:
forming conductive shield material within the narrowed first trenches; the conductive shield material substantially filling the narrowed first trenches; and
reducing a height of the conductive shield material so that the conductive shield material vertically overlaps the digit lines and only lower segments of the semiconductor material of the rails; the conductive shield material having the reduced height being the conductive shield lines.

36. The method of claim 35 wherein the lower segments of the semiconductor material which are vertically-overlapped by the shield material include an entirety of the lower source/drain regions.

37. The method of claim 35 wherein the height of the conductive shield material is reduced prior to forming the wordlines.

38. The method of claim 35 wherein the height of the conductive shield material is reduced after forming the wordlines.

39. The method of claim 32 wherein the narrowed trenches have a uniform width from a top of the semiconductor material to bottoms of the narrowed trenches.

40.
The method of claim 32 wherein the electrically connecting said at least one of the first and second ends of each of the conductive shield lines with the reference voltage source comprises electrically connecting said at least one of the first and second ends of each of the conductive shield lines with a metal-containing reference structure.

41. The method of claim 40 wherein the reference structure is a plate.

42. The method of claim 40 wherein the reference structure is vertically offset from the memory array.

43. The method of claim 40 wherein the reference structure is adjacent one of the first and second peripheral edges of the memory array, and is laterally offset from said one of the first and second peripheral edges of the memory array.

44. The method of claim 40 wherein the reference structure peripherally surrounds the memory array.

45. The method of claim 44 wherein the reference structure is vertically offset from the memory array.

46. The method of claim 32 wherein the reference voltage source is a first reference voltage source adjacent to the first peripheral edge of the memory array, and comprising:
forming electrical connections from at least some of the first ends of the conductive shield lines to the first reference voltage source; and
forming electrical connections from at least some of the second ends of the conductive shield lines to a second reference voltage source adjacent to the second peripheral edge of the memory array.

47. The method of claim 32 wherein the reference voltage source is a first reference voltage source, and comprising:
forming electrical connections from the first ends of a first set of the conductive shield lines to the first reference voltage source using first interconnects; and
forming electrical connections from the second ends of a second set of the conductive shield lines to a second reference voltage source using second interconnects; the second set comprising different conductive shield lines than the first set.
INTEGRATED ASSEMBLIES HAVING SHIELD LINES BETWEEN DIGIT LINES, AND METHODS OF FORMING INTEGRATED ASSEMBLIES

CROSS REFERENCE TO RELATED APPLICATION
This application claims priority to and the benefit of U.S. Provisional Patent Application No. 62/814,664, which was filed on March 6, 2019, the entirety of which is incorporated by reference herein.

TECHNICAL FIELD
Integrated assemblies having shield lines between digit lines, and methods of forming integrated assemblies.

BACKGROUND
Memory is one type of integrated circuitry, and is used in computer systems for storing data. An example memory is DRAM (dynamic random-access memory). DRAM cells may each comprise a transistor in combination with a capacitor. The DRAM cells may be arranged in an array; with wordlines extending along rows of the array, and with digit lines extending along columns of the array. The wordlines may be coupled with the transistors of the memory cells. Each memory cell may be uniquely addressed through a combination of one of the wordlines with one of the digit lines.

A problem which may be encountered in conventional memory architectures is that capacitive coupling (i.e., parasitic capacitance) may occur between adjacent digit lines, leading to disturbance along inactive digit lines when their neighbors are activated. The capacitive coupling becomes increasingly problematic as memory architectures are scaled to increasing levels of integration. It would be desirable to alleviate or prevent such capacitive coupling.

It is also desirable to develop new methods for fabricating highly-integrated memory (e.g., DRAM), and to develop new architectures fabricated with such methods.

BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1-1C are diagrammatic views of a region of an example construction at an example initial process stage of an example method of forming an example integrated assembly. FIGS. 1A, 1B and 1C are diagrammatic cross-sectional views along the lines A-A, B-B and C-C of FIG.
1, respectively.

FIGS. 2-2C are diagrammatic views of the region of the example construction of FIGS. 1-1C at an example processing stage subsequent to that of FIGS. 1-1C. FIG. 2A is a diagrammatic cross-sectional view along the line A-A of FIG. 2. FIGS. 2B and 2C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 2 and 2A.

FIGS. 3-3C are diagrammatic views of the region of the example construction of FIGS. 1-1C at an example processing stage subsequent to that of FIGS. 2-2C. FIG. 3A is a diagrammatic cross-sectional view along the line A-A of FIG. 3. FIGS. 3B and 3C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 3 and 3A.

FIGS. 4-4C are diagrammatic views of the region of the example construction of FIGS. 1-1C at an example processing stage subsequent to that of FIGS. 3-3C. FIG. 4A is a diagrammatic cross-sectional view along the line A-A of FIG. 4. FIGS. 4B and 4C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 4 and 4A.

FIGS. 5-5C are diagrammatic views of the region of the example construction of FIGS. 1-1C at an example processing stage subsequent to that of FIGS. 4-4C. FIG. 5A is a diagrammatic cross-sectional view along the line A-A of FIG. 5. FIGS. 5B and 5C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 5 and 5A.

FIGS. 6-6C are diagrammatic views of the region of the example construction of FIGS. 1-1C at an example processing stage subsequent to that of FIGS. 5-5C. FIG. 6A is a diagrammatic cross-sectional view along the line A-A of FIG. 6. FIGS. 6B and 6C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 6 and 6A.

FIGS. 7-7C are diagrammatic views of the region of the example construction of FIGS. 1-1C at an example processing stage subsequent to that of FIGS. 6-6C. FIG.
7A is a diagrammatic cross-sectional view along the line A-A of FIG. 7. FIGS. 7B and 7C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 7 and 7A.

FIGS. 8-8C are diagrammatic views of the region of the example construction of FIGS. 1-1C at an example processing stage subsequent to that of FIGS. 7-7C. FIG. 8A is a diagrammatic cross-sectional view along the line A-A of FIG. 8. FIGS. 8B and 8C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 8 and 8A.

FIGS. 9-9C are diagrammatic views of the region of the example construction of FIGS. 1-1C at an example processing stage subsequent to that of FIGS. 8-8C. FIG. 9A is a diagrammatic cross-sectional view along the line A-A of FIG. 9. FIGS. 9B and 9C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 9 and 9A.

FIG. 10 is a diagrammatic view of the region of the example construction of FIG. 9A at an example processing stage subsequent to that of FIG. 9A. FIG. 10 is a view along the same cross-section as FIG. 9A.

FIG. 11 is a diagrammatic schematic view of a region of an example memory array.

FIGS. 12-12B are diagrammatic top views of regions of example integrated assemblies.

FIGS. 12C and 12D are diagrammatic cross-sectional side views along the line C-C of FIG. 12B, and illustrate a pair of example integrated assemblies.

FIG. 12E is a diagrammatic cross-sectional side view illustrating another example integrated assembly.

FIG. 13 is a diagrammatic view of the region of the example construction of FIG. 6A at an example processing stage subsequent to that of FIG. 6A, and alternative to the construction shown in FIG. 7A. FIG. 13 is a view along the same cross-section as FIGS. 6A and 7A.

FIGS. 14-14C are diagrammatic views of a region of an example construction at an example initial process stage of an example method of forming an example integrated assembly. FIGS.
14A, 14B and 14C are diagrammatic cross-sectional views along the lines A-A, B-B and C-C of FIG. 14, respectively.

FIGS. 15-15C are diagrammatic views of the region of the example construction of FIGS. 14-14C at an example processing stage subsequent to that of FIGS. 14-14C. FIG. 15A is a diagrammatic cross-sectional view along the line A-A of FIG. 15. FIGS. 15B and 15C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 15 and 15A.

FIGS. 16-16C are diagrammatic views of the region of the example construction of FIGS. 14-14C at an example processing stage subsequent to that of FIGS. 15-15C. FIG. 16A is a diagrammatic cross-sectional view along the line A-A of FIG. 16. FIGS. 16B and 16C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 16 and 16A.

FIGS. 17-17C are diagrammatic views of the region of the example construction of FIGS. 14-14C at an example processing stage subsequent to that of FIGS. 16-16C. FIG. 17A is a diagrammatic cross-sectional view along the line A-A of FIG. 17. FIGS. 17B and 17C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 17 and 17A.

FIGS. 18-18C are diagrammatic views of the region of the example construction of FIGS. 14-14C at an example processing stage subsequent to that of FIGS. 17-17C. FIG. 18A is a diagrammatic cross-sectional view along the line A-A of FIG. 18. FIGS. 18B and 18C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 18 and 18A.

FIGS. 19-19C are diagrammatic views of the region of the example construction of FIGS. 14-14C at an example processing stage subsequent to that of FIGS. 18-18C. FIG. 19A is a diagrammatic cross-sectional view along the line A-A of FIG. 19. FIGS. 19B and 19C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 19 and 19A.

FIGS.
20-20C are diagrammatic views of the region of the example construction of FIGS. 14-14C at an example processing stage subsequent to that of FIGS. 19-19C. FIG. 20A is a diagrammatic cross-sectional view along the line A-A of FIG. 20. FIGS. 20B and 20C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 20 and 20A.

FIGS. 21-21C are diagrammatic views of the region of the example construction of FIGS. 14-14C at an example processing stage subsequent to that of FIGS. 20-20C. FIG. 21A is a diagrammatic cross-sectional view along the line A-A of FIG. 21. FIGS. 21B and 21C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 21 and 21A.

FIGS. 22-22C are diagrammatic views of the region of the example construction of FIGS. 14-14C at an example processing stage subsequent to that of FIGS. 21-21C. FIG. 22A is a diagrammatic cross-sectional view along the line A-A of FIG. 22. FIGS. 22B and 22C are diagrammatic cross-sectional views along the lines B-B and C-C, respectively, of FIGS. 22 and 22A.

FIG. 23 is a diagrammatic view of the region of the example construction of FIG. 22B at an example processing stage subsequent to that of FIG. 22B. FIG. 23 is a view along the same cross-section as FIG. 22B.

FIG. 24 is a diagrammatic cross-sectional side view of a region of an example assembly comprising stacked tiers.

DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
Some embodiments include memory architectures (e.g., DRAM) having shield lines provided between digit lines. The shield lines may be coupled with a reference voltage (e.g., ground, Vcc/2, etc.) so that they are not electrically floating. The shield lines may alleviate capacitive coupling between neighboring digit lines. Some embodiments include methods of fabricating memory architectures. Example embodiments are described with reference to FIGS. 1-24.

Referring to FIGS.
1-1C, an integrated assembly (construction) 10 includes a base 12. The base 12 comprises semiconductor material 18; and such semiconductor material may, for example, comprise, consist essentially of, or consist of monocrystalline silicon. The base 12 may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrates described above. In some applications, the base 12 may correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Such materials may include, for example, one or more of refractory metal materials, barrier materials, diffusion materials, insulator materials, etc.

A support structure 14 is over the base 12. The support structure includes insulative material 16 over the semiconductor material 18. A gap is provided between the support structure 14 and the base 12 to indicate that there may be intervening materials, components, etc., between the support structure 14 and the base 12. In some embodiments, the gap may be omitted.

The insulative material 16 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

A stack 20 is formed over the support structure 14.
The stack 20 includes semiconductor material 22 over digit line material 24.

The digit line material 24 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the digit line material may be a metal-containing material comprising one or more of tungsten, titanium, titanium nitride, tungsten nitride, etc.

The digit line material 24 has a bottom surface 23 directly against the insulative material 16, and has a top surface 25 in opposing relation to the bottom surface 23.

The semiconductor material 22 may comprise any suitable semiconductor composition(s); and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon, germanium, III/V semiconductor material (e.g., gallium phosphide), semiconductor oxide, etc.; with the term III/V semiconductor material referring to semiconductor materials comprising elements selected from groups III and V of the periodic table (with groups III and V being old nomenclature, and now being referred to as groups 13 and 15). In some embodiments, the semiconductor material 22 may comprise, consist essentially of, or consist of silicon (e.g., monocrystalline silicon, polycrystalline silicon, etc.).

A bottom section 26 of the semiconductor material 22 is conductively-doped and is ultimately incorporated into source/drain regions of transistors (with example transistors being described below). The bottom section 26 may be n-type doped or p-type doped depending on whether the transistors are to be n-channel devices or p-channel devices.
In the shown embodiment, the bottom section 26 is directly against the top surface 25 of the digit line material 24, and accordingly is electrically coupled with the digit line material 24. An approximate upper boundary of the bottom section 26 is diagrammatically illustrated with a dashed line 27. The semiconductor material 22 has a bottom surface 19 directly against the top surface 25 of the digit line material 24, and has a top surface 21 in opposing relation to the bottom surface 19.

A protective capping material 28 is formed over the stack 20, and is directly against the top surface 21 of the semiconductor material 22. The capping material 28 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon nitride.

Referring to FIGS. 2-2C, the stack 20 is patterned into rails 30 which extend laterally along a first direction (i.e., a y-axis direction, with the y-axis being shown in FIGS. 2, 2B and 2C). The rails are spaced from one another by trenches 32. The trenches 32 may be referred to as first trenches to distinguish them from other trenches formed at subsequent process stages.

The rails 30 extend vertically along a z-axis direction, with the z-axis being shown in FIGS. 2A-2C. Each of the rails has a top surface corresponding to the top surface 21 of the semiconductor material 22, and has a bottom surface corresponding to the bottom surface 23 of the digit line material 24. Each of the rails has sidewall surfaces 33 extending from the top surfaces 21 to the bottom surfaces 23. The individual rails are capped by caps of the protective capping material 28.

The patterned digit line material 24 within the rails 30 is configured as digit lines 34; with such digit lines extending laterally along the first direction (i.e., the y-axis direction).

The rails 30 may be formed with any suitable processing.
For instance, in some embodiments a patterned mask (e.g., a photolithographically-patterned photoresist mask) may be provided to define locations of the rails 30 and the trenches 32; one or more etches may be utilized to transfer a pattern from the patterned mask into materials under the mask to thereby form the rails 30 and trenches 32; and then the mask may be removed to leave the construction of FIGS. 2-2C.

Each of the digit lines 34 has a width W along the cross-section of FIG. 2A. Such width may be referred to as a first width. The cross-section of FIG. 2A is orthogonal to the first direction of the y-axis, and extends along an x-axis. The orthogonal relationship of the x and y axes is shown in FIG. 2.

Each of the digit lines 34 has a height H from the top of the insulative material 16 to the upper surface 25. In some embodiments, such height may be referred to as a first height.

The trenches 32 may be considered to include intervening regions 36 between the digit lines 34. In the shown embodiment, such intervening regions also have the first width W along the cross-section of FIG. 2A. In the shown embodiment, each of the trenches has a uniform width W from the bottom surfaces 23 of the digit lines 34 to the top surfaces 21 of the rails 30, and even to the top surfaces of the capping material 28. In other embodiments, the widths of the intervening regions 36 may be different than the widths of the digit lines, but the trenches may still be of uniform width from the bottom surfaces of the digit lines to the top surfaces of the rails.

FIGS. 2 and 2A show an edge region 38 along one side of the patterned rails 30. In some embodiments, the rails 30 are patterned into components of a memory array, and accordingly are within a memory array region 40. In such embodiments, the edge region 38 may be utilized to illustrate processing along a peripheral edge of the memory array region 40.

Referring to FIGS.
3-3C, insulative material 42 is formed to cover the top surfaces 21 and sidewall surfaces 33 of the rails 30. The insulative material 42 narrows the trenches 32.

The insulative material 42 may comprise any suitable composition(s); and in some embodiments may comprise silicon dioxide (e.g., silicon dioxide deposited utilizing tetraethylorthosilicate, TEOS), porous silicon oxide, carbon-doped silicon dioxide, etc. The insulative material 42 may be formed with any suitable processing, including, for example, atomic layer deposition, chemical vapor deposition, etc.

The narrowed trenches 32 have a uniform width W1 from the top surfaces 21 of the semiconductor material 22 to bottom surfaces 31 of the trenches 32. In some embodiments, the width W1 may be referred to as a second width to distinguish it from the first width W of the digit lines 34 and intervening regions 36. In some embodiments, the second width W1 may be less than or equal to about one-half of the first width W, less than or equal to about one-third of the first width W, etc.

Referring to FIGS. 4-4C, conductive shield material 44 is formed within the narrowed trenches 32. The conductive shield material 44 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon (e.g., polycrystalline silicon), conductively-doped germanium, etc.). In some embodiments, the conductive shield material 44 may be referred to as a second conductive material to distinguish it from the first conductive material 24 utilized as the digit line material. The shield material 44 may comprise a same composition as the digit line material 24 in some embodiments, or may comprise a different composition than the digit line material 24.
In some embodiments, the shield material 44 may comprise one or more metals and/or metal-containing materials; and may, for example, comprise one or more of titanium nitride, tantalum nitride, tungsten, tantalum, ruthenium, etc.

In the illustrated embodiment, the conductive shield material 44 fills the narrowed trenches 32. In some embodiments, the shield material 44 may be considered to substantially fill the narrowed trenches 32; with the term "substantially fill" meaning that the shield material 44 fills the trenches to at least a level of the top surfaces 21 of the semiconductor material 22 within the rails 30.

Referring to FIGS. 5-5C, an optional chop-cut is utilized to punch through the shield material 44 along the edge region 38 and thereby form a recessed region 46. The shield material 44 adjacent the recessed region 46 may be considered to include a horizontally-extending ledge region 48.

Referring to FIGS. 6-6C, additional insulative material 42 is formed over the shield material 44 and within the recessed region 46. The additional insulative material 42 may comprise any suitable composition(s); and in some embodiments may comprise silicon dioxide. The silicon dioxide may be formed with a spin-on-dielectric (SOD) process. In the shown embodiment, a planarized upper surface 51 extends across the materials 44 and 42. Such planarized upper surface may be formed with any suitable processing; such as, for example, chemical-mechanical processing (CMP).

Referring to FIGS. 7-7C, second trenches 52 are formed to extend along a second direction (i.e., the x-axis direction). The second direction of the second trenches 52 crosses the first direction (i.e., the y-axis direction); and accordingly crosses the direction of the first trenches 32 (shown in FIGS. 2-2C).
In the shown embodiment, the second direction of the second trenches 52 is substantially orthogonal to the first direction of the first trenches 32.

The second trenches 52 pattern upper regions 54 of the rails 30, and do not pattern lower regions 56 of the rails (as shown in FIG. 7B); and the digit lines 34 remain within the unpatterned lower regions 56 of the rails. The second trenches 52 also extend into the conductive shield material 44 (as shown in FIG. 7C).

The patterned upper regions 54 include vertically-extending pillars 58 of the semiconductor material 22, with such pillars being over the digit lines 34.

The pillars 58 have the sidewall surfaces 33 patterned with the first trenches 32 (with such sidewall surfaces 33 being described above with reference to FIGS. 2-2C). The sidewall surfaces 33 are indicated diagrammatically with dashed lines in the top view of FIG. 7.

Referring to FIGS. 8-8C, wordlines 60 are formed within the second trenches 52. The wordlines comprise conductive wordline material 62. The conductive wordline material 62 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the conductive wordline material 62 may be considered to be a third conductive material so that it may be distinguished from the second conductive material 44 of the shield lines and the first conductive material 24 of the digit lines.
The first, second and third conductive materials may be the same composition as one another; and in some embodiments will comprise a same metal-containing composition (e.g., a composition comprising one or more of tungsten, titanium, tantalum, ruthenium, tungsten nitride, tantalum nitride, titanium nitride, etc.). Alternatively, at least one of the first, second and third conductive materials may be a different composition relative to at least one other of the first, second and third conductive materials.

In the shown embodiment, insulative material 64 is provided within the second trenches 52, and the wordlines 60 are embedded within such insulative material. The insulative material 64 may comprise any suitable composition(s); and in some embodiments may comprise one or both of silicon dioxide and silicon nitride.

Regions of the insulative material 64 between the wordlines 60 and the semiconductor material 22 correspond to gate dielectric material (or gate insulative material) 63. The gate dielectric material may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

The wordlines 60 are diagrammatically illustrated in the top view of FIG. 8 to assist the reader in understanding the orientation of the wordlines relative to the other structures within the assembly 10.

In the illustrated embodiment, the wordlines 60 are shown to correspond to wordlines WL1, WL2 and WL3. Such wordlines are examples of wordlines that may extend along the rows of a memory array. Also, the digit lines 34 are indicated to correspond to digit lines DL1, DL2, DL3 and DL4. Such digit lines are examples of digit lines that may extend along the columns of the memory array.

Referring to FIGS. 9-9C, the shield material 44 is recessed (i.e., reduced in height) to form conductive shield lines 66; with the conductive shield lines extending along the first direction of the y-axis.
In the shown embodiment, the conductive shield lines vertically overlap upper segments (regions) 68 of the digit lines (e.g., DL1) and lower segments (regions) 70 of the semiconductor material 22. In some embodiments, the lower segments 70 may correspond to segments along the unpatterned portions 56 of the rails 30 (shown in FIG. 7B). In some embodiments, the lower regions 70 may include the entirety of the doped bottom segment 26 of the semiconductor material 22. In some embodiments, the digit lines (e.g., DL4) may be considered to extend to the first height H above the upper surface of the insulative material 16, and the shield lines 66 may be considered to comprise top surfaces 67 which are at a second height H1 above the upper surface of the insulative material 16. The second height H1 may be greater than or equal to the first height H. The doped regions 26 may be considered to extend to a third height H2, and the second height H1 may also be greater than or equal to the third height H2. Additionally, each of the wordlines (e.g., WL3) may be considered to have a bottom surface at a fourth height H3 (shown in FIG. 9C), and the second height H1 (FIG. 9A) may be less than the fourth height H3.

Notably, the shield line 66 within the edge region 38 has a different configuration than the shield lines 66 within the intervening regions 36. Specifically, the shield lines 66 within the intervening regions 36 are configured as vertically-extending plates, whereas the shield line 66 within the edge region 38 is configured as an angle plate. Specifically, the shield line 66 within the edge region 38 has a vertically-extending region 72, a horizontally-extending region 74, and an elbow region 73 connecting the vertically-extending region with the horizontally-extending region. In some embodiments, the digit line DL1 may be considered to be an edge digit line along the edge of a memory array, and to define an edge column 76.
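The height relationships recited above (shield-line tops at or above the digit-line tops and the doped bottom segments, but below the wordline bottoms) can be summarized as a simple check. The following Python sketch is purely illustrative; the function name and numeric heights are hypothetical, not part of the disclosure:

```python
# Illustrative sketch only; the function and the values are hypothetical.
# Relationships described above: the shield-line top surfaces (height H1)
# are at or above the digit-line tops (H) and the doped bottom segments (H2),
# but below the wordline bottom surfaces (H3).
def shield_heights_consistent(h, h1, h2, h3):
    return h1 >= h and h1 >= h2 and h1 < h3

# Hypothetical heights in arbitrary units above the insulative material 16.
assert shield_heights_consistent(h=30, h1=35, h2=20, h3=50)
assert not shield_heights_consistent(h=30, h1=25, h2=20, h3=50)  # shield top below digit-line top
```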
The edge column 76 has an intervening region 36 on one side, and has the edge region 38 on another side in opposing relation to said one side. The shield line 66 having the angle-plate configuration extends along the edge column 76. The shield lines 66 within the intervening regions 36 have horizontal widths corresponding to the width W1 described above with reference to FIG. 3A.

Insulative material 42 is formed over the recessed shield lines 66. Construction 10 is subjected to planarization (e.g., CMP) to form a planarized upper surface 65 extending across the insulative materials 42 and 64, and across the semiconductor material 22.

Top sections 78 of the semiconductor material pillars 58 are doped. The top sections 78 may be doped with the same type dopant as is utilized in the bottom section 26. Approximate lower boundaries of the doped sections 78 are diagrammatically illustrated with dashed lines 79. The doped top sections 78 form upper source/drain regions 80 of transistors 86, and the doped bottom sections 26 form lower source/drain regions 82 of the transistors. Transistor channel regions 84 are within the semiconductor pillars 58 and extend vertically between the lower source/drain regions 82 and the upper source/drain regions 80. The channel regions may be intrinsically doped, or lightly doped, to achieve a desired threshold voltage. The wordlines (e.g., WL3) are adjacent to the channel regions 84, and are spaced from the channel regions by the gate dielectric material 63. The wordlines comprise gates of the transistors 86 and may be utilized to gatedly couple the source/drain regions 80 and 82 of individual transistors to one another through the channel regions 84. FIG. 9B shows gates 88 along the wordlines 60, with such gates corresponding to regions of the wordlines adjacent the channel regions 84. In some embodiments, the gates 88 may be considered to correspond to gate regions of the wordlines 60.

In the embodiment of FIGS.
1-9, the bottom sections 26 of the semiconductor material 22 are doped prior to forming the wordlines 60 (specifically, are shown to be doped at the processing stage of FIG. 1), and the top sections 78 of the semiconductor material 22 are doped after forming the wordlines 60 (specifically, are doped at the processing stage of FIG. 9). In other embodiments the top and bottom sections 26 and 78 may be doped at other process stages. For instance, both the top and bottom sections 26 and 78 may be doped at the process stage of FIG. 1.

The shield lines 66 may be utilized to alleviate, and even prevent, undesired parasitic capacitance between adjacent digit lines (e.g., parasitic capacitance between the digit lines DL1 and DL2). The shield lines 66 are shown to be coupled with a reference structure 90 (i.e., a reference voltage source, reference voltage node, etc.), which in turn is coupled with circuitry 92 configured to provide a reference voltage to the reference structure; and in some embodiments configured to hold the reference structure 90 at the reference voltage. The reference voltage is thus provided to the shield lines 66. The reference voltage may be any suitable reference voltage; and in some embodiments may be ground, Vcc/2, etc. It may be advantageous to hold the shield lines at a reference voltage, rather than enabling the shield lines to electrically float, in that such may enable the shield lines to better alleviate undesired parasitic capacitance between adjacent digit lines. The reference structure 90 may be a conductive plate (e.g., a metal-containing plate), or any other suitable conductive structure.
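The benefit of shielding can be pictured with a toy crosstalk model: the charge coupled onto a neighboring digit line scales with the direct line-to-line capacitance, and a shield held at a fixed reference voltage between the lines reduces that direct capacitance. The sketch below is an assumption-laden illustration (the capacitance values and the degree of reduction are hypothetical, chosen only to show the trend), not a model from the disclosure:

```python
# Toy model (hypothetical values): charge coupled onto a victim digit line
# through direct line-to-line capacitance, Q = C * V. A shield line held at a
# reference voltage between the lines intercepts field lines, lowering the
# direct coupling capacitance and hence the coupled charge.
def coupled_charge(v_aggressor, c_line_to_line):
    return c_line_to_line * v_aggressor

q_unshielded = coupled_charge(1.0, 2.0e-18)  # assumed 2 aF direct coupling, no shield
q_shielded = coupled_charge(1.0, 0.1e-18)    # assumed residual 0.1 aF with shield line
assert q_shielded < q_unshielded
```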
In some embodiments, the reference structure 90 may be omitted and the shield lines 66 may be simply coupled to circuitry configured to induce a desired reference voltage along the shield lines. The intervening regions 36 comprise the first width W from the bottom surfaces 23 of the digit lines 34 to top surfaces 81 of the upper source/drain regions 80.

Referring to FIG. 10, storage elements 94 are formed to be conductively coupled with the upper source/drain regions 80. The storage elements may be any suitable devices having at least two detectable states; and in some embodiments may be, for example, capacitors, resistive-memory devices, conductive-bridging devices, phase-change-memory (PCM) devices, programmable metallization cells (PMCs), etc. In the shown embodiment, the storage elements 94 are capacitors. Each capacitor has a node coupled with a reference voltage 96. Such reference voltage may be any suitable reference voltage, and may be the same as the reference voltage utilized at the shield lines 66, or may be different from such reference voltage. In some embodiments, the reference voltage 96 may be ground or Vcc/2.

The storage elements 94 and transistors 86 may be incorporated into memory cells 100 of a memory array 98. In some embodiments, the transistors 86 may be referred to as access transistors of the memory cells. FIG. 11 schematically illustrates a portion of the memory array 98, and shows such memory array comprising digit lines DL1, DL2 and DL3, together with the wordlines WL1, WL2 and WL3. Each of the memory cells 100 within the memory array is uniquely addressed through a combination of one of the wordlines and one of the digit lines. The memory array may include any suitable number of memory cells 100; and in some embodiments may comprise hundreds, millions, tens of millions, etc., of memory cells.

The reference structure 90 of FIG. 10 may be placed in any suitable location relative to the memory array 98. FIGS.
12-12E show example arrangements of the memory array 98 and the reference structure 90. Each of FIGS. 12-12E shows the memory array 98 (labeled MEMORY ARRAY) diagrammatically illustrated as a square or other suitable polygon. FIGS. 12-12B diagrammatically illustrate the conductive shield lines 66 with dashed lines crossing the memory array.

The memory array 98 of FIGS. 12-12B may be considered to have a peripheral boundary 102, and to have peripheral edges 101, 103, 105 and 107 along the peripheral boundary. In some embodiments, the edges 101 and 103 may be referred to as first and second peripheral edges of the memory array, and may be considered to be in opposing relation relative to one another. Each of the shield lines 66 has a first end 109 along the first peripheral edge 101, and has a second end 111 along the second peripheral edge 103. The first and second ends 109 and 111 may be considered to be in opposing relation to one another. FIG. 12 shows an embodiment in which the first ends 109 of the shield lines 66 are electrically coupled with the reference structure 90 (labeled REF in FIG. 12) through interconnects 104.

FIG. 12A shows an embodiment in which a first reference structure 90a (REF 1) is provided adjacent the first peripheral edge 101 of the memory array 98, and a second reference structure 90b (REF 2) is provided adjacent the second peripheral edge 103 of the memory array. In the illustrated embodiment, the first reference structure 90a is laterally offset from the first peripheral edge 101, and the second reference structure 90b is laterally offset from the second peripheral edge 103. The reference structures 90a and 90b are both coupled to common circuitry 92 configured to provide desired reference voltages on the structures 90a and 90b (i.e., the reference voltage nodes 90a and 90b, the reference voltage sources 90a and 90b, etc.). The shield lines 66 are divided amongst a first set 66a and a second set 66b.
The first set has the first ends 109 coupled with the first reference structure 90a through first interconnects 104a, and the second set has the second ends 111 coupled with the second reference structure 90b through second interconnects 104b.

The use of two reference structures 90a and 90b in the embodiment of FIG. 12A may enable the connections between the reference structures and the shield lines 66 to be better spread than can be accomplished with the single reference structure of FIG. 12. Such may simplify the formation of the connections between the shield lines and the reference structures, and may enable desired spacing between adjacent interconnects to avoid parasitic capacitance between neighboring interconnects.

FIG. 12B shows an embodiment in which the reference structure 90 (REF) peripherally surrounds the memory array 98. Such may enable the connections to the shield lines to be spread uniformly around the memory array, which may further alleviate parasitic capacitance between neighboring interconnects 104.

The reference structures may be provided along a same plane as the memory array, or may be vertically offset relative to the memory array. For instance, FIGS. 12C and 12D show cross-sections along the line C-C of FIG. 12B illustrating example embodiments in which the reference structure 90 is along a same horizontal plane as the memory array 98 (FIG. 12C), or is vertically offset relative to the memory array 98 (FIG. 12D).

FIG. 12E shows another embodiment in which a reference structure 90 is vertically offset from a memory array 98; but in the embodiment of FIG. 12E the reference structure is not laterally offset relative to the memory array, and is instead directly under the memory array.

The embodiment of FIGS. 1-10 reduces the height of the conductive shield material 44 after forming the wordlines 60. Specifically, the wordlines 60 are formed at the processing stage of FIG.
8, and the height of the shield material is reduced at the processing stage of FIG. 9 in order to form the conductive shield lines 66. In other embodiments, the height of the conductive shield material may be reduced prior to forming the wordlines. For instance, FIG. 13 shows construction 10 at a process stage alternative to that of FIG. 7A, and shows the shield line material 44 reduced in height to form the conductive shield lines 66. The construction 10 of FIG. 13 may be subsequently processed with methodology analogous to that of FIGS. 8-10 to form the memory array 98 described with reference to FIG. 10.

The processing of FIGS. 1-10 utilizes interconnects extending from the ends of the shield lines 66 to couple the shield lines with one or more reference structures. In other embodiments, a reference structure may be provided under the shield lines and directly against bottom surfaces of the shield lines. FIGS. 14-23 illustrate an example embodiment in which shield lines are formed to have bottom surfaces directly against a reference structure.

Referring to FIGS. 14-14C, an integrated assembly (construction) 10a includes a support structure 14a over the base 12. The support structure includes the insulative material 16 and the semiconductor material 18, and further includes a reference structure 90 between the materials 16 and 18. The reference structure 90 comprises conductive material 120. The conductive material 120 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.).
In some embodiments, the reference structure 90 comprises metal-containing material ; such as, for example, one or more of titanium, tantalu m, titanium nitride, tantalum nitride, rutheniu m, tu ngsten, etc. In the shown embodiment, the reference structure may be considered to be configured as a horizontally-extending expanse.The stack 20 is formed over the support structure 14a. The stack 20 includes the semiconductor material 22 over the digit line material 24. The bottom section 26 of the semiconductor material 22 is conductively-doped. The protective capping material 28 is over the stack 20.The reference structure 90 is shown to be coupled with the circuitry 92 configu red to hold the reference structu re at a desired voltage (e.g., ground, Vcc/2, etc.). Although such coupling of the reference structu re 90 to the circuitry 92 is shown at the process stage of FIGS. 14-14C, in other embodiments the coupling may be provided at a later process stage.Referring to FIGS. 15-1 5C, the stack 20 is patterned into rails 30 which extend laterally along the first direction (y-axis direction). The rails are spaced from one another by the first trenches 32. The rails 30 extend vertically along the z-axis direction. Each of the rails has a top surface corresponding to the top su rface 21 of the semiconductor material 22, and has the sidewall surfaces 33.The patterning of the rails 30 punches through the insulative material 1 6 to expose an upper surface 121 of the reference structu re 90 along the bottoms of the trenches 32. The patterned digit line material 24 within the rails 30 is configu red as the digit lines 34; which are labeled as digit lines DL1 - DL4.The rails 30 may be formed with any suitable processing, including, for example, process analogous to that described above with reference to FIGS. 2-2C.The digit lines 34 have the first width W along the cross-section of FIG. 
15A, and extend to the first height H. The trenches 32 include the intervening regions 36 between the digit lines 34, and such intervening regions also have the first width W. In the shown embodiment, each of the trenches has a uniform width W from the top surface 121 of the reference structure 90 to top surfaces of the capping material 28.

The edge region 38 is shown along one side of the patterned rails 30. The edge region of the embodiment of FIGS. 15-15C is analogous to the edge region described above relative to the embodiment of FIGS. 2-2C.

Referring to FIGS. 16-16C, insulative material 42 is formed over the rails 30, and is patterned into insulative shells 122. The insulative shells cover the top surfaces 21 of the rails and the sidewall surfaces 33 of the rails. The insulative shells 122 narrow the trenches 32, and the upper surface 121 of the reference structure 90 is exposed along bottoms of the narrowed trenches. The narrowed trenches 32 have the uniform second width W1 from the upper surface 121 of the reference structure 90 to the top surfaces 21 of the semiconductor material 22. In some embodiments, the second width W1 may be less than or equal to about one-half of the first width W, less than or equal to about one-third of the first width W, etc.

Referring to FIGS. 17-17C, the conductive shield material 44 is formed within the narrowed trenches 32 and directly against the exposed upper surface 121 of the reference structure 90 at the bottoms of the narrowed trenches. In the illustrated embodiment, the conductive shield material fills the narrowed trenches 32. In some embodiments, the shield material 44 may be considered to substantially fill the narrowed trenches 32; with the term "substantially fill" meaning that the shield material 44 fills the trenches to at least a level of the top surfaces 21 of the semiconductor material 22 within the rails 30.

Referring to FIGS.
18-18C, the shield material 44 is recessed (i.e., reduced in height) to form the conductive shield lines 66; with the conductive shield lines extending along the first direction of the y-axis. In the shown embodiment, the conductive shield lines vertically overlap the entire height of the digit lines (e.g., DL1), and vertically overlap lower segments 70 of the semiconductor material 22. In some embodiments, the digit lines (e.g., DL4) may be considered to extend to the first height H above the reference structure 90, and the shield lines 66 may be considered to comprise top surfaces 67 which are at the second height H1 above the reference structure. The second height H1 may be greater than or equal to the first height H. The doped regions 26 may be considered to extend to the third height H2, and the second height H1 may also be greater than or equal to the third height H2. The shield lines 66 within the intervening regions 36 have horizontal widths corresponding to the width W1 described above with reference to FIG. 16A.

Referring to FIGS. 19-19C, additional insulative material 50 is formed over the conductive shield lines 66. The additional insulative material 50 may comprise any suitable composition(s); and in some embodiments may comprise silicon dioxide. The silicon dioxide may be formed with a spin-on-dielectric (SOD) process. The additional insulative material 50 may comprise a same composition as the insulative material 42, or may be a different composition than the insulative material 42.

Referring to FIGS. 20-20C, the second trenches 52 are formed to extend along the second direction (i.e., the x-axis direction). The second trenches 52 pattern upper regions 54 of the rails 30, and do not pattern lower regions 56 of the rails (as shown in FIG. 20B); and the digit lines (e.g., DL2) remain within the unpatterned lower regions 56 of the rails.
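The narrowed-trench width relationship described with reference to FIGS. 16-16C (the second width W1 being less than or equal to about one-half, or about one-third, of the first width W) can be expressed as a simple predicate. The Python fragment below is illustrative only, with hypothetical dimensions:

```python
# Illustrative only; dimensions are hypothetical.
def narrowed_width_ok(w, w1, fraction):
    """True when the narrowed-trench width w1 is at most `fraction` of the
    first width w (e.g., fraction = 1/2 or 1/3 per the embodiments above)."""
    return w1 <= fraction * w

W, W1 = 60.0, 20.0  # hypothetical widths, e.g. in nanometers
assert narrowed_width_ok(W, W1, 0.5)      # W1 <= W/2
assert narrowed_width_ok(W, W1, 1.0 / 3)  # W1 <= W/3
```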
The patterned upper regions 54 include vertically-extending pillars 58 of the semiconductor material 22, with such pillars being over the digit lines 34.

Referring to FIGS. 21-21C, the wordlines 60 are formed within the second trenches 52. The wordlines comprise the conductive wordline material 62. The insulative material 64 is also provided within the second trenches 52, and the wordlines 60 are embedded within such insulative material. The insulative material 64 may comprise any suitable composition(s); and in some embodiments may comprise one or both of silicon dioxide and silicon nitride. The gate dielectric material (or gate insulative material) 63 is provided between the wordlines and the semiconductor pillars 58. The wordlines 60 are shown to correspond to wordlines WL1, WL2 and WL3.

Construction 10a is subjected to planarization (e.g., CMP) to form a planarized upper surface 65 extending across the insulative materials 42, 50 and 64, and across the semiconductor material 22.

Referring to FIGS. 22-22C, the top sections 78 of the semiconductor material pillars 58 are doped. The top sections 78 may be doped with the same type dopant as is utilized in the bottom section 26. The doped top sections 78 form upper source/drain regions 80 of transistors 86, and the doped bottom sections 26 form lower source/drain regions 82 of the transistors. Transistor channel regions 84 are within the semiconductor pillars 58 and extend vertically between the lower source/drain regions 82 and the upper source/drain regions 80. The wordlines (e.g., WL3) are adjacent the channel regions, and are spaced from the channel regions by the gate dielectric material 63. The wordlines comprise gates of the transistors 86 and may be utilized to gatedly couple the source/drain regions 80 and 82 of individual transistors to one another through the channel regions 84. FIG.
22B shows gates 88 along the wordlines 60, with such gates corresponding to regions of the wordlines adjacent the channel regions 84. In some embodiments, the gates 88 may be considered to correspond to gate regions of the wordlines 60.

The shield lines 66 may be utilized to alleviate, and even prevent, undesired parasitic capacitance between adjacent digit lines (e.g., parasitic capacitance between the digit lines DL1 and DL2), in a manner analogous to that described above with reference to FIG. 9.

In the embodiment of FIGS. 14-22, the bottom sections 26 of the semiconductor material 22 are doped prior to forming the wordlines 60 (specifically, are shown to be doped at the processing stage of FIG. 14), and the top sections 78 of the semiconductor material 22 are doped after forming the wordlines 60 (specifically, are doped at the processing stage of FIG. 22). In other embodiments the top and bottom sections 26 and 78 may be doped at other process stages. For instance, both the top and bottom sections 26 and 78 may be doped in the semiconductor material 22 at the process stage of FIG. 14.

In the embodiment of FIGS. 14-22, the height of the conductive shield material 44 is reduced prior to forming the wordlines 60. In other embodiments, the height of the conductive shield material may be reduced after forming the wordlines 60 analogously to the embodiment described above with reference to FIGS. 1-10.

Referring to FIG. 23, construction 10a is shown at a process stage following that of FIG. 22B. The storage elements 94 are formed to be conductively coupled with the upper source/drain regions 80. In the shown embodiment, the storage elements 94 are capacitors. Each capacitor has a node coupled with the reference voltage 96.

The storage elements 94 and transistors 86 may be incorporated into memory cells 100 of a memory array 98. In some embodiments, the transistors 86 may be referred to as access transistors of the memory cells.
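The unique row/column addressing described for the memory array 98 (each memory cell selected through a combination of one wordline and one digit line, as in FIG. 11) can be sketched as follows; the data structure and cell labels are purely illustrative, not part of the disclosure:

```python
# Illustrative sketch of unique addressing: each memory cell corresponds to
# exactly one (wordline, digit line) pair, i.e., one (row, column) combination.
wordlines = ["WL1", "WL2", "WL3"]
digit_lines = ["DL1", "DL2", "DL3"]

memory_cells = {(wl, dl): f"cell({wl},{dl})"
                for wl in wordlines
                for dl in digit_lines}

# Every row/column combination addresses exactly one cell.
assert len(memory_cells) == len(wordlines) * len(digit_lines)
assert memory_cells[("WL2", "DL3")] == "cell(WL2,DL3)"
```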
The memory array 98 may be analogous to that described above with reference to FIG. 11.

The reference voltage source 92 (i.e., reference voltage circuitry) may be provided in any suitable location relative to the reference structure 90; and in some embodiments may be below the reference structure, above the reference structure, laterally outward of the reference structure, etc. In some embodiments, one or more dummy wordlines may be utilized to supply the reference voltage to the reference structure 90.

In some embodiments, a memory array 98 (e.g., the memory array 98 of FIG. 10 or that of FIG. 23) may be within a memory tier (i.e., memory deck) which is within a vertically-stacked arrangement of tiers (or decks). For instance, FIG. 24 shows a portion of an integrated assembly 10b comprising a vertically-stacked arrangement of tiers 168, 170, 172 and 174 (also labeled as tiers 1-4). The vertically-stacked arrangement may extend upwardly to include additional tiers. The tiers 1-4 may be considered to be examples of levels that are stacked one atop the other. The levels may be within different semiconductor dies (wafers), or at least two of the levels may be within the same semiconductor die. The bottom tier (tier 1) may include control circuitry and/or sensing circuitry (e.g., may include wordline drivers, sense amplifiers, reference-voltage-control-circuitry 92, etc.; and in some embodiments may include CMOS circuitry). The upper tiers (tiers 2-4) may include memory arrays, such as, for example, the memory array 98. The memory arrays within the various tiers may be the same as one another (e.g., may all be DRAM arrays), or may be different relative to one another (e.g., some may be DRAM arrays, while others are NAND arrays). Also, one or more of the upper tiers may include control circuitry or other logic circuitry. FIG.
24 diagrammatically shows an upper deck (tier 2) comprising a memory array, and a lower deck (tier 1) comprising control circuitry, and shows the control circuitry of the lower deck coupled with the circuitry of the upper deck through a conductive interconnect 175.

The assemblies and structures discussed above may be utilized within integrated circuits (with the term “integrated circuit” meaning an electronic circuit supported by a semiconductor substrate); and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.

Unless specified otherwise, the various materials, substances, compositions, etc. described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.

The terms “dielectric” and “insulative” may be utilized to describe materials having insulative electrical properties. The terms are considered synonymous in this disclosure. The utilization of the term “dielectric” in some instances, and the term “insulative” (or “electrically insulative”) in other instances, may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences.

The terms “electrically connected” and “electrically coupled” may both be utilized in this disclosure. The terms are considered synonymous.
The utilization of one term in some instances and the other in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow.

The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings, or are rotated relative to such orientation.

The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, unless indicated otherwise, in order to simplify the drawings.

When a structure is referred to above as being “on”, “adjacent” or “against” another structure, it can be directly on the other structure or intervening structures may also be present. In contrast, when a structure is referred to as being “directly on”, “directly adjacent” or “directly against” another structure, there are no intervening structures present. The terms “directly under”, “directly over”, etc., do not indicate direct physical contact (unless expressly stated otherwise), but instead indicate upright alignment.

Structures (e.g., layers, materials, etc.) may be referred to as “extending vertically” to indicate that the structures generally extend upwardly from an underlying base (e.g., substrate). The vertically-extending structures may extend substantially orthogonally relative to an upper surface of the base, or not.

Some embodiments include an integrated assembly having digit lines which extend along a first direction. The digit lines are spaced from one another by intervening regions.
Each of the digit lines has a first width along a cross-section orthogonal to the first direction. Each of the intervening regions also has the first width along the cross-section. Each of the digit lines has a top surface at a first height. Vertically-extending pillars are over the digit lines. Each of the pillars includes a transistor channel region which extends vertically between an upper source/drain region and a lower source/drain region. The lower source/drain regions are coupled with the digit lines. Each of the pillars has the first width along the cross-section. The intervening regions extend upwardly to between the pillars and have the first width from top surfaces of the upper source/drain regions to bottom surfaces of the digit lines. Storage elements are coupled with the upper source/drain regions. Wordlines extend along a second direction which crosses the first direction. The wordlines include gate regions adjacent the channel regions. Shield lines are within the intervening regions and extend along the first direction. Each of the shield lines has a top surface at a second height which is greater than or equal to the first height.

Some embodiments include a method of forming an integrated assembly. A support structure is formed to comprise insulative material over a reference structure. The reference structure comprises metal and is configured as a horizontally-extending expanse. A stack is formed over the support structure. The stack comprises semiconductor material over digit line material. The stack is patterned into rails extending along a first direction. The rails are spaced from one another by first trenches. The patterning punches through the insulative material to leave an upper surface of the reference structure exposed along bottoms of the first trenches. Each of the rails has a top surface, and has sidewall surfaces extending downwardly from the top surface.
The patterning of the stack into the rails forms the digit line material into digit lines which extend along the first direction. Insulative shells are formed that cover the top surfaces and the sidewall surfaces of the rails. The insulative shells narrow the first trenches. The upper surface of the reference structure is exposed along bottoms of the narrowed first trenches. Conductive shield lines are formed within the narrowed first trenches and directly against the exposed upper surface of the reference structure at the bottoms of the narrowed first trenches. Second trenches are formed which extend along a second direction. The second direction crosses the first direction. The second trenches pattern upper regions of the rails into pillars and do not pattern lower regions of the rails. The lower regions of the rails include the digit lines. Wordlines are formed within the second trenches. Bottom sections of the semiconductor material are doped to form lower source/drain regions. The lower source/drain regions are coupled with the digit lines. Top sections of the semiconductor material are doped to form upper source/drain regions. Channel regions are vertically between the lower source/drain regions and the upper source/drain regions. The wordlines are adjacent the channel regions. Storage elements are formed to be coupled with the upper source/drain regions.

Some embodiments include a method of forming an integrated assembly. A stack is formed to comprise semiconductor material over digit line material. The stack is patterned into rails extending along a first direction. The rails are spaced from one another by first trenches. The rails have top surfaces, and have sidewall surfaces extending downwardly from the top surfaces. The patterning of the stack into the rails forms the digit line material into digit lines which extend along the first direction. An insulative material is formed to cover the top surfaces and the sidewall surfaces of the rails.
The insulative material narrows the first trenches. Conductive shield lines are formed within the narrowed first trenches. Second trenches are formed to extend along a second direction. The second direction crosses the first direction. The second trenches pattern upper regions of the rails into pillars and do not pattern lower regions of the rails. The lower regions of the rails include the digit lines. Wordlines are formed within the second trenches. Bottom sections of the semiconductor material are doped to form lower source/drain regions. The lower source/drain regions are coupled with the digit lines. Top sections of the semiconductor material are doped to form upper source/drain regions. Channel regions are vertically between the lower source/drain regions and the upper source/drain regions. The wordlines are adjacent the channel regions. Storage elements are formed to be coupled with the upper source/drain regions. The storage elements are comprised by memory cells of a memory array. The digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array. Each of the shield lines has a first end along a first peripheral edge of the memory array and has a second end along a second peripheral edge of the memory array in opposing relation to the first peripheral edge of the memory array. At least one of the first and second ends of each of the conductive shield lines is electrically connected with a reference voltage source.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents.
Some embodiments of the present invention are directed to OLED materials useful in display devices and processes for making such OLED materials. The OLED materials may comprise polar compounds integrated with one or more substrates. When the polar compounds are simultaneously cured and exposed to an applied voltage or electric field, the polar compounds may be oriented in the direction of the voltage. Such orientation may result in the light emitted from the OLED material radiating in a single direction. Additional embodiments are directed to a system comprising a display device having a polar light-emitting layer whose dipoles are oriented in a single direction.
We Claim: 1. A process for preparing an Organic Light Emitting Diode (OLED) structure comprising: a. coating a substrate with a conductive material to form an anode; b. coating the anode with a hole-transport material to form a coated substrate; c. optionally applying friction to the coated substrate to form an irregular surface alignment layer; d. applying a polar organic compound to the surface of the coated substrate, and optionally allowing the polar organic compound to fill the irregular surface alignment layer formed in (c), to form a treated coated substrate; e. curing the treated coated substrate while simultaneously exposing the treated coated substrate to an electric field. 2. The process of claim 1, wherein the treated coated substrate is exposed to an electric field of less than 5 volts during the curing of the treated coated substrate. 3. The process of claim 1, comprising: a. coating a substrate with a conductive material to form an anode; b. coating the anode with a polyimide material to form a coated substrate; c. applying friction to the coated substrate to form an irregular surface alignment layer; d. applying a polar organic compound to the surface of the coated substrate, and allowing the polar organic compound to fill the grooves formed in (c), to form a treated coated substrate; e. curing the treated coated substrate while simultaneously exposing the treated coated substrate to an electric field. 4. The process of any one of claims 1 or 3, wherein the exposure of the coated substrate to an electric field aligns the polar organic compound in a single orientation. 5. The process of any one of claims 1 or 3, wherein the electric field is between about 1 and 7 volts. 6. An apparatus comprising an organic light emitting diode structure comprising: a. an anode integrated onto an anode substrate and connected to a power source; b. a conductive layer coated onto the anode; c. a hole-transport material coated onto the anode to form a coated substrate; d.
an optional irregular surface alignment layer formed on the coated substrate; e. a polar organic compound applied to the surface of the coated substrate, and optionally filling in the irregular surface alignment layer in (c), to form a treated coated substrate; f. an electron transport layer disposed on the polar organic compound; g. a cathode disposed on the electron transport layer and supported by a cathode substrate; h. a power source connected to the anode and the cathode, wherein, when voltage is applied to the anode and cathode from the power source, dipoles of the polar organic compound orient in a uniform direction. 7. The apparatus of claim 6, wherein the anode is coated with a polyimide material to form a coated substrate. 8. The apparatus of claim 6, wherein the anode substrate and the cathode substrate are selected from glass, plastic, quartz, plastic film, metal, ceramic, and polymers. 9. The apparatus of claim 6, wherein the conductive layer is selected from the group consisting of indium-tin oxide, indium-zinc oxide, aluminum-doped zinc oxide, indium-doped zinc oxide, magnesium-indium oxide, nickel-tungsten oxide, gallium nitride, zinc selenide and zinc sulfide. 10. The apparatus of claim 6, wherein the hole transport material is selected from the group consisting of monoarylamines, diarylamines, triarylamines, polymer arylamines, poly(N-vinylcarbazole), polythiophenes, polypyrroles, polyanilines, and copolymers thereof. 11. The apparatus of claim 6, wherein the polar organic compound is selected from the group consisting of fluorescent dyes, phosphorescent compounds, transition metal complexes, iridium complexes of phenylpyridine, coumarins, polyfluorenes, and polyvinylarylenes. 12. The apparatus of claim 6, wherein the electron transport layer is a metal chelated oxinoid compound. 13.
A system, comprising: a central processing unit operable to execute at least one set of machine-readable instructions; a memory storage device operable to store the machine-readable instructions; and a display device comprising an OLED structure comprising at least one polar light emitting layer containing dipoles oriented in a single direction, wherein the display device is operable to display images in response to the set of machine-readable instructions. 14. The system of claim 13, wherein the OLED structure comprises: a. an anode integrated onto an anode substrate and connected to a power source; b. a conductive layer coated onto the anode; c. a hole-transport material coated onto the anode to form a coated substrate; d. an optional irregular surface alignment layer formed on the coated substrate; e. a polar organic compound applied to the surface of the coated substrate, and optionally filling in the irregular surface alignment layer in (c), to form a treated coated substrate; f. an electron transport layer disposed on the polar organic compound; g. a cathode disposed on the electron transport layer and supported by a cathode substrate; h. a power source connected to the anode and the cathode, wherein, when voltage is applied to the anode and cathode from the power source, dipoles of the polar organic compound orient in a uniform direction. 15. The system of claim 14, wherein the anode is coated with a polyimide material to form a coated substrate. 16. The system of claim 14, wherein the anode substrate and the cathode substrate are selected from glass, plastic, quartz, plastic film, metal, ceramic, and polymers. 17. The system of claim 14, wherein the conductive layer is selected from the group consisting of indium-tin oxide, indium-zinc oxide, aluminum-doped zinc oxide, indium-doped zinc oxide, magnesium-indium oxide, nickel-tungsten oxide, gallium nitride, zinc selenide and zinc sulfide. 18.
The system of claim 14, wherein the hole transport material is selected from the group consisting of monoarylamines, diarylamines, triarylamines, polymer arylamines, poly(N-vinylcarbazole), polythiophenes, polypyrroles, polyanilines, and copolymers thereof. 19. The system of claim 14, wherein the polar organic compound is selected from the group consisting of fluorescent dyes, phosphorescent compounds, transition metal complexes, iridium complexes of phenylpyridine, coumarins, polyfluorenes, and polyvinylarylenes. 20. The system of claim 14, wherein the electron transport layer is a metal chelated oxinoid compound.
Low Power Consumption OLED Material for Display Applications

Background of the Invention

[0001] Liquid crystal displays (LCDs) are commonly used in devices such as flat panel displays for laptop computers, personal digital assistants, cellular phones, and the like. Displays made with LCDs frequently use a cold cathode fluorescent lamp (CCFL) or similar device as a backlight for the LCD display to present an optical image to the viewer. CCFLs and similar devices are fragile, relatively inefficient components that require an inverter and consume large quantities of power, up to 35 percent of the power within a notebook computer system. The use of CCFLs, which are made of glass or other rigid materials, renders the display module fragile, difficult to manufacture and maintain, and expensive to repair when broken. The specifications of these materials also render the display itself bulky and add to the weight of the system which incorporates the display. Because the displays are typically used in portable devices, users desire devices which are more rugged and lighter in weight.

[0002] In an effort to reduce the weight of the display and increase its durability, some manufacturers use organic light emitting diode (OLED) materials as a backlight source in mobile devices. OLEDs are thin film materials which emit light when excited by electric current. Since OLEDs can emit light of different colors, they can be used to make displays. Displays made from OLED materials, therefore, do not need additional backlights, thus eliminating the need for the fragile glass CCFL and hence the bulky form factor of the display module. OLEDs are usually lightweight and can operate efficiently at relatively low voltages, thus consuming less power from the system.
The versatility of the light emitting OLED materials has led some manufacturers to believe it would be desirable to substitute them for LCDs in mobile display devices in the near future.

[0003] Although OLEDs can generate light with high efficiency, more than half of the light can be trapped within the device, rendering it useless to the device. Because the light emission from the OLED has no preferred emitting direction, light is emitted equally in all directions: some of the light is emitted forward to the viewer, some is emitted to the back of the device and is either reflected forward to the viewer or absorbed by the ambient environment, and some of the light is emitted laterally and is trapped and absorbed by the various layers comprising the device. In general, up to 80% of the light generated from the OLED materials may be lost within the system and may never reach the viewer.

[0004] There is a need therefore for an improved organic light emitting diode display structure that avoids the problems noted above and improves the efficiency of power used by the display, especially in portable devices. The present invention is directed to a new way of improving the power efficiency of organic light emitting diode displays through modification of the device fabrication with respect to the OLED material.

Brief Description of the Drawings

[0005] FIG. 1 shows an OLED structure.

[0006] FIG. 2 shows an OLED structure having a grooved substrate.

[0007] FIG. 3 shows an OLED structure integrated with a display device.

Detailed Description of the Invention

[0008] Some embodiments of the present invention are directed to OLED structures useful in display devices and processes for making such OLED structures. The OLED structures may comprise polar compounds which possess certain dielectric anisotropy and can be aligned with respect to one or more substrates of the display cell.
When the polar compounds are exposed to an applied voltage or electric field, the polar compounds respond and the molecules align in a certain orientation with respect to the direction of the electric field or voltage. Such orientation can be calibrated in a manner that may result in the light emitted from the OLED material radiating in a certain dominant direction.

[0009] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

[00010] An exemplary embodiment of the present invention includes an OLED material comprising polar functional groups and entities as their molecular components which, when subjected to an electric field, orient in a dominant direction as dictated by the electric field, thus orienting the emitted light in a single particular direction.

[00011] Turning now to the figures, in which like numerals refer to like elements through the several figures, FIG. 1 (not to scale) illustrates the structure of an OLED material 10 in accordance with some embodiments of the present invention. In an OLED structure 10, an anode coated conductive layer 20 may be integrated on a substrate 30. A hole-transport layer 40 may be stacked on the coating of the anode. A layer of polar light-emitting material 50 may be disposed on the hole-transport layer 40. An electron transport layer 60 may be disposed on the light-emitting layer 50. Finally, a substrate 90 may support a cathode 70 comprising a conductive film.
A cathode 70 may additionally be disposed on the electron transport layer 60. The anode 20 and cathode 70 may be connected to a power source 80. When the power source is activated, holes are injected from anode 20 into hole transport layer 40; the holes combine in the light emitting layer 50 with electrons that travel from cathode 70 and generate visible light.

[00012] The substrates 30 and 90 may be made from any material capable of supporting the conductive coating of the anode 20 and cathode 70 and may be flexible or rigid. Examples include, but are not limited to, plastic, glass, quartz, plastic films, metals, ceramics, polymers or the like. Non-limiting examples of flexible plastic film and plastic include a film or sheet of polyethylene terephthalate (PET), polyethylene naphthalate (PEN), polyethersulfone (PES), polyetherimide, polyetheretherketone, polyphenylene sulfide, polyarylate, polyimide, polycarbonate (PC), cellulose triacetate (TAC), and cellulose acetate-propionate. Additionally, the substrate material 30 is transparent or otherwise light transmissive so that the light generated from the OLED material may pass through the device and be visible.

[00013] The anode coated conductive layer 20 may be formed by optionally coating the substrate with a transparent and conductive coating material. For example, and not for limitation, transparent and conductive coating materials may include indium-tin oxide (ITO), indium-zinc oxide (IZO), and other tin oxides such as, but not limited to, aluminum- or indium-doped zinc oxide, magnesium-indium oxide, nickel-tungsten oxide, metal nitrides, such as but not limited to gallium nitride, metal selenides, such as but not limited to zinc selenide, and metal sulfides, such as but not limited to zinc sulfide.

[00014] Atop the anode coated conductive layer 20 is a hole transporting material 40. The hole-transporting material may include amines, such as but not limited to aromatic tertiary amines.
In one form the aromatic tertiary amine may be an arylamine, such as but not limited to a monoarylamine, diarylamine, triarylamine, or a polymeric arylamine. In addition, polymeric hole-transporting materials may include poly(N-vinylcarbazole) (PVK), polythiophenes, polypyrrole, polyaniline, and copolymers such as poly(3,4-ethylenedioxythiophene)/poly(4-styrenesulfonate), also called PEDOT/PSS.

[00015] A polar light-emitting layer 50 is formed on hole transport layer 40 and may comprise a polar fluorescent and/or phosphorescent material where electroluminescence is produced as a result of electron-hole pair recombination in this region. The polar light-emitting layer 50 can be comprised of a single material or a host material doped with a guest compound or compounds, where light emission comes primarily from the dopant and can be of any color. In an exemplary embodiment, the light-emitting layer emits white light. The host material in the polar light-emitting layer 50 can be an electron-transporting material, as defined below, a hole-transporting material, as defined above, or another material or combination of materials that support hole-electron recombination. The dopant may be chosen from highly fluorescent dyes, but phosphorescent compounds, e.g., transition metal complexes, are also useful. Iridium complexes of phenylpyridine and its derivatives are particularly useful luminescent dopants. The polar light-emitting layer 50 may include dyes or coumarins and may also be polymeric in nature. Polymeric materials such as polyfluorenes and polyvinylarylenes (e.g., poly(p-phenylenevinylene) (PPV)) can also be used as the host material. Small molecule dopants can be molecularly dispersed into the polymeric host, or the dopant can be added by copolymerizing a minor constituent into the host polymer.
Any polar luminescent dopant known to be useful by one of ordinary skill in the art may be used herein.

[00016] An electron transport layer 60 is formed atop the polar light-emitting layer 50. The electron transporting material may be any material known to one of ordinary skill in the art to be useful for this purpose. Such compounds help to inject and transport electrons, exhibit high levels of performance, and are readily fabricated in the form of thin films. For example, and not for limitation, metal chelated oxinoid compounds, including chelates of oxine itself (also commonly referred to as 8-quinolinol or 8-hydroxyquinoline), may be used.

[00017] Finally, a cathode 70 is deposited on electron transport layer 60 and supported by a substrate 90. The cathode may be transparent or otherwise light transmissive, opaque, or reflective and can comprise nearly any conductive material. Suitable cathode materials have good film-forming properties to ensure good contact with the underlying organic layer, promote electron injection at low voltage, and have good stability. Useful cathode materials often contain a low work function metal (<4.0 eV) or metal alloy.

[00018] As noted above, substrate 90 may be made from any material capable of supporting the conductive coating of the cathode 70 and may be flexible or rigid. Examples include, but are not limited to, plastic, glass, quartz, plastic films, metals, ceramics, polymers or the like. Non-limiting examples of flexible plastic film and plastic include a film or sheet of polyethylene terephthalate (PET), polyethylene naphthalate (PEN), polyethersulfone (PES), polyetherimide, polyetheretherketone, polyphenylene sulfide, polyarylate, polyimide, polycarbonate (PC), cellulose triacetate (TAC), and cellulose acetate-propionate. Additionally, the substrate material 90 may be transparent or otherwise light transmissive, opaque, reflective or variations thereof.

[00019] When a potential, i.e.
voltage, is applied to the device from a power source 80, electrons are emitted from light-emitting layer 50 where they are injected into electron transport layer 60 and recombined with the holes present therein, giving rise to light emission. The cathode 70 reflects the light generated back toward the organic layers. By using multicolored OLED panels known to one of ordinary skill in the art, a white light or images with partial or full color utilizing field sequential color techniques may be formed.

[00020] Exemplary OLED materials of the present invention comprise polar light-emitting layer materials. By exposing the polar light-emitting layer materials to an electric field or applied voltage, the polar light-emitting layer polarizes, i.e., lines up, in the direction of the electric field. Such polarization orients the polar materials in a certain orientation and directs the light emitted from the light-emitting layer in a uniform dominant direction, thus optimizing the light emitted and reducing problems associated with light scatter and channeling. The polarity of the material may come from the organic light emitting material itself, the dopant host material, or the dopant. Chemical compounds useful as a light-emitting material, dopant host material, or dopant include those noted above as well as those known to one of ordinary skill in the art. Non-limiting examples of organic light-emitting materials include amines, including the aromatic tertiary amines and arylamines, such as but not limited to a monoarylamine, diarylamine, triarylamine, or a polymeric arylamine; polyimides; polythiophenes; poly(N-vinylcarbazole) (PVK); polypyrrole; polyaniline; copolymers such as poly(3,4-ethylenedioxythiophene)/poly(4-styrenesulfonate), also called PEDOT/PSS; and other amines referenced above.

[00021] Another exemplary embodiment of the present invention may be shown in FIG. 2 (not to scale).
In an OLED structure 10, an anode coated conductive layer 20 may be integrated on a substrate 30 having an irregular, non-smooth surface 35, also known as an alignment layer. The alignment layer 35 may provide an irregular, non-smooth surface for the subsequent layers. A hole-transport layer 40 may be applied on the coating of the anode 20 and upon the alignment layer 35. A layer of polar light-emitting material 50 may be disposed on the hole-transport layer 40. An electron transport layer 60 may be disposed on the light-emitting layer 50. Finally, a cathode 70 comprising a conductive film may be supported on a substrate 90 and disposed on the electron transport layer 60. The irregular, non-smooth surface of the alignment layer 35 may carry through the deposition process and exist within all layers of the OLED structure. For example, the light-emitting layer 50 may fill part of the irregular surface of the alignment layer 35. In one embodiment, the polar light-emitting compounds may fill the alignment layer with portions of the molecules extending below the surface of the alignment layer and portions of the molecules extending above the surface of the alignment layer. The anode 20 and cathode 70 may be connected to a power source 80, which generates an applied voltage. When the power source is activated, holes may be injected from anode 20 into hole transport layer 40; the holes may combine in the light emitting layer 50 with electrons that travel from the cathode 70 and generate visible light. Because the light-emitting layer molecules are polar, the applied voltage causes the dipoles of the molecules to orient in a uniform arrangement during the curing process, e.g., all positive ends of the molecules anchor onto the surface of the alignment layer and all negative ends point away from the surface of the alignment layer, or vice-versa.
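The bottom-to-top ordering of the FIG. 2 stack described above can be captured in a short sketch. The layer names and reference numerals follow the description, but the data model and the helper function are purely illustrative, not part of the disclosure:

```python
# Hypothetical data model of the FIG. 2 OLED stack, bottom to top.
# Layer names and reference numerals follow the description; the
# model itself is for illustration only.
OLED_STACK = [
    ("substrate", 30),
    ("alignment layer", 35),
    ("anode (conductive layer)", 20),
    ("hole-transport layer", 40),
    ("polar light-emitting layer", 50),
    ("electron transport layer", 60),
    ("cathode", 70),
    ("substrate", 90),
]

def index_of(numeral):
    """Position of a layer (by reference numeral) in the stack."""
    return next(i for i, (_, n) in enumerate(OLED_STACK) if n == numeral)

# The emitting layer 50 sits between hole transport 40 and electron transport 60,
# so injected holes and electrons recombine in layer 50.
assert index_of(40) < index_of(50) < index_of(60)
```

A representation like this only encodes the adjacency relationships the text states; it says nothing about layer thicknesses or materials, which are covered separately below.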
[00022] Once the chemicals are applied to the alignment layer 35 or the substrate 30, the chemicals may undergo a curing process. During curing, a voltage is simultaneously applied to the OLED material, aligning the polar light-emitting compounds in all of the layers of the OLED material. The voltage facilitates the alignment of the dipoles of the light-emitting layer within the material during the curing cycle.

[00023] The applied voltage used to orient the light-emitting dipoles is typically less than about 7 volts. In one embodiment, the voltage ranges from 1 to about 7 volts. In another embodiment, the voltage ranges from about 3 volts to about 5 volts.

[00024] The irregular, non-smooth surface of the alignment layer 35 may be formed on the substrate 30 by any means known in the art. A non-limiting example of forming the irregular, non-smooth surface of the alignment layer 35 includes the rubbing process or friction transfer. Friction transfer includes preparing the alignment layer by pressing a solid structure, for example and not for limitation, pellets, bars, ingots, rods, sticks, or the like, of the alignment material against the substrate and drawing the solid alignment material across the structure in a selected direction under a pressure sufficient to transfer a thin layer of the alignment material onto the substrate. The selected direction of the friction transfer provides an orientation direction for the alignment of subsequent layers. The substrate may optionally be heated to optimize the initial action of the alignment layer.

[00025] The thickness of the alignment layer may be sufficient to impart alignment on subsequent layers. The thickness may be thin enough such that the layer is not completely insulating. Exemplary thicknesses of the alignment layer of the present invention range from 0.1 to 20 microns.
One embodiment of the invention provides for an alignment layer with a thickness of between 1 and 10 microns, and still another embodiment provides for an alignment layer with a thickness of between 5 and 7 microns.

[00026] The thickness of the polar light-emitting materials may range from 100 angstroms to 2000 angstroms. In one embodiment of the present invention, the thickness of the polar light-emitting layer ranges from 300 to 2000 angstroms. In another embodiment, the thickness of the polar light-emitting layer ranges from 800 to 2000 angstroms.

[00027] The polar light-emitting compound 50 may be applied to the irregular, non-smooth surface of the alignment layer 35, whose topology shows through layers 20 and 40, or to the surface of the substrate 30, at room temperature or under elevated temperatures to enhance the uniformity of the light-emitting compound layer.

[00028] Other embodiments of the present invention include processes for preparing OLED materials useful in display devices. One exemplary process is illustrated in FIG. 2 and may include coating a substrate 30 with a conductive layer 20 and/or a hole transport layer 40 to form a coated substrate, rubbing the coated substrate to form grooves or other irregular surfaces of an alignment layer 35, applying a polar light-emitting compound 50 to the irregular surface of the coated substrate and filling the grooves or irregularities formed by rubbing the substrate with the light-emitting compound 50, then curing the coated substrate while simultaneously exposing it to an electric field.

[00029] Another exemplary process of the present invention may include coating a substrate 30 with a conductive layer 20 and/or a hole transport layer 40 to form a coated substrate, applying a polar light-emitting compound 50 to the surface of the coated substrate, then curing the coated substrate while simultaneously exposing it to an electric field.
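The numeric windows stated above (a curing voltage of about 1 to 7 volts, an alignment-layer thickness of 0.1 to 20 microns, and an emitting-layer thickness of 100 to 2000 angstroms) can be collected into a simple validity check. The ranges come from the description; the helper function itself is a hypothetical sketch, not something the disclosure specifies:

```python
# Illustrative check of the broadest process windows stated in the
# description. The numeric ranges come from the text; the helper
# function and its name are hypothetical.
def within_process_window(cure_voltage_v, alignment_um, emitter_angstrom):
    """True if all three parameters fall inside the stated ranges."""
    return (
        1.0 <= cure_voltage_v <= 7.0             # curing field: about 1 to 7 volts
        and 0.1 <= alignment_um <= 20.0          # alignment layer: 0.1 to 20 microns
        and 100.0 <= emitter_angstrom <= 2000.0  # emitting layer: 100 to 2000 angstroms
    )

print(within_process_window(5.0, 6.0, 1000.0))   # a mid-window recipe -> True
print(within_process_window(12.0, 6.0, 1000.0))  # curing voltage too high -> False
```

The narrower embodiments (3 to 5 volts, 5 to 7 microns, 800 to 2000 angstroms) would simply tighten the bounds in the same structure.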
[00030] Another exemplary embodiment of the present invention may include the OLED materials integrated into a display device. FIG. 3 (not to scale) illustrates this exemplary embodiment. When a voltage is applied to the OLED structure 10 from a power source 80, light 300 emitted from the OLED structure 10 is transmitted in the direction of the applied voltage and toward the display 100. Because more of the light emitted from the OLED structure 10 is transmitted to the viewer, the display 100 may operate with less power than displays currently known in the art.

[00031] The display device may include light distributing devices, such as lenses, polarizers, or optical viewing elements. Integrated with the OLED materials of the present invention, the display 100 may be any element which transmits the light from the OLED to the viewer. The display 100 may also comprise other components such as, but not limited to, a processor, memory, power supplies, or other peripheral devices, either alone or in combination.

[00032] Those skilled in the art will appreciate that other light distributing devices may be used, such as, for example, and not for limitation, light guides, prisms, lenses, Fresnel lenses, diffusers, interferometers, or any other optical element that can distribute white light uniformly and efficiently onto the display device. It is further disclosed that additional optical elements, such as but not limited to polarizers, refractive elements, diffractive elements, bandpass filters, and the like, may be easily positioned exterior to or otherwise located near the OLED structure 10. By using a plurality of OLED panels as the light source, the size of the OLED structure 10 may be further reduced and the electrical power required may also be minimized. By using multicolored OLED panels, a white light or images with partial or full color utilizing field sequential color techniques may be formed.
The light may optionally be passed through a light distributing device, which disperses the light to uniformly illuminate the display device 100.

[00033] Those skilled in the art will further appreciate that the OLED structure 10 of the present invention may optionally be present in a display device 100 in combination with other OLED structures. The OLED structures 10 may be arranged randomly or in a pattern and may be stacked or arranged in series or adjacent to one another. The arrangement of the OLED structures 10 may depend on any of several factors including, but not limited to, size of the display, lighting requirements for the display, color, and the like. Additionally, those skilled in the art will appreciate that OLED materials may be, for example, and not for limitation, strips, films, blocks, and the like.

[00034] The light emitted from the OLED structure 10 of the present invention may be manipulated by the structure of the OLED structure 10 itself and may emit white light or colors. A color-emitting OLED may be combined with a white light-emitting OLED, both of which may then be incorporated into a display device 100.

[00035] In the embodiments of the present invention, the intensity of the light transmitted to the display device 100 and the intensity of the color may be varied by adjusting the current and driving voltages applied to the OLED structure 10. Proportional current changes may be applied to each layer of the stack or to each OLED structure 10 in the series to optionally vary the color perceived by the viewer.

[00036] The voltage necessary to display light from the OLED structure 10 to a display device 100 may be less than about 15 volts. In one embodiment of the present invention, the voltage necessary to display light from the OLED structure 10 ranges from about 1 volt to about 12 volts. The intensity of the light displayed from the OLED structure 10 may be varied by varying the voltage applied to the OLED structure 10.
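Paragraph [00036] ties display intensity to the drive voltage, which the text bounds at roughly 1 to 12 volts. A driver helper clamping requests onto that window might look like the following sketch; the linear brightness-to-voltage mapping is an assumption for illustration, since the disclosure states only that intensity varies with voltage, not the form of the relationship:

```python
# Hypothetical drive-voltage helper based on the 1 to 12 volt window
# stated in the description. The linear brightness mapping is an
# assumption for illustration only.
V_MIN, V_MAX = 1.0, 12.0

def drive_voltage(brightness):
    """Map a requested brightness in [0, 1] onto the stated voltage window."""
    brightness = min(1.0, max(0.0, brightness))  # clamp out-of-range requests
    return V_MIN + brightness * (V_MAX - V_MIN)

print(drive_voltage(0.0))  # dimmest: 1.0 volts
print(drive_voltage(1.0))  # brightest: 12.0 volts
```

Whatever the real intensity curve, clamping to the stated window keeps the drive below the roughly 15 volt ceiling the text mentions.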
[00037] The OLED structure 10 of the present invention may be incorporated into any system benefiting from an image display device. The OLED structure 10 of the present invention may be incorporated into a display device in addition to or in lieu of LCD displays or other display devices known in the art. Systems incorporating display devices include, but are not limited to, those used with laptop computers, personal digital assistants, cellular phones, and the like.

[00038] In addition to the display device 100, the system may also include, but is not limited to, a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. The system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, also known as Mezzanine bus.

[00039] The system memory may include computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) and random access memory (RAM). A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within the system, such as during start-up, is typically stored in ROM. RAM typically contains data, program modules, and/or computer-executable instructions that are immediately accessible to and/or presently being operated on by the processing unit.
[00040] While the present invention has been particularly shown and described with respect to exemplary embodiments thereof, it will be understood by those skilled in the art that the foregoing and other changes in form and detail may be made without departing from the scope and spirit of the present invention. It is therefore intended that the present invention not be limited to the exact forms described and illustrated, but fall within the scope of the appended claims.
The present disclosure includes apparatuses, methods, and systems for using memory as a block in a block chain. An embodiment includes a memory, and circuitry configured to generate a block in a block chain for validating data stored in the memory, wherein the block includes a cryptographic hash of a previous block in the block chain and a cryptographic hash of the data stored in the memory, and the block has a digital signature associated therewith that indicates the block is included in the block chain.
What is Claimed is:

1. An apparatus, comprising:
a memory; and
circuitry configured to generate a block in a block chain for validating data stored in the memory, wherein:
the block includes:
a cryptographic hash of a previous block in the block chain; and
a cryptographic hash of the data stored in the memory; and
the block has a digital signature associated therewith that indicates the block is included in the block chain.

2. The apparatus of claim 1, wherein the circuitry is configured to store the block in the memory.

3. The apparatus of claim 1, wherein the circuitry is configured to generate the digital signature.

4. The apparatus of any one of claims 1-3, wherein:
the memory comprises an array of memory cells;
the circuitry includes a pair of registers configured to define the array, the pair of registers including a register configured to define an address of the array and a register configured to define a size of the array; and
wherein the circuitry is configured to generate a cryptographic hash associated with the array.

5. The apparatus of any one of claims 1-3, wherein the circuitry is configured to generate the cryptographic hash of the data stored in the memory.

6. A method of operating memory, comprising:
generating a block in a block chain for validating data stored in the memory, wherein:
the block includes:
a cryptographic hash of a previous block in the block chain; and
a cryptographic hash of the data stored in the memory; and
the block has a digital signature associated therewith that indicates the block is included in the block chain; and
storing the block in the memory.

7. The method of claim 6, wherein the method includes sending the block to a host for validation of the data stored in the memory, wherein the block is sent responsive to a powering of the memory or receipt of a command from the host.

8. The method of claim 6, wherein the block is stored in a portion of the memory that is inaccessible to a user of the memory.

9.
The method of any one of claims 6-8, wherein the method includes, after the validation of the data stored in the memory, generating an additional block in the block chain, wherein the additional block includes:
an additional cryptographic hash of the previous block in the block chain, wherein the additional cryptographic hash of the previous block in the block chain is the cryptographic hash of the data stored in the memory; and
an additional cryptographic hash of the data stored in the memory.

10. The method of claim 9, wherein the method includes storing the additional block in the memory.

11. A method of operating memory, comprising:
receiving, by a host from the memory, a block in a block chain for validating data stored in the memory, wherein:
the block includes:
a cryptographic hash of a previous block in the block chain; and
a cryptographic hash of the data stored in the memory; and
the block has a digital signature associated therewith that indicates the block is included in the block chain; and
validating, by the host, the data stored in the memory using the received block.

12. The method of claim 11, wherein the cryptographic hash of the data stored in the memory comprises a SHA-256 cryptographic hash.

13. The method of claim 11, wherein:
the cryptographic hash of the previous block in the block chain comprises 256 bytes of data; and
the cryptographic hash of the data stored in the memory comprises 256 bytes of data.

14. The method of any one of claims 11-13, wherein the method includes validating, by the host, the digital signature to determine the block is included in the block chain.

15. The method of any one of claims 11-13, wherein the block includes a header having a timestamp.

16. The method of any one of claims 11-13, wherein the method includes:
receiving, by the host from the memory, a cryptographic hash associated with an array of the memory; and
validating, by the host, the data stored in the memory using the cryptographic hash associated with the array.

17.
A system, comprising:
a memory, wherein the memory includes a block in a block chain for validating data stored in the memory, wherein:
the block includes:
a cryptographic hash of a previous block in the block chain; and
a cryptographic hash of the data stored in the memory; and
the block has a digital signature associated therewith that indicates the block is included in the block chain; and
a host, wherein the host is configured to:
receive the block from the memory; and
validate the data stored in the memory using the received block.

18. The system of claim 17, wherein:
the host is configured to send, to the memory, a command to sense the block; and
the memory is configured to execute the command to sense the block.

19. The system of claim 17, wherein the host is configured to:
generate the cryptographic hash of the data stored in the memory; and
send the generated cryptographic hash of the data stored in the memory to the memory.

20. The system of any one of claims 17-19, wherein the host is configured to:
generate the digital signature associated with the block; and
send the generated digital signature to the memory.
USING MEMORY AS A BLOCK IN A BLOCK CHAIN

Technical Field

[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to using memory as a block in a block chain.

Background

[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits and/or external removable devices in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and can include random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetic random access memory (MRAM), among others.

[0003] Memory devices can be combined together to form a solid state drive (SSD), an embedded MultiMediaCard (e.MMC), and/or a universal flash storage (UFS) device. An SSD, e.MMC, and/or UFS device can include non-volatile memory (e.g., NAND flash memory and/or NOR flash memory), and/or can include volatile memory (e.g., DRAM and/or SDRAM), among various other types of non-volatile and volatile memory. Non-volatile memory may be used in a wide range of electronic applications such as personal computers, portable memory sticks, digital cameras, cellular telephones, and portable music players such as MP3 players and movie players, among others.

[0004] Flash memory devices can include memory cells storing data in a charge storage structure such as a floating gate, for instance. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption.
Resistance variable memory devices can include resistive memory cells that can store data based on the resistance state of a storage element (e.g., a resistive memory element having a variable resistance).

[0005] Memory cells can be arranged into arrays, and memory cells in an array architecture can be programmed to a target (e.g., desired) state. For instance, electric charge can be placed on or removed from the charge storage structure (e.g., floating gate) of a flash memory cell to program the cell to a particular data state. The stored charge on the charge storage structure of the cell can indicate a threshold voltage (Vt) of the cell. A state of a flash memory cell can be determined by sensing the stored charge on the charge storage structure (e.g., the Vt) of the cell.

[0006] Many threats can affect the data stored in the memory cells of a memory device. Such threats can include, for example, faults occurring in the memory device, and/or threats from hackers or other malicious users.
Such threats can cause significant financial loss, and/or can present significant safety and/or security issues.

Brief Description of the Drawings

[0007] Figure 1 illustrates a diagram of a portion of a memory array having a number of physical blocks in accordance with an embodiment of the present disclosure.

[0008] Figure 2 is a block diagram of a computing system including a host and an apparatus in the form of a memory device in accordance with an embodiment of the present disclosure.

[0009] Figure 3 illustrates examples of blocks that can be used in a block chain for validating data stored in memory in accordance with an embodiment of the present disclosure.

[0010] Figure 4A illustrates an example of a pair of registers used to define a secure memory array in accordance with an embodiment of the present disclosure.

[0011] Figure 4B illustrates a diagram of a portion of a memory array that includes a secure memory array defined in accordance with an embodiment of the present disclosure.

[0012] Figure 5 is a block diagram of an example system including a host and a memory device in accordance with an embodiment of the present disclosure.

[0013] Figure 6 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure.

[0014] Figure 7 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure.

[0015] Figure 8 is a block diagram of an example process to verify a certificate in accordance with an embodiment of the present disclosure.

[0016] Figure 9 is a block diagram of an example process to verify a signature in accordance with an embodiment of the present disclosure.

[0017] Figure 10 is a block diagram of an example memory device in accordance with an embodiment of the present disclosure.

Detailed Description

[0018] The present disclosure includes apparatuses, methods, and systems for using memory as a block in a block chain.
An embodiment includes a memory, and circuitry configured to generate a block in a block chain for validating data stored in the memory, wherein the block includes a cryptographic hash of a previous block in the block chain and a cryptographic hash of the data stored in the memory, and the block has a digital signature associated therewith that indicates the block is included in the block chain. In some embodiments, a particular and/or specific physical block of memory in a memory, as described in connection with Figure 1, may be used as such a block in a block chain. However, embodiments are not so limited.

[0019] Many threats can affect the data stored in a memory (e.g., in a memory device). For example, faults may occur in the array and/or circuitry of the memory, which can result in errors occurring in the data. As an additional example, a hacker or other malicious user may attempt to perform activities to make unauthorized changes to the data for malicious purposes. For instance, a malicious user may attempt to alter the data stored in a memory in order to adversely affect (e.g., divert the flow of) a commercial transaction being performed using the memory (e.g., to falsely indicate that payment has been made for the service being provided by skipping the code that verifies the payment), a software license check being performed on the memory (e.g., to falsely indicate the software of the memory is properly licensed by skipping the code that verifies the license), or automotive control being performed using the memory (e.g., to skip a check of the genuineness of a part, an environmental check, or a check of a malfunctioning alarm), among other types of hacking activities.
Such hacking activities (e.g., attacks) can cause significant financial loss, and/or can present significant safety and/or security issues.

[0020] As such, in order to ensure a secure memory system, it is important to validate (e.g., authenticate and/or attest) that the data stored in the memory is genuine (e.g., is the same as originally programmed), and has not been altered by hacking activity or other unauthorized changes. Embodiments of the present disclosure can use memory as a block in a block chain data structure (e.g., use the memory as a storage component for the block chain) in order to effectively validate the data stored in the memory, and thereby ensure a secure memory system. For instance, embodiments of the present disclosure can modify the existing circuitry of the memory (e.g., the existing firmware of the memory device) to use the memory as the block in the block chain, such that the memory can be used as the block in the block chain without having to add additional (e.g., new) components or circuitry to the memory.

[0021] As used herein, “a”, “an”, or “a number of” can refer to one or more of something, and “a plurality of” can refer to two or more such things. For example, a memory device can refer to one or more memory devices, and a plurality of memory devices can refer to two or more memory devices. Additionally, the designators “R”, “B”, “S”, and “N”, as used herein, particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with a number of embodiments of the present disclosure. The number may be the same or different between designations.

[0022] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits.
For example, 101 may reference element “01” in Figure 1, and a similar element may be referenced as 201 in Figure 2.

[0023] Figure 1 illustrates a diagram of a portion of a memory array 101 having a number of physical blocks in accordance with an embodiment of the present disclosure. Memory array 101 can be, for example, a flash memory array such as a NAND flash memory array. As an additional example, memory array 101 can be a resistance variable memory array such as a PCRAM, RRAM, MRAM, or spin torque transfer (STT) array, among others. However, embodiments of the present disclosure are not limited to a particular type of memory array. Further, memory array 101 can be a secure memory array, as will be further described herein. Further, although not shown in Figure 1, memory array 101 can be located on a particular semiconductor die along with various peripheral circuitry associated with the operation thereof.

[0024] As shown in Figure 1, memory array 101 has a number of physical blocks 107-0 (BLOCK 0), 107-1 (BLOCK 1), . . ., 107-B (BLOCK B) of memory cells. The memory cells can be single level cells and/or multilevel cells such as, for instance, two level cells, triple level cells (TLCs) or quadruple level cells (QLCs). As an example, the number of physical blocks in memory array 101 may be 128 blocks, 512 blocks, or 1,024 blocks, but embodiments are not limited to a particular power of two or to any particular number of physical blocks in memory array 101.

[0025] A number of physical blocks of memory cells (e.g., blocks 107-0, 107-1, . . ., 107-B) can be included in a plane of memory cells, and a number of planes of memory cells can be included on a die. For instance, in the example shown in Figure 1, each physical block 107-0, 107-1, . . ., 107-B can be part of a single die. That is, the portion of memory array 101 illustrated in Figure 1 can be a die of memory cells.

[0026] As shown in Figure 1, each physical block 107-0, 107-1, . .
., 107-B includes a number of physical rows (e.g., 103-0, 103-1, . . ., 103-R) of memory cells coupled to access lines (e.g., word lines). The number of rows (e.g., word lines) in each physical block can be 32, but embodiments are not limited to a particular number of rows 103-0, 103-1, . . ., 103-R per physical block. Further, although not shown in Figure 1, the memory cells can be coupled to columns of sense lines (e.g., data lines and/or digit lines).

[0027] As one of ordinary skill in the art will appreciate, each row 103-0, 103-1, . . ., 103-R can include a number of pages of memory cells (e.g., physical pages). A physical page refers to a unit of programming and/or sensing (e.g., a number of memory cells that are programmed and/or sensed together as a functional group). In the embodiment shown in Figure 1, each row 103-0, 103-1, . . ., 103-R comprises one physical page of memory cells. However, embodiments of the present disclosure are not so limited. For instance, in an embodiment, each row can comprise multiple physical pages of memory cells (e.g., one or more even pages of memory cells coupled to even-numbered data lines, and one or more odd pages of memory cells coupled to odd-numbered data lines). Additionally, for embodiments including multilevel cells, a physical page of memory cells can store multiple pages (e.g., logical pages) of data (e.g., an upper page of data and a lower page of data, with each cell in a physical page storing one or more bits towards an upper page of data and one or more bits towards a lower page of data).

[0028] As shown in Figure 1, a page of memory cells can comprise a number of physical sectors 105-0, 105-1, . . ., 105-S (e.g., subsets of memory cells). Each physical sector 105-0, 105-1, . . ., 105-S of cells can store a number of logical sectors of data. Additionally, each logical sector of data can correspond to a portion of a particular page of data.
As an example, a first logical sector of data stored in a particular physical sector can correspond to a logical sector corresponding to a first page of data, and a second logical sector of data stored in the particular physical sector can correspond to a second page of data. Each physical sector 105-0, 105-1, . . ., 105-S, can store system and/or user data, and/or can include overhead data, such as error correction code (ECC) data, logical block address (LBA) data, and metadata.

[0029] Logical block addressing is a scheme that can be used by a host for identifying a logical sector of data. For example, each logical sector can correspond to a unique logical block address (LBA). Additionally, an LBA may also correspond (e.g., dynamically map) to a physical address, such as a physical block address (PBA), that may indicate the physical location of that logical sector of data in the memory. A logical sector of data can be a number of bytes of data (e.g., 256 bytes, 512 bytes, 1,024 bytes, or 4,096 bytes). However, embodiments are not limited to these examples.

[0030] It is noted that other configurations for the physical blocks 107-0, 107-1, . . ., 107-B, rows 103-0, 103-1, . . ., 103-R, sectors 105-0, 105-1, . . ., 105-S, and pages are possible. For example, rows 103-0, 103-1, . . ., 103-R of physical blocks 107-0, 107-1, . . ., 107-B can each store data corresponding to a single logical sector which can include, for example, more or less than 512 bytes of data.

[0031] Figure 2 is a block diagram of a computing system 200 including a host 202 and an apparatus in the form of a memory device 206 in accordance with an embodiment of the present disclosure. As used herein, an “apparatus” can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example.
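The logical block addressing scheme described in paragraph [0029] above can be sketched in a few lines of Python. The mapping table and sector size below are illustrative assumptions, not values from the disclosure; the point is only that an LBA used by the host resolves, through a dynamic map, to a physical block address indicating where the sector actually resides.

```python
# Hypothetical LBA-to-PBA mapping table; real firmware maintains and
# updates this mapping dynamically as data is moved.
SECTOR_SIZE = 512  # bytes per logical sector, one of the sizes noted above

lba_to_pba = {0: 112, 1: 7, 2: 45}

def physical_address(lba: int) -> int:
    """Resolve a logical sector to its physical byte offset."""
    return lba_to_pba[lba] * SECTOR_SIZE
```

For instance, `physical_address(1)` locates logical sector 1 at physical block 7, i.e. byte offset 3584 under the assumed 512-byte sector size.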
Further, in an embodiment, computing system 200 can include a number of memory devices analogous to memory device 206.

[0032] In the embodiment illustrated in Figure 2, memory device 206 can include a memory 216 having a memory array 201. Memory array 201 can be analogous to memory array 101 previously described in connection with Figure 1. Further, memory array 201 can be a secure array, as will be further described herein. Although one memory array 201 is illustrated in Figure 2, memory 216 can include any number of memory arrays analogous to memory array 201.

[0033] As illustrated in Figure 2, host 202 can be coupled to the memory device 206 via interface 204. Host 202 and memory device 206 can communicate (e.g., send commands and/or data) on interface 204. Host 202 and/or memory device 206 can be, or be part of, a laptop computer, personal computer, digital camera, digital recording and playback device, mobile telephone, PDA, memory card reader, interface hub, or Internet of Things (IoT) enabled device, such as, for instance, an automotive (e.g., vehicular and/or transportation infrastructure) IoT enabled device or a medical (e.g., implantable and/or health monitoring) IoT enabled device, among other host systems, and can include a memory access device (e.g., a processor). One of ordinary skill in the art will appreciate that “a processor” can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc.

[0034] Interface 204 can be in the form of a standardized physical interface. For example, when memory device 206 is used for information storage in computing system 200, interface 204 can be a serial advanced technology attachment (SATA) physical interface, a peripheral component interconnect express (PCIe) physical interface, a universal serial bus (USB) physical interface, or a small computer system interface (SCSI), among other physical connectors and/or interfaces.
In general, however, interface 204 can provide an interface for passing control, address, information (e.g., data), and other signals between memory device 206 and a host (e.g., host 202) having compatible receptors for interface 204.

[0035] Memory device 206 includes controller 208 to communicate with host 202 and with memory 216 (e.g., memory array 201). For instance, controller 208 can send commands to perform operations on memory array 201, including operations to sense (e.g., read), program (e.g., write), move, and/or erase data, among other operations.

[0036] Controller 208 can be included on the same physical device (e.g., the same die) as memory 216. Alternatively, controller 208 can be included on a separate physical device that is communicatively coupled to the physical device that includes memory 216. In an embodiment, components of controller 208 can be spread across multiple physical devices (e.g., some components on the same die as the memory, and some components on a different die, module, or board) as a distributed controller.

[0037] Host 202 can include a host controller (not shown in Figure 2) to communicate with memory device 206. The host controller can send commands to memory device 206 via interface 204. The host controller can communicate with memory device 206 and/or the controller 208 on the memory device 206 to read, write, and/or erase data, among other operations. Further, in an embodiment, host 202 can be an IoT enabled device, as previously described herein, having IoT communication capabilities.

[0038] Controller 208 on memory device 206 and/or the host controller on host 202 can include control circuitry and/or logic (e.g., hardware and firmware). In an embodiment, controller 208 on memory device 206 and/or the host controller on host 202 can be an application specific integrated circuit (ASIC) coupled to a printed circuit board including a physical interface.
Also, memory device 206 and/or host 202 can include a buffer of volatile and/or non-volatile memory and a number of registers.

[0039] For example, as shown in Figure 2, memory device 206 can include circuitry 210. In the embodiment illustrated in Figure 2, circuitry 210 is included in controller 208. However, embodiments of the present disclosure are not so limited. For instance, in an embodiment, circuitry 210 may be included in (e.g., on the same die as) memory 216 (e.g., instead of in controller 208). Circuitry 210 can comprise, for instance, hardware, firmware, and/or software.

[0040] Circuitry 210 can generate a block 220 in a block chain for validating (e.g., authenticating and/or attesting) the data stored in memory 216 (e.g., in memory array 201). The block 220 can include a cryptographic hash of (e.g., a link to) the previous block in the block chain, and a cryptographic hash of (e.g., identifying) the data stored in memory array 201. The block 220 can also include a header having a timestamp indicating when the block was generated. Further, the block 220 can have a digital signature associated therewith that indicates the block is included in the block chain. An example illustrating such a block will be further described herein (e.g., in connection with Figure 3).

[0041] As used herein, a “block in a block chain”, such as, for instance, block 220 illustrated in Figure 2, can include data (e.g., payload), headers, timestamps, history, etc. However, as used herein, a block in a block chain does not have to equate to the size of a block of memory as described previously in connection with Figure 1.
For instance, a block in a block chain may be smaller, equivalent, and/or larger than a block size denomination of a particular memory associated with an architecture or designation.

[0042] The cryptographic hash of the data stored in memory array 201, and/or the cryptographic hash of the previous block in the block chain, can comprise, for instance, a SHA-256 cryptographic hash. Further, the cryptographic hash of the data stored in memory array 201, and the cryptographic hash of the previous block in the block chain, can each respectively comprise 256 bytes of data.

[0043] The cryptographic hash of the data stored in memory array 201 can be generated (e.g., calculated), for example, by circuitry 210. In such an example, the cryptographic hash of the data stored can be internally generated by memory device 206 without having external data moving on interface 204. As an additional example, the cryptographic hash of the data can be communicated from an external entity. For instance, host 202 can generate the cryptographic hash of the data stored in memory array 201, and send the generated cryptographic hash to memory device 206 (e.g., circuitry 210 can receive the cryptographic hash of the data stored in memory array 201 from host 202).

[0044] The digital signature associated with the block 220 can be generated (e.g., calculated), for example, by circuitry 210 based on (e.g., responsive to) an external command, such as a command received from host 202. For instance, the digital signature can be generated using symmetric or asymmetric cryptography. As an additional example, host 202 can generate the digital signature, and send (e.g., provide) the generated digital signature to memory device 206 (e.g., circuitry 210 can receive the digital signature from host 202).

[0045] As shown in Figure 2, the block 220, as well as the digital signature associated with block 220, can be stored in memory array 201.
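The block layout described above — a SHA-256 hash of the previous block, a SHA-256 hash of the memory contents, a timestamped header, and an associated digital signature — can be sketched in Python. This is a minimal illustration, not the disclosed implementation: the field names and device key are assumptions, and an HMAC stands in for the symmetric or asymmetric signature mentioned in paragraph [0044].

```python
import hashlib
import hmac
import time

SECRET_KEY = b"device-secret-key"  # hypothetical device key, not from the disclosure

def make_block(prev_block_hash: bytes, memory_data: bytes) -> dict:
    """Assemble one block: previous-block hash, hash of the stored data,
    a timestamped header, and a signature tying the block to the chain."""
    data_hash = hashlib.sha256(memory_data).digest()
    timestamp = int(time.time())
    # Sign the fields that link the block into the chain.
    payload = prev_block_hash + data_hash + timestamp.to_bytes(8, "big")
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return {
        "prev_hash": prev_block_hash,
        "data_hash": data_hash,
        "timestamp": timestamp,
        "signature": signature,
    }

block_220 = make_block(b"\x00" * 32, b"example memory contents")
```

The sketch uses a 32-byte SHA-256 digest per hash; the claims recite 256 bytes per hash, which the same structure accommodates with a different digest encoding.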
For example, the block 220 can be stored in a portion of memory array 201 that is inaccessible to a user of memory device 206 and/or host 202 (e.g., in a “hidden” region of memory array 201). Storing the block 220 in memory array 201 can simplify the storage of the block by, for example, removing the need for software storage management for the block.

[0046] In an embodiment, memory array 201 (e.g., a subset of array 201, or the whole array 201) can be a secure array (e.g., an area of memory 216 to be kept under control). For example, the data stored in memory array 201 can include sensitive (e.g., non-user) data, such as host firmware and/or code to be executed for sensitive applications. In such an embodiment, a pair of non-volatile registers can be used to define the secure array. For example, in the embodiment illustrated in Figure 2, circuitry 210 includes registers 214-1 and 214-2 that can be used to define the secure array. For instance, register 214-1 can define the address (e.g., the starting LBA of the data) of the secure array, and register 214-2 can define the size (e.g., the ending LBA of the data) of the secure array. An example of such registers, and their use in defining a secure array, will be further described herein (e.g., in connection with Figures 4A-4B). Once the secure array has been defined, circuitry 210 can generate (e.g., calculate) a cryptographic hash associated with the secure array, which may be referred to herein as a golden hash, using authenticated and anti-replay protected commands (e.g., so that only memory device 206 knows the golden hash, and only memory device 206 is capable of generating and updating it).
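The register pair that defines the secure array, and the golden hash computed over it, can be sketched as follows. This is an illustrative model under stated assumptions: the register names and the byte-addressed view of the array are hypothetical, and the authenticated, anti-replay command protocol described above is not modeled.

```python
import hashlib

def compute_golden_hash(memory: bytes, addr_register: int, size_register: int) -> bytes:
    """Hash only the secure region: one register holds the region's
    starting address, the other its size (cf. registers 214-1 and 214-2)."""
    secure_region = memory[addr_register : addr_register + size_register]
    return hashlib.sha256(secure_region).digest()

memory_image = bytes(range(256)) * 4   # 1 KiB of example contents
golden = compute_golden_hash(memory_image, addr_register=0x80, size_register=0x40)
```

Because the registers bound the hash to a fixed region, any later change inside that region produces a different run-time hash, which is what the comparison against the golden hash detects.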
The golden hash may be stored in an inaccessible portion of memory array 201 (e.g., the same inaccessible portion in which block 220 is stored), and can be used during the process of validating the data of the secure array, as will be further described herein.

[0047] Memory device 206 (e.g., circuitry 210) can send, via interface 204, the block 220, along with the digital signature associated with block 220, to host 202 for validation of the data stored in memory array 201. For example, circuitry 210 can sense (e.g., read) the block 220 stored in memory array 201, and send the sensed block to host 202 for validation of the data stored in array 201, responsive to a powering (e.g., a powering on and/or powering up) of memory device 206. As such, a validation of the data stored in memory array 201 can be initiated (e.g., automatically) upon the powering of memory device 206.

[0048] As an additional example, circuitry 210 can send the block 220, along with the digital signature associated with block 220, to host 202 upon an external entity, such as host 202, initiating a validation of the data stored in memory array 201. For instance, host 202 can send a command to memory device 206 (e.g., circuitry 210) to sense the block 220, and circuitry 210 can execute the command to sense the block 220, and send the sensed block to host 202 for validation of the data stored in array 201, responsive to receipt of the command.

[0049] Upon receiving the block 220, host 202 can validate (e.g., determine whether to validate) the data stored in memory array 201 using the received block. For example, host 202 can use the cryptographic hash of the previous block in the block chain and the cryptographic hash of the data stored in memory array 201 to validate the data. Further, host 202 can validate the digital signature associated with the block 220 to determine the block is included (e.g., is eligible to be included) in the block chain.
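The host-side validation just described — checking the digital signature to confirm the block belongs to the chain, then comparing the block's data hash against a freshly computed hash of the memory contents — can be sketched as below. The shared key, field names, and HMAC signature are assumptions; the disclosure permits either symmetric or asymmetric cryptography for the signature.

```python
import hashlib
import hmac

SECRET_KEY = b"device-secret-key"  # hypothetical key shared by device and host

def validate(block: dict, memory_data: bytes) -> bool:
    """Host-side check of a received block against the memory contents."""
    # 1. Verify the signature: an invalid signature means the block
    #    is not (eligible to be) included in the block chain.
    payload = block["prev_hash"] + block["data_hash"]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, block["signature"]):
        return False
    # 2. Recompute the hash of the data; a mismatch indicates the data
    #    was altered (e.g., by hacking activity or a fault).
    return block["data_hash"] == hashlib.sha256(memory_data).digest()

data = b"example memory contents"
block = {"prev_hash": b"\x00" * 32,
         "data_hash": hashlib.sha256(data).digest()}
block["signature"] = hmac.new(
    SECRET_KEY, block["prev_hash"] + block["data_hash"], hashlib.sha256).digest()
```

With this sketch, `validate(block, data)` succeeds, while validating against tampered contents fails at step 2.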
As used herein, validating the data stored in memory array 201 can include, and/or refer to, authenticating and/or attesting that the data is genuine (e.g., is the same as originally programmed), and has not been altered by hacking activity or other unauthorized changes.[0050] In embodiments in which memory array 201 is a secure array, the golden hash previously described herein may also be used to validate the data stored in memory array 201. For example, a run-time cryptographic hash can be generated (e.g., calculated), and compared with the golden hash. If the comparison indicates the run-time and golden hashes match, it can be determined that the secure array has not been altered, and therefore the data stored therein is valid. If, however, the comparison indicates the run-time and golden hashes do not match, this may indicate that the data stored in the secure array has been changed (e.g., due to a hacker or a fault in the memory), and this can be reported to host 202.[0051] After the validation of the data stored in memory array 201, circuitry 210 can generate an additional (e.g., the next) block in the block chain for validating the data stored in memory array 201, in a manner analogous to that in which the block 220 was generated. For example, this additional block can include a cryptographic hash of block 220, which has now become the previous block in the block chain, and a new cryptographic hash of the data stored in memory array 201. Further, this additional block can include a header having a timestamp indicating when this block was generated, and can have a digital signature associated therewith that indicates this block is included in the block chain. An example illustrating such an additional block will be further described herein (e.g., in connection with Figure 3).
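The run-time comparison against the golden hash described above can be illustrated with a short sketch. This is illustrative Python only, not the disclosed circuitry; the function names are hypothetical, and SHA-256 is assumed because it is the hash function the disclosure names for the memory device.

```python
import hashlib

def generate_golden_hash(secure_array_data: bytes) -> bytes:
    # Hash of the secure array contents, computed once by the memory device
    # and kept in an inaccessible region of the array.
    return hashlib.sha256(secure_array_data).digest()

def validate_secure_array(secure_array_data: bytes, golden_hash: bytes) -> bool:
    # Generate a run-time hash and compare it with the stored golden hash;
    # a mismatch indicates the secure array has been altered.
    run_time_hash = hashlib.sha256(secure_array_data).digest()
    return run_time_hash == golden_hash
```

Any change to the secure array data produces a different run-time hash, so the comparison fails and the mismatch can be reported to the host.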
Further, in embodiments in which memory array 201 is a secure array, an additional (e.g., new) golden hash can be generated.[0052] The additional block, as well as the digital signature associated with the additional block, and the additional golden hash, can be stored in memory array 201. For example, the additional block can replace block 220 (e.g., the previous block) in memory array 201. The additional block, digital signature, and additional golden hash can then be used by host 202 to validate the data stored in memory array 201, in a manner analogous to that previously described herein for block 220. Additional blocks in the block chain can continue to be generated by circuitry 210, and used by host 202 to validate the data stored in memory array 201, in such a manner throughout the lifetime of memory device 206.[0053] The embodiment illustrated in Figure 2 can include additional circuitry, logic, and/or components not illustrated so as not to obscure embodiments of the present disclosure. For example, memory device 206 can include address circuitry to latch address signals provided over I/O connectors through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder, to access memory array 201. Further, memory device 206 can include a main memory, such as, for instance, a DRAM or SDRAM, that is separate from and/or in addition to memory array 201. An example further illustrating additional circuitry, logic, and/or components of memory device 206 will be further described herein (e.g., in connection with Figure 10).[0054] Figure 3 illustrates examples of blocks (e.g., block 320-1 and block 320-2) that can be used in a block chain for validating data stored in memory (e.g., in memory array 201 previously described in connection with Figure 2) in accordance with an embodiment of the present disclosure. Blocks 320-1 and 320-2 can be generated, for instance, using circuitry 210 previously described in connection with Figure 2.
For example, block 320-2 can be generated after block 320-1 has been used to validate the data stored in the memory (e.g., block 320-2 can be the next block in the block chain after block 320-1).[0055] As shown in Figure 3, each respective block 320-1 and 320-2 can include a header, a cryptographic hash of the previous block in the block chain, and a cryptographic hash of the data stored in the memory, in a manner analogous to block 220 previously described in connection with Figure 2. For example, block 320-1 includes header 322-1 having a timestamp indicating when block 320-1 was generated, cryptographic hash 324-1 of the previous block in the block chain, and cryptographic hash 326-1 of the data stored in the memory. Further, block 320-2 includes header 322-2 having a timestamp indicating when block 320-2 was generated, cryptographic hash 324-2 of the previous block (e.g., block 320-1) in the block chain, and a subsequent (e.g., new) cryptographic hash 326-2 of the data stored in the memory. [0056] As shown in Figure 3, cryptographic hash 326-1 of block 320-1 can be used as cryptographic hash 324-2 of block 320-2. That is, block 320-2 can include cryptographic hash 326-1 of the data stored in the memory from block 320-1 as cryptographic hash 324-2 of the previous block in the block chain.[0057] As shown in Figure 3, each respective block 320-1 and 320-2 can have a digital signature associated therewith that indicates the block is included in the block chain, in a manner analogous to block 220 previously described in connection with Figure 2.
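The block structure and chaining described above — the data hash of block 320-1 becoming the previous-block hash of block 320-2 — can be sketched as follows. This is an illustrative Python sketch, not the disclosed circuitry; the dictionary layout and function name are hypothetical.

```python
import hashlib
import time

def generate_block(prev_block_hash: bytes, memory_data: bytes) -> dict:
    # A block holds a timestamped header, the cryptographic hash of the
    # previous block in the block chain, and a fresh cryptographic hash
    # of the data currently stored in the memory array.
    return {
        "header": {"timestamp": time.time()},
        "prev_hash": prev_block_hash,
        "data_hash": hashlib.sha256(memory_data).digest(),
    }

# Chaining: the data hash of one block is reused as the previous-block
# hash of the next block generated for the chain.
block_1 = generate_block(b"\x00" * 32, b"memory contents, version 1")
block_2 = generate_block(block_1["data_hash"], b"memory contents, version 2")
```

Because each block embeds the hash carried by its predecessor, altering an earlier block's data hash breaks the link to every block generated after it.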
For example, digital signature 328-1 is associated with block 320-1, and digital signature 328-2 is associated with block 320-2.[0058] Figure 4A illustrates an example of a pair of registers 414-1 and 414-2 used to define a secure memory array in accordance with an embodiment of the present disclosure, and Figure 4B illustrates a diagram of a portion of a memory array 401 that includes a secure memory array defined using registers 414-1 and 414-2 in accordance with an embodiment of the present disclosure. Registers 414-1 and 414-2 can be, for instance, registers 214-1 and 214-2, respectively, previously described in connection with Figure 2, and secure memory array 401 can be, for instance, memory array 201 previously described in connection with Figure 2. For instance, as shown in Figure 4B, secure memory array 401 can include a number of physical blocks 407-0, 407-1, . . ., 407-B of memory cells, each including a number of physical rows 403-0, 403-1, . . ., 403-R having a number of sectors of memory cells, in a manner analogous to memory array 101 previously described in connection with Figure 1.[0059] As shown in Figure 4A, register 414-1 can define addresses of the secure array (e.g., the addresses of different portions of the secure array), and register 414-2 can define sizes of the secure array (e.g., the sizes of the different portions of the secure array).
The addresses of the secure array defined by register 414-1 can correspond to, for instance, starting points (e.g., starting LBAs) of the secure array (e.g., the starting points of the different portions of the secure array), and the sizes of the secure array defined by register 414-2 can correspond to, for instance, ending points (e.g., ending LBAs) of the secure array (e.g., the ending points of the different portions of the secure array).[0060] For example, as shown in Figure 4A, registers 414-1 and 414-2 can define N pairs of values, with each respective pair comprising an address value (e.g., addr) defined by register 414-1 and a size value (e.g., size) defined by register 414-2. For instance, in the example illustrated in Figure 4A, Pair0 comprises address value addr0 and size value size0 (e.g., Pair0 = [addr0, size0]), Pair1 comprises address value addr1 and size value size1 (e.g., Pair1 = [addr1, size1]), and so on, with PairN comprising address value addrN and size value sizeN (e.g., PairN = [addrN, sizeN]). The address value of a pair can correspond to a starting point (e.g., starting LBA) of a portion of the secure array, and the sum of the address value and the size value of that pair can correspond to the ending point (e.g., ending LBA) of that portion of the secure array. As such, the entire secure array (e.g., the portions that comprise the entire secure array) can be given by: [addr0, addr0 + size0] ∪ [addr1, addr1 + size1] ∪ ... ∪ [addrN, addrN + sizeN].[0061] The first pair whose size value defined by register 414-2 is zero can stop the definition of the secure array. For instance, in the example illustrated in Figure 4A, if the size value of Pair2 is zero, then the secure array would be given by: [addr0, addr0 + size0] ∪ [addr1, addr1 + size1].[0062] An example of a secure array defined by registers 414-1 and 414-2 (e.g., with all size values defined by register 414-2 as non-zero) is illustrated in Figure 4B.
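The pair semantics above — each portion spanning [addr, addr + size], with the first zero size value terminating the definition — can be sketched as follows (illustrative Python; the function name is hypothetical):

```python
def secure_array_ranges(pairs):
    # pairs: (addr, size) value pairs as read from registers such as
    # 414-1 / 414-2. addr is the starting LBA of a portion of the secure
    # array, and addr + size is its ending LBA. The first pair whose size
    # value is zero stops the definition of the secure array.
    ranges = []
    for addr, size in pairs:
        if size == 0:
            break
        ranges.append((addr, addr + size))  # [starting LBA, ending LBA]
    return ranges
```

The full secure array is then the union of the returned ranges, matching the [addr, addr + size] union expression given above.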
For instance, as shown in Figure 4B, the address (e.g., LBA) associated with sector 405-0 of memory array 401 is addr0, the address associated with sector 405-1 of memory array 401 is addr0 + size0, the address associated with sector 405-2 of memory array 401 is addr1, the address associated with sector 405-3 of memory array 401 is addr1 + size1, the address associated with sector 405-4 of memory array 401 is addrN, and the address associated with sector 405-5 of memory array 401 is addrN + sizeN. As such, the secure array comprises sectors (e.g., the data stored in sectors) 405-0 through 405-1, sectors 405-2 through 405-3, and sectors 405-4 through 405-5. However, the sectors of memory array 401 that are before sector 405-0, and sectors 405-1 through 405-2 of memory array 401, are not part of the secure array (e.g., the secure array comprises a subset of array 401).[0063] Figure 5 is a block diagram of an example system including a host 502 and a memory device 506 in accordance with an embodiment of the present disclosure. Host 502 and memory device 506 can be, for example, host 202 and memory device 206, respectively, previously described in connection with Figure 2.[0064] A computing device can boot in stages using layers, with each layer authenticating and loading a subsequent layer and providing increasingly sophisticated runtime services at each layer. A layer can be served by a prior layer and serve a subsequent layer, thereby creating an interconnected web of layers that builds upon lower layers and serves higher order layers. As is illustrated in Figure 5, Layer 0 (“L0”) 551 and Layer 1 (“L1”) 553 are within the host. Layer 0 551 can provide a Firmware Derivative Secret (FDS) key 552 to Layer 1 553. The FDS key 552 can describe the identity of code of Layer 1 553 and other security relevant data. In an example, a particular protocol (such as the robust internet of things (RIOT) core protocol) can use the FDS 552 to validate code of Layer 1 553 that it loads.
In an example, the particular protocol can include a device identification composition engine (DICE) and/or the RIOT core protocol. As an example, an FDS can include the Layer 1 firmware image itself, a manifest that cryptographically identifies authorized Layer 1 firmware, a firmware version number of signed firmware in the context of a secure boot implementation, and/or security-critical configuration settings for the device. A device secret 558 can be used to create the FDS 552 and be stored in memory of the host 502.[0065] The host can transmit data, as illustrated by arrow 554, to the memory device 506. The transmitted data can include an external identification that is public, a certificate (e.g., an external identification certificate), and/or an external public key. Layer 2 (“L2”) 555 of the memory device 506 can receive the transmitted data, and execute the data in operations of the operating system (“OS”) 557 and on a first application 559-1 and a second application 559-2.[0066] In an example operation, the host 502 can read the device secret 558, hash an identity of Layer 1 553, and perform a calculation including:
KL1 = KDF [Fs(s), Hash (“immutable information”)]
where KL1 is an external public key, KDF (e.g., the KDF defined in the National Institute of Standards and Technology (NIST) Special Publication 800-108) is a key derivation function (e.g., HMAC-SHA256), and Fs(s) is the device secret 558. FDS 552 can be determined by performing:
FDS = HMAC-SHA256 [Fs(s), SHA256(“immutable information”)]
Likewise, the memory device 506 can transmit data, as illustrated by arrow 556, to the host 502.[0067] Figure 6 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure.
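The FDS calculation given above maps directly onto standard library primitives. The following sketch assumes byte-string inputs and uses Python's hmac/hashlib modules; the function name is hypothetical:

```python
import hashlib
import hmac

def compute_fds(device_secret: bytes, immutable_info: bytes) -> bytes:
    # FDS = HMAC-SHA256[Fs(s), SHA256("immutable information")]
    # The device secret keys the HMAC; the message is the SHA-256 digest
    # of the immutable information identifying the Layer 1 code.
    inner_hash = hashlib.sha256(immutable_info).digest()
    return hmac.new(device_secret, inner_hash, hashlib.sha256).digest()
```

The result is deterministic for a given device secret and code identity, so a changed Layer 1 image yields a different FDS, which is what lets the boot chain detect unauthorized firmware.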
Figure 6 is an example of a determination of the parameters including the external public identification, the external certificate, and the external public key that are then sent, indicated by arrow 654, to Layer 2 (e.g., Layer 2 555) of a memory device (e.g., 506 in Figure 5). Layer 0 (“L0”) 651 in Figure 6 corresponds to Layer 0 551 in Figure 5, and likewise FDS 652 corresponds to FDS 552, Layer 1 653 corresponds to Layer 1 553, and arrows 654 and 656 correspond to arrows 554 and 556, respectively.[0068] The FDS 652 from Layer 0 651 is sent to Layer 1 653 and used by an asymmetric ID generator 661 to generate a public identification (“IDlk public”) 665 and a private identification 667. In the abbreviated “IDlk public,” the “lk” indicates Layer k (in this example Layer 1), and the “public” indicates that the identification is openly shared. The public identification 665 is illustrated as shared by the arrow extending to the right and outside of Layer 1 653 of the host. The generated private identification 667 is used as a key input into an encryptor 673. The encryptor 673 can be any processor, computing device, etc., used to encrypt data.[0069] Layer 1 653 of a host can include an asymmetric key generator 663. In at least one example, a random number generator (RND) 636 can optionally input a random number into the asymmetric key generator 663. The asymmetric key generator 663 can generate a public key (“KLk public”) 669 (referred to as an external public key) and a private key (“KLk private”) 671 (referred to as an external private key) associated with a host such as host 502 in Figure 5. The external public key 669 can be an input (as “data”) into the encryptor 673. The encryptor 673 can generate a result K’ 675 using the inputs of the external private identification 667 and the external public key 669. The external private key 671 and the result K’ 675 can be input into an additional encryptor 677, resulting in output K” 679.
The output K” 679 is the external certificate (“IDL1 certificate”) 681 transmitted to Layer 2 (555 of Figure 5). The external certificate 681 can provide an ability to verify and/or authenticate an origin of data sent from a device. As an example, data sent from the host can be associated with an identity of the host by verifying the certificate, as will be described further in association with Figure 8. Further, the external public key (“KL1 public key”) 683 can be transmitted to Layer 2. Therefore, the public identification 665, the certificate 681, and the external public key 683 of a host can be transmitted to Layer 2 of a memory device.[0070] Figure 7 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure. Figure 7 illustrates a Layer 2 755 of a memory device (e.g., memory device 506 in Figure 5) generating a device identification (“IDL2 public”) 766, a device certificate (“IDL2 certificate”) 782, and a device public key (“KL2 public key”) 784.[0071] The external public key (“KL1 public key”) 783 transmitted from Layer 1 of the host to Layer 2 755 of a memory device, as described in Figure 6, is used by an asymmetric ID generator 762 of the memory device to generate a public identification (“IDlk public”) 766 and a private identification 768 of the memory device. In the abbreviated “IDlk public,” the “lk” indicates Layer k (in this example Layer 2), and the “public” indicates that the identification is openly shared. The public identification 766 is illustrated as shared by the arrow extending to the right and outside Layer 2 755. The generated private identification 768 is used as a key input into an encryptor 774.[0072] As shown in Figure 7, the external certificate 781 and public identification 765, along with the external public key 783, are used by a certificate verifier 799.
The certificate verifier 799 can verify the external certificate 781 received from a host, and determine, in response to the external certificate 781 being verified or not being verified, whether to accept or discard data received from the host. Further details of verifying the external certificate 781 are further described herein (e.g., in connection with Figure 8).[0073] Layer 2 755 of the memory device can include an asymmetric key generator 764. In at least one example, a random number generator (RND) 738 can optionally input a random number into the asymmetric key generator 764. The asymmetric key generator 764 can generate a public key (“KLk public”) 770 (referred to as a device public key) and a private key (“KLk private”) 772 (referred to as a device private key) associated with a memory device such as memory device 506 in Figure 5. The device public key 770 can be an input (as “data”) into the encryptor 774. The encryptor 774 can generate a result K’ 776 using the inputs of the device private identification 768 and the device public key 770. The device private key 772 and the result K’ 776 can be input into an additional encryptor 778, resulting in output K” 780. The output K” 780 is the device certificate (“IDL2 certificate”) 782 transmitted back to Layer 1 (553 of Figure 5). The device certificate 782 can provide an ability to verify and/or authenticate an origin of data sent from a device. As an example, data sent from the memory device can be associated with an identity of the memory device by verifying the certificate, as will be described further in association with Figure 8. Further, the device public key (“KL2 public key”) 784 can be transmitted to Layer 1.
Therefore, the public identification 766, the certificate 782, and the device public key 784 of the memory device can be transmitted to Layer 1 of a host.[0074] In an example, in response to a host receiving a public key from a memory device, the host can encrypt data to be sent to the memory device using the device public key. Vice versa, the memory device can encrypt data to be sent to the host using the external public key. In response to the memory device receiving data encrypted using the device public key, the memory device can decrypt the data using its own device private key. Likewise, in response to the host receiving data encrypted using the external public key, the host can decrypt the data using its own external private key. As the device private key is not shared with another device outside the memory device and the external private key is not shared with another device outside the host, the data sent to the memory device and the host remains secure.[0075] Figure 8 is a block diagram of an example process to verify a certificate in accordance with an embodiment of the present disclosure. In the illustrated example of Figure 8, a public key 883, a certificate 881, and a public identification 865 are provided from a host (e.g., from Layer 1 553 of host 502 in Figure 5). The data of the certificate 881 and the external public key 883 can be used as inputs into a decryptor 885. The decryptor 885 can be any processor, computing device, etc., used to decrypt data. The result of the decryption of the certificate 881 and the external public key 883 can be used as an input into a secondary decryptor 887, along with the public identification, resulting in an output. The external public key 883 and the output from the decryptor 887 can indicate, as illustrated at 889, whether the certificate is verified, resulting in a yes or no 891 as an output.
In response to the certificate being verified, data received from the device being verified can be accepted, decrypted, and processed. In response to the certificate not being verified, data received from the device being verified can be discarded, removed, and/or ignored. In this way, nefarious devices sending nefarious data can be detected and avoided. As an example, a hacker sending data to be processed can be identified and the hacking data not processed.[0076] Figure 9 is a block diagram of an example process to verify a signature in accordance with an embodiment of the present disclosure. In the instance where a device is sending data that may be verified in order to avoid subsequent repudiation, a signature can be generated and sent with the data. As an example, a first device may make a request of a second device, and once the second device performs the request, the first device may indicate that the first device never made such a request. An anti-repudiation approach, such as using a signature, can avoid repudiation by the first device and ensure that the second device can perform the requested task without subsequent difficulty.[0077] A memory device 906 (such as memory device 206 in Figure 2) can send data 990 to a host (such as host 202 in Figure 2). The memory device 906 can generate, at 994, a signature 996 using a device private key 971. The signature 996 can be transmitted to the host 902. The host 902 can verify, at 998, the signature using data 992 and the external public key 969 previously received. In this way, the signature is generated using a private key and verified using a public key. In this way, the private key used to generate a unique signature can remain private to the device sending the signature, while allowing the receiving device to be able to decrypt the signature using the public key of the sending device for verification.
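The sign-with-private-key / verify-with-public-key flow of Figure 9 can be illustrated with textbook RSA using toy parameters — a deliberately insecure stand-in for a real asymmetric scheme such as ECDSA, chosen only because it is self-contained; the function names and parameters are illustrative, not the disclosed process:

```python
import hashlib

# Toy textbook-RSA parameters (illustration only; a real device would use
# ECDSA or RSA with proper key sizes, padding, and randomness).
P, Q = 61, 53
N = P * Q                            # public modulus (3233)
E = 17                               # public exponent
D = pow(E, -1, (P - 1) * (Q - 1))    # private exponent, kept on the device

def sign(data: bytes) -> int:
    # The sender hashes the data and transforms the hash with its private
    # key; the private key never leaves the sending device.
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(h, D, N)

def verify(data: bytes, signature: int) -> bool:
    # The receiver undoes the transformation with the sender's public key
    # and compares the result against a hash it computes itself.
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(signature, E, N) == h
```

A valid signature verifies, while any alteration of the signature makes verification fail, which is what lets the receiving device detect tampering and resist later repudiation of the request.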
This is in contrast to encryption/decryption of the data, which is encrypted by the sending device using the public key of the receiving device and decrypted by the receiving device using the private key of the receiver. In at least one example, the device can verify the digital signature by using an internal cryptography process (e.g., Elliptical Curve Digital Signature (ECDSA)) or a similar process.[0078] Figure 10 is a block diagram of an example memory device 1006 in accordance with an embodiment of the present disclosure. Memory device 1006 can be, for example, memory device 206 previously described in connection with Figure 2.[0079] As shown in Figure 10, memory device 1006 can include a number of memory arrays 1001-1 through 1001-7. Memory arrays 1001-1 through 1001-7 can be analogous to memory array 101 previously described in connection with Figure 1. Further, in the example illustrated in Figure 10, memory array 1001-3 is a secure array, subset 1011 of memory array 1001-6 comprises a secure array, and subsets 1013 and 1015 of memory array 1001-7 comprise a secure array. Subsets 1011, 1013, and 1015 can each include, for instance, 4 kilobytes of data. However, embodiments of the present disclosure are not limited to a particular number or arrangement of memory arrays or secure arrays.[0080] As shown in Figure 10, memory device 1006 can include a remediation (e.g., recovery) block 1017. Remediation block 1017 can be used as a source of data in case of errors (e.g., mismatches) that may occur during operation of memory device 1006. Remediation block 1017 may be outside of the area of memory device 1006 that is addressable by a host.[0081] As shown in Figure 10, memory device 1006 can include a serial peripheral interface (SPI) 1004 and a controller 1008.
Memory device 1006 can use SPI 1004 and controller 1008 to communicate with a host and memory arrays 1001-1 through 1001-7, as previously described herein (e.g., in connection with Figure 2).[0082] As shown in Figure 10, memory device 1006 can include a secure register 1019 for managing the security of memory device 1006. For example, secure register 1019 can configure, and communicate externally with, an application controller. Further, secure register 1019 may be modifiable by an authentication command.[0083] As shown in Figure 10, memory device 1006 can include keys 1021. For instance, memory device 1006 can include eight different slots to store keys such as root keys, DICE-RIOT keys, and/or other external session keys.[0084] As shown in Figure 10, memory device 1006 can include an electronically erasable programmable read-only memory (EEPROM) 1023. EEPROM 1023 can provide a secure non-volatile area available for a host, in which individual bytes of data can be erased and programmed.[0085] As shown in Figure 10, memory device 1006 can include counters (e.g., monotonic counters) 1025. Counters 1025 can be used as an anti-replay mechanism (e.g., freshness generator) for commands (e.g., to sign a command set or sequence) received from and/or sent to a host. For instance, memory device 1006 can include six different monotonic counters, two of which may be used by memory device 1006 for the authenticated commands, and four of which may be used by the host.[0086] As shown in Figure 10, memory device 1006 can include an SHA-256 cryptographic hash function 1027, and/or an HMAC-SHA256 cryptographic hash function 1029. SHA-256 and/or HMAC-SHA256 cryptographic hash functions 1027 and 1029 can be used by memory device 1006 to generate cryptographic hashes, such as, for instance, the cryptographic hashes of block 220 previously described herein, and/or a golden hash used to validate the data stored in memory arrays 1001-1 through 1001-7 as previously described herein.
Further, memory device 1006 can support L0 and L1 of DICE-RIOT 1031.[0087] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of a number of embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of ordinary skill in the art upon reviewing the above description. The scope of a number of embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of a number of embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.[0088] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
An integrated magnetic film enhanced transformer and a method of forming an integrated magnetic film enhanced transformer are disclosed. The integrated magnetic film enhanced transformer includes a transformer metal having a first portion and a second portion, a top metal coupled to the transformer metal, a bottom metal coupled to the transformer metal, and an isolation film disposed between the first portion and the second portion of the transformer metal. The isolation film includes a magnetic material that can enhance a magnetic flux density B of the transformer, increase an electromotive force (EMF) of the transformer, and increase a magnetic permeability of the transformer.
Docket No. 072383 PATENT

WHAT IS CLAIMED IS:

1. A transformer comprising: a transformer metal having a first portion and a second portion; and an isolation film interposing the first portion and the second portion of the transformer metal, wherein the isolation film includes a magnetic material.

2. The transformer according to claim 1, wherein the magnetic material is a magnetic film.

3. The transformer according to claim 1, further comprising: a top metal coupled to the transformer metal; and a bottom metal coupled to the transformer metal.

4. The transformer according to claim 3, wherein a via interconnect couples the top metal to the transformer metal.

5. The transformer according to claim 3, wherein a via interconnect couples the bottom metal to the transformer metal.

6. The transformer according to claim 1, wherein the transformer is a cross comb type planar transformer.

7. The transformer according to claim 1, wherein the transformer is a serpent type planar self coupling transformer.

8. The transformer according to claim 1, wherein the transformer is a circular type planar self coupling transformer.

9. The transformer according to claim 1, wherein the transformer is a three-dimensional circular self coupling transformer.

10. The transformer according to claim 9, wherein the magnetic material extends along a longitudinal axis of the three-dimensional circular self coupling transformer.

11. The transformer according to claim 1 integrated in at least one semiconductor die.

12. The transformer according to claim 1, further comprising an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the transformer is integrated.

13.
A transformer comprising: a substrate; a transformer metal having a plurality of turns formed on the substrate; and a magnetic material disposed between adjacent portions of the plurality of turns of the transformer metal.

14. The transformer according to claim 13, wherein the magnetic material is a magnetic film.

15. The transformer according to claim 13, further comprising: a top metal coupled to the transformer metal; and a bottom metal coupled to the transformer metal.

16. The transformer according to claim 13 integrated in at least one semiconductor die.

17. The transformer according to claim 13, further comprising an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the transformer is integrated.

18. An integrated magnetic film enhanced three-dimensional circular self coupling transformer comprising: a transformer metal having a plurality of top metal portions, a plurality of via interconnects, and a plurality of bottom metal portions extending along a longitudinal axis of the transformer; and a magnetic material disposed between adjacent portions of the transformer metal.

19. The transformer according to claim 18, wherein the magnetic material is a magnetic film.

20. The transformer according to claim 18, wherein the magnetic material extends along the longitudinal axis of the transformer.

21. The transformer according to claim 18, wherein the plurality of top metal portions, the plurality of via interconnects, and the plurality of bottom metal portions form a plurality of U-shaped, interconnected elements extending along the longitudinal axis of the transformer.

22.
The transformer according to claim 18, wherein a first top metal of the plurality of top metal portions is coupled in series to a first end of a first via interconnect of the plurality of via interconnects, wherein a second end of the first via interconnect is coupled in series to a first end of a first bottom metal of the plurality of bottom metal portions, wherein a second end of the first bottom metal is coupled in series to a first end of a second via interconnect of the plurality of via interconnects, and wherein a second end of the second via interconnect is coupled in series to a first end of a second top metal of the plurality of top metal portions.

23. The transformer according to claim 22, wherein the first top metal is parallel to the second top metal, wherein the first via interconnect is parallel to the second via interconnect, wherein the first via interconnect and the second via interconnect are perpendicular to the first top metal and the second top metal, wherein the first via interconnect and the second via interconnect are perpendicular to the first bottom metal, and wherein the first bottom metal is perpendicular to each of the first via interconnect, the second via interconnect, the first top metal, and the second top metal.

24. The transformer according to claim 18 integrated in at least one semiconductor die.

25. The transformer according to claim 18, further comprising an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the transformer is integrated.

26.
A method of forming a transformer, the method comprising: depositing and patterning a transformer metal having a first portion and a second portion; and depositing and patterning a magnetic material between the first portion and the second portion of the transformer metal. 27. The method according to claim 26, wherein the magnetic material is a magnetic film. 28. The method according to claim 26, wherein the magnetic material is a shape anisotropic magnetic film. 29. The method according to claim 26, further comprising: forming a bottom metal coupled to the transformer metal; and forming a top metal coupled to the transformer metal. 30. The method according to claim 26, wherein a thickness of the magnetic material is selected to reduce an eddy current and a skin effect inside the magnetic material to reduce magnetic field loss. 31. The method according to claim 26, further comprising: performing a magnetic anneal process to align a magnetic field axis of the transformer along an easy axis of the magnetic material. 32. The method according to claim 26, further comprising: depositing a cap layer on the transformer metal to self-align the magnetic material between the first portion and the second portion of the transformer metal. 33. The method according to claim 26, wherein the transformer is applied in an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the transformer is integrated. 34.
A method of forming a transformer, the method comprising: depositing and patterning a bottom metal using a metal deposit/photo/etching process; depositing a first inter layer dielectric (ILD) on the bottom metal and performing a chemical mechanical planarization (CMP) process on the inter layer dielectric; depositing a bottom cap film on the first inter layer dielectric (ILD); depositing a transformer metal on the bottom cap film and patterning the transformer metal using a photo/etching process; depositing a top cap film above the transformer metal and performing a chemical mechanical planarization (CMP) process on the top cap film; performing a photo/etching process to the top cap film to form a plurality of holes between portions of the transformer metal; depositing a magnetic material over the top cap film and the plurality of holes and etching the magnetic material back to a top of the top cap film such that the magnetic material is interposed between the portions of the transformer metal; depositing a second inter layer dielectric (ILD) above the magnetic material and performing a chemical mechanical planarization (CMP) process on the second inter layer dielectric (ILD); and performing a vertical magnetic anneal to align a magnetic field axis of the transformer along an easy axis of the magnetic material. 35. The method according to claim 34, further comprising: patterning a first via opening in the first inter layer dielectric (ILD) using a photo/etching process and filling the first via opening with a metal to form a bottom via interconnect that couples the transformer metal to the bottom metal. 36.
The method according to claim 34, further comprising: depositing and patterning a top metal above the second inter layer dielectric (ILD) using a metal deposit/photo/etching process; and patterning a second via opening in the second inter layer dielectric (ILD) using a photo/etching process and filling the second via opening with a metal to form a top via interconnect that couples the transformer metal to the top metal. 37. The method according to claim 34, wherein the magnetic material is a magnetic film. 38. The method according to claim 34, wherein the magnetic material is a shape anisotropic magnetic film. 39. The method according to claim 34, wherein a thickness of the magnetic material is selected to reduce an eddy current and a skin effect inside the magnetic material to reduce magnetic field loss. 40. The method according to claim 34, wherein the transformer is applied in an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the transformer is integrated. 41.
A method of forming a transformer, the method comprising: depositing and patterning a bottom metal using a dual damascene process; depositing a first inter layer dielectric (ILD) on the bottom metal; depositing a bottom cap film on the first inter layer dielectric (ILD); depositing a second inter layer dielectric (ILD) on the bottom cap film; forming a plurality of trenches in the second inter layer dielectric (ILD) using photolithography and etching techniques; plating a copper layer over at least the plurality of trenches and polishing the copper layer down to the surface of the second inter layer dielectric (ILD) to form a transformer metal; depositing a top cap film above the second inter layer dielectric (ILD) and the transformer metal; forming a plurality of holes in the top cap film and the second inter layer dielectric (ILD) using photolithography and etching techniques; depositing a magnetic material layer over at least the plurality of holes; depositing a third inter layer dielectric (ILD) above the magnetic material and performing a chemical mechanical planarization (CMP) process on the third inter layer dielectric (ILD); and performing a vertical magnetic anneal to align a magnetic field axis of the transformer along an easy axis of the magnetic material. 42. The method according to claim 41, further comprising: patterning a first via opening in the first inter layer dielectric (ILD) using a damascene process and filling the first via opening with a metal to form a bottom via interconnect that couples the transformer metal to the bottom metal. 43.
The method according to claim 41, further comprising: depositing and patterning a top metal above the third inter layer dielectric (ILD) using a damascene process; and patterning a second via opening in the third inter layer dielectric (ILD) using a damascene process and filling the second via opening with a metal to form a top via interconnect that couples the transformer metal to the top metal. 44. The method according to claim 41, wherein the magnetic material is a magnetic film. 45. The method according to claim 41, wherein the magnetic material is a shape anisotropic magnetic film. 46. The method according to claim 41, wherein a thickness of the magnetic material is selected to reduce an eddy current and a skin effect inside the magnetic material to reduce magnetic field loss. 47. The method according to claim 41, wherein the transformer is applied in an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the transformer is integrated. 48. A transformer comprising: transformer means for generating a magnetic field, the transformer means having a first portion and a second portion; and isolating means for magnetically isolating the first portion and the second portion of the transformer means, the isolating means interposing the first portion and the second portion of the transformer means, wherein the isolating means includes a magnetic material. 49. The transformer according to claim 48, wherein the magnetic material is a magnetic film. 50. The transformer according to claim 48, further comprising: top metal means for electrically connecting the transformer coupled to the transformer means; and bottom metal means for electrically connecting the transformer coupled to the transformer means. 51.
The transformer according to claim 50, further comprising: first via interconnecting means for coupling the top metal means to the transformer means. 52. The transformer according to claim 50, further comprising: second via interconnecting means for coupling the bottom metal means to the transformer means. 53. The transformer according to claim 48, wherein the transformer is a cross comb type planar transformer. 54. The transformer according to claim 48, wherein the transformer is a serpent type planar self coupling transformer. 55. The transformer according to claim 48, wherein the transformer is a circular type planar self coupling transformer. 56. The transformer according to claim 48, wherein the transformer is a three-dimensional circular self coupling transformer. 57. The transformer according to claim 56, wherein the magnetic material extends along a longitudinal axis of the three-dimensional circular self coupling transformer. 58. The transformer according to claim 48 integrated in at least one semiconductor die. 59. The transformer according to claim 48, further comprising an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the transformer is integrated. 60. A method of forming a transformer, the method comprising: step for depositing and patterning a transformer metal having a first portion and a second portion; and step for depositing and patterning a magnetic material between the first portion and the second portion of the transformer metal. 61. The method according to claim 60, wherein the magnetic material is a magnetic film. 62. The method according to claim 60, wherein the magnetic material is a shape anisotropic magnetic film. 63.
The method according to claim 60, further comprising: step for forming a bottom metal coupled to the transformer metal; and step for forming a top metal coupled to the transformer metal. 64. The method according to claim 60, wherein a thickness of the magnetic material is selected to reduce an eddy current and a skin effect inside the magnetic material to reduce magnetic field loss. 65. The method according to claim 60, further comprising: performing a magnetic anneal process to align a magnetic field axis of the transformer along an easy axis of the magnetic material. 66. The method according to claim 60, further comprising: step for depositing a cap layer on the transformer metal to self-align the magnetic material between the first portion and the second portion of the transformer metal. 67. The method according to claim 60, wherein the transformer is applied in an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the transformer is integrated.
Docket No. 072383 PATENT MAGNETIC FILM ENHANCED TRANSFORMER Field of Disclosure [0001] Disclosed embodiments are related to transformers, and methods of forming transformers. More particularly, the embodiments are related to integrated magnetic film enhanced transformers, and methods of forming integrated magnetic film enhanced transformers. Background [0002] A transformer is a device that transfers electrical energy from one circuit to another circuit through a shared magnetic field. The transformer is based on two principles: first, that an electric current can produce a magnetic field (electromagnetism) and, second, that a changing magnetic field within a coil of wire induces a voltage across the ends of the coil (electromagnetic induction). By changing the current in the primary coil, the strength of the magnetic field is changed. Since the secondary coil is wrapped around the same magnetic field, a voltage is induced across the secondary coil. By adding a load to the secondary circuit, one can make current flow in the second circuit, thus transferring energy from one circuit to the other circuit. [0003] FIGS. 1 and 2 illustrate schematic views of a simplified transformer design and circuit. In operation, a current passing through the primary coil creates a magnetic field. The primary and secondary coils are wrapped around a core of very high magnetic permeability, such as iron, which ensures that most of the magnetic field lines produced by the primary current are within the iron and pass through the secondary coil as well as the primary coil. [0004] Such transformers can be integrated into a logic/RF CMOS process by utilizing standard CMOS back-end process steps, e.g., metal deposition, dielectric deposition, and metal patterning in the CMOS foundry. The conventional logic/RF process commonly uses an oxide or a low-k oxide as an isolation film. By integrating a transformer into a logic/RF process, a DC-DC converter and power transfer can be provided. 
[0005] FIG. 3 shows a top view of a conventional cross comb type planar transformer structure. FIG. 4 shows a cross-sectional view of the cross comb type planar transformer structure illustrated in FIG. 3. The transformer can include, for example, a cross comb type transformer metal 310 connected to a bottom metal 312 by a via interconnect 314. As shown in FIG. 4, the conventional transformer commonly uses an oxide or a low-k oxide as an isolation film 308 and/or a high-k cap film 320. SUMMARY [0006] Exemplary embodiments are directed to transformers, and methods of forming transformers. More particularly, the embodiments are related to integrated magnetic film enhanced transformers, and methods of forming integrated magnetic film enhanced transformers. [0008] For example, an exemplary embodiment is directed to an integrated magnetic film enhanced transformer including a transformer metal having a first portion and a second portion, a top metal coupled to the transformer metal, a bottom metal coupled to the transformer metal, and an isolation film interposing the first portion and the second portion of the transformer metal. The isolation film includes a magnetic material. [0009] In another exemplary embodiment, a transformer can include a substrate, a transformer metal having a plurality of turns formed on the substrate, a top metal coupled to the transformer metal, a bottom metal coupled to the transformer metal, and a magnetic material disposed between adjacent portions of the plurality of turns of the transformer metal.
[0010] In yet another exemplary embodiment, an integrated magnetic film enhanced three-dimensional circular self coupling transformer can include a transformer metal having a plurality of top metal portions, a plurality of via interconnects, and a plurality of bottom metal portions extending along a longitudinal axis of the transformer, and a magnetic material disposed between adjacent portions of the transformer metal. [0011] Another exemplary embodiment is directed to a method of forming an integrated magnetic film enhanced transformer. The method can include forming a bottom metal, depositing and patterning a transformer metal having a first turn and a second turn, and coupled to the bottom metal, depositing and patterning a magnetic material between the first turn and the second turn of the transformer metal, and forming a top metal coupled to the transformer metal. [0012] Another exemplary embodiment is directed to a method of forming an integrated magnetic film enhanced transformer. The method can include depositing and patterning a bottom metal using a metal deposit/photo/etching process, depositing a first inter layer dielectric (ILD) on the bottom metal and performing a chemical mechanical planarization (CMP) process on the inter layer dielectric, depositing a bottom cap film on the first inter layer dielectric (ILD), depositing a transformer metal on the bottom cap film and patterning the transformer metal using a photo/etching process, depositing a top cap film above the transformer metal and performing a chemical mechanical planarization (CMP) process on the top cap film, performing a photo/etching process to the top cap film to form a plurality of holes between portions of the transformer metal, depositing a magnetic material over the top cap film and the plurality of holes and etching the magnetic material back to a top of the top cap film such that the magnetic material is interposed between the portions of the transformer metal, depositing a second inter layer dielectric (ILD) above the magnetic material and performing a chemical mechanical planarization (CMP) process on the second inter layer dielectric (ILD), and performing a vertical magnetic anneal to align a magnetic field axis of the transformer along an easy axis of the magnetic material. [0013] Another exemplary embodiment is directed to a method of forming an integrated magnetic film enhanced transformer.
The method can include depositing and patterning a bottom metal using a dual damascene process, depositing a first inter layer dielectric (ILD) on the bottom metal, depositing a bottom cap film on the first inter layer dielectric (ILD), depositing a second inter layer dielectric (ILD) on the bottom cap film, forming a plurality of trenches in the second inter layer dielectric (ILD) using photolithography and etching techniques, plating a copper layer over at least the plurality of trenches and polishing the copper plating layer down to the surface of the second inter layer dielectric (ILD) to form a transformer metal, depositing a top cap film above the second inter layer dielectric (ILD) and the transformer metal, forming a plurality of holes in the top cap film and the second inter layer dielectric (ILD) using photolithography and etching techniques, depositing a magnetic material layer over at least the plurality of holes, depositing a third inter layer dielectric (ILD) above the magnetic material and performing a chemical mechanical planarization (CMP) process on the third inter layer dielectric (ILD), and performing a vertical magnetic anneal to align a magnetic field axis of the transformer along an easy axis of the magnetic material. [0014] Another exemplary embodiment is directed to a transformer including transformer means for generating a magnetic field, the transformer means having a first portion and a second portion, and isolating means for magnetically isolating the first portion and the second portion of the transformer means, the isolating means interposing the first portion and the second portion of the transformer means, wherein the isolating means includes a magnetic material. [0015] Another exemplary embodiment is directed to a method of forming a transformer.
The method can include a step for depositing and patterning a transformer metal having a first portion and a second portion, and a step for depositing and patterning a magnetic material between the first portion and the second portion of the transformer metal. BRIEF DESCRIPTION OF THE DRAWINGS [0016] The accompanying drawings are presented to aid in the description of embodiments and are provided solely for illustration of the embodiments and not limitation thereof. [0017] FIG. 1 is a perspective schematic view of a conventional transformer. [0018] FIG. 2 is a schematic of a conventional transformer circuit. [0019] FIG. 3 is a top plan view of a conventional cross comb type planar transformer structure. [0020] FIG. 4 is a cross-sectional view taken along line 4Y-4Y' in FIG. 3. [0021] FIG. 5 is a top plan view of a cross comb type planar transformer structure. [0022] FIG. 6 is a cross-sectional view taken along line 6Y-6Y' in FIG. 5. [0023] FIG. 7 is a top plan view of a serpent type planar self coupling transformer structure. [0024] FIG. 8 is a cross-sectional view taken along line 8Y-8Y' in FIG. 7. [0025] FIG. 9 is a top plan view of a circular type planar self coupling planar transformer. [0026] FIG. 10 is a cross-sectional view taken along line 10Y-10Y' in FIG. 9. [0027] FIG. 11 is a perspective view of a three-dimensional circular self coupling transformer. [0028] FIG. 12 is a cross-sectional view taken along line 11Y-11Y' in FIG. 11. [0029] FIG. 13 is a perspective view of a three-dimensional circular self coupling transformer. [0030] FIG. 14 is a flow diagram illustrating a method of forming a transformer. [0031] FIG. 15 is a flow diagram illustrating a method of forming a transformer. DETAILED DESCRIPTION [0032] Aspects of the embodiments are disclosed in the following description and related drawings directed to such embodiments. Alternate embodiments may be devised without departing from the scope of the invention.
Additionally, well-known elements used and applied in the embodiments will not be described in detail or will be omitted so as not to obscure the relevant details. [0033] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments" does not require that all embodiments include the discussed feature, advantage or mode of operation. [0034] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used herein, specify the presence of stated features, integers, blocks, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, blocks, steps, operations, elements, components, and/or groups thereof. [0035] The disclosed exemplary embodiments are directed to transformers, and methods of forming transformers. More particularly, the embodiments are related to integrated magnetic film enhanced transformers, and methods of forming integrated magnetic film enhanced transformers. Such integrated magnetic film enhanced transformers can be used, for example, for DC-DC converters, power transfer, system on chip (SoC) with analog application, etc. [0036] Increasing transformer efficiency and reducing size is important for circuit design and integration. Conventionally, an oxide layer is used as an isolation film, which results in a reduction in efficiency and an increase in size of the transformer. 
The disclosed embodiments implement a magnetic material, such as a magnetic film (e.g., a ferromagnetic film such as CoFe, CoFeB, or NiFe, etc.), instead of an oxide layer. The magnetic film increases the permeability, which greatly enhances the magnetic flux density B and the electromotive force (EMF) of the transformer. For a given EMF of the transformer, the implementation of the magnetic film can reduce the size of the transformer and/or improve the transformer efficiency. [0037] The disclosed embodiments recognize that the EMF of the transformer is proportional to the magnetic flux density B, number of turns N, and cross-section a. The magnetic flux density B is proportional to the magnetic permeability. By using a magnetic film instead of oxide, the permeability can be increased, for example, by about one hundred to one thousand times. The EMF of the transformer also can be increased by, for example, the same or similar amount. For a given EMF of the transformer, the implementation of the magnetic film can reduce the size of the transformer and/or improve the transformer efficiency. [0038] According to the exemplary embodiments, the transformer efficiency can be improved and the size of the transformer can be reduced. [0039] With reference to FIGS. 1-15, exemplary embodiments of an integrated magnetic film enhanced transformer, and methods of forming an integrated magnetic film enhanced transformer, will now be described. [0040] With reference again to the schematic of a transformer and circuit in FIGS. 1 and 2, one of ordinary skill in the art will recognize that a current passing through the primary coil creates a magnetic field. The primary and secondary coils are wrapped around a core of a material having very high magnetic permeability, such as iron. This ensures that most of the magnetic field lines produced by the primary current travel within the iron and pass through the secondary coil as well as the primary coil.
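The scaling argument in paragraphs [0036] and [0037] lends itself to a quick numeric illustration. The sketch below is not part of the disclosure; the proportionality constant k and all values are hypothetical, and it assumes only what the text states, namely that the EMF scales with the permeability and the core cross-section:

```python
# Illustrative sketch (hypothetical values): EMF is proportional to B*N*a and
# B is proportional to mu, so for a fixed target EMF and turn count, the
# required cross-section a scales as 1/mu_r. A magnetic film raising mu_r by
# ~100x therefore permits a ~100x smaller cross-section for the same EMF.

def required_cross_section(target_emf, mu_r, turns, k=1.0):
    # Solve k * mu_r * turns * a == target_emf for a.
    return target_emf / (k * mu_r * turns)

a_oxide = required_cross_section(target_emf=1.0, mu_r=1, turns=10)    # oxide isolation
a_film = required_cross_section(target_emf=1.0, mu_r=100, turns=10)   # magnetic film

assert abs(a_oxide / a_film - 100.0) < 1e-9  # same EMF at roughly 1/100 the area
```

The same arithmetic read the other way gives the efficiency claim: at a fixed size, the higher-permeability isolation yields a proportionally larger EMF.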
[0041] The voltage induced across the secondary and primary coils can be calculated from Faraday's law of induction, which states that: [0042] Vs = Ns·(dΦ/dt), Vp = Np·(dΦ/dt), so that Vs/Vp = Ns/Np, [0043] where Vs or Vp is the instantaneous voltage in the secondary or primary coil, respectively, Ns or Np is the number of turns in the secondary or primary coil, and Φ equals the total magnetic flux through one turn of the coil. If the turns of the coil are oriented perpendicular to the magnetic field lines, the flux is the product of the magnetic field strength B and the area A through which it cuts. The area is constant, being equal to the cross-sectional area of the transformer core, whereas the magnetic field varies with time according to the excitation of the primary. [0044] In an ideal transformer, all the incoming energy would be transferred from the primary circuit to the magnetic field and thence to the secondary circuit, and the following equations would be satisfied: [0045] Pincoming = Ip·Vp = Poutgoing = Is·Vs, so that Vs/Vp = Ip/Is. [0046] Thus, if a voltage is stepped up (Vs > Vp) by the transformer, then a current is stepped down (Is < Ip) by the same factor. In practice, most transformers are very efficient, so that this formula is a good approximation. [0047] Electromotive force (EMF) of a transformer at a given flux density increases with frequency, an effect predicted by the universal transformer EMF equation. By operating at higher frequencies, transformers can be physically more compact because a given core is able to transfer more power without reaching saturation, and fewer turns are needed to achieve the same impedance. However, properties such as core loss and conductor skin effect also increase with frequency.
If the flux in the core is sinusoidal, the relationship for either winding between its root mean square (RMS) EMF, and the supply frequency f, number of turns N, core cross-sectional area a, and peak magnetic flux density B is given by the universal EMF equation: [0048] Erms = (2π/√2)·f·N·a·B ≈ 4.44·f·N·a·B. [0049] The magnetic field of a point charge moving at constant velocity is (in SI units): [0050] B = μ·(v × D), with B = μ0(H + M) = μ0(1 + χm)H = μH, [0051] where: v is the velocity vector of the electric charge, measured in meters per second; × indicates a vector cross product; D is the electric displacement vector; and μ is the magnetic permeability. [0052] Considering the permittivity of a material, the electric displacement field is: [0053] D = εE, with ∇·D = ρ, [0054] where: E is the electric field vector, measured in newtons per coulomb or volts per meter; and ρ is the charge density, or the amount of charge per unit volume. [0055] Accordingly, increasing μ will increase the EMF energy of the transformer. [0056] The transformer can be integrated into a logic/RF CMOS process by utilizing standard CMOS back-end process steps, e.g., metal deposition, dielectric deposition, and metal patterning in the CMOS foundry. With reference again to FIG. 4, the conventional logic/RF process commonly uses oxide or low-k oxide as an isolation film 308. By integrating the transformer into the logic/RF process, a DC-DC converter and power transfer can be provided. [0057] In view of the above theory analysis, it will be recognized, among other things, that the EMF of the transformer is proportional to the magnetic flux density B, number of turns N, and cross-section a. Furthermore, the magnetic flux density B is proportional to the magnetic permeability. [0058] Accordingly, by using a magnetic material, such as a magnetic film (e.g., a ferromagnetic film or thin film), instead of an oxide as the isolation film, the embodiments can increase the permeability, for example, by about one hundred to one thousand times.
The embodiments also can increase the electromotive force (EMF), for example, by the same or similar amount. For a given EMF of the transformer, the implementation of a magnetic film can reduce the size of the transformer and/or increase the transformer efficiency. [0059] An integrated magnetic field enhanced transformer according to an embodiment can be formed, for example, using two or three metal layers and one or two via logic CMOS back-end process steps, e.g., metal deposition, dielectric deposition, and metal patterning in the CMOS foundry. For example, a magnetic film can be inserted as a strip into a metal wire space to maintain a vertical or a horizontal anisotropy of the magnetic field. It will be recognized that the magnetic material may be any suitable material, combination of materials, or alloy that exhibits magnetic properties, such as a ferromagnetic material or a ferromagnetic thin film such as CoFe, CoFeB, or NiFe, etc. Furthermore, a thickness of the magnetic material may be selected to reduce an eddy current and a skin effect inside the magnetic material to reduce magnetic field loss. [0060] With reference to FIGS. 5-15, exemplary embodiments of an integrated magnetic film enhanced transformer, and methods of forming an integrated magnetic film enhanced transformer, will now be described. [0061] As shown in FIGS. 5 and 6, a cross comb type planar transformer structure according to an exemplary embodiment can include a cross comb type planar transformer metal 510 coupled or connected to a bottom metal 512 by via 514. A magnetic material 516, such as a magnetic film, can be inserted as a strip into one or more spaces between the transformer metal 510. As shown in the cross-sectional illustration in FIG. 6, a cap film 520 is deposited on an inter layer dielectric (ILD) 522. The transformer metal 510 is patterned on the cap film 520.
The magnetic material 516 is disposed in the spaces between the transformer metal 510. That is, the magnetic material 516 interposes portions of the transformer metal 510. The strips of magnetic material 516 can be thin and shape anisotropic along the vertical direction or axis (i.e., the long axis of the strips, or the easy axis). The strips of magnetic material 516 can reduce the eddy current and the skin effect and enhance anisotropic magnetic flux along the strips inside the metal spaces. According to an embodiment, the transformer efficiency can be increased and the energy loss in the magnetic films can be reduced. By using the magnetic material 516 instead of an oxide as the isolation film, the permeability can be increased, for example, by about one hundred to one thousand times. The electromotive force (EMF) also can be increased, for example, by the same or similar amount. For a given EMF of the transformer, the implementation of the magnetic material 516 can reduce the size of the transformer and/or improve the transformer efficiency. [0062] As shown in FIGS. 7 and 8, a serpent type planar self coupling transformer structure according to an exemplary embodiment can include a serpent type planar transformer metal 510 coupled or connected to a top metal 518 and bottom metal 512 by via interconnects 524 and 514. A magnetic material 516, such as a magnetic film, can be inserted as a strip into one or more spaces between the transformer metal 510. As shown in the cross-sectional illustration in FIG. 8, a cap film 520 is deposited on an inter layer dielectric (ILD) 522. The transformer metal 510 is patterned on the cap film 520. The magnetic material 516 is disposed in the spaces between the transformer metal 510. That is, the magnetic material 516 interposes portions of the transformer metal 510. By using the magnetic material 516 instead of an oxide as the isolation film, the permeability can be increased, for example, by about one hundred to one thousand times. The electromotive force (EMF) also can be increased by the same or similar amount. For a given EMF of the transformer, the implementation of the magnetic material 516 can reduce the size of the transformer and/or improve the transformer efficiency. [0063] As shown in FIGS. 9 and 10, a circular type planar self coupling transformer structure according to an exemplary embodiment can include a circular type planar transformer metal 530 coupled or connected to a top metal 538 and bottom metal 532. As shown in the cross-sectional illustration in FIG. 10, a bottom cap film 540 is deposited on an inter layer dielectric (ILD) 542. The transformer metal 530 and dielectric 544 are deposited on the cap film 540. A top cap film 546 is deposited over the transformer metal 530 and dielectric 544. A via 534 couples or connects the bottom metal 532 to the transformer metal 530. A magnetic material 536, such as a magnetic film, can be inserted, for example, as a strip into one or more spaces between the transformer metal 530. The magnetic material 536 is disposed in the spaces between the transformer metal 530. That is, the magnetic material 536 interposes portions of the transformer metal 530. By using the magnetic material 536 instead of an oxide as the isolation film, the permeability can be increased, for example, by about one hundred to one thousand times. The electromotive force (EMF) also can be increased by the same or similar amount. For a given EMF of the transformer, the implementation of the magnetic material 536 can reduce the size of the transformer and/or increase the transformer efficiency. [0064] As shown in FIGS.
11 and 12, a three-dimensional circular self coupling transformer structure according to an exemplary embodiment can include a three-dimensional circular transformer metal, which includes, for example, a plurality of top metal portions 558 and bottom metal portions 552 connected by via interconnects 554. The metal portions can be formed, for example, from copper. The transformer structure includes first and second terminals 566 and 568, and an output lead 564. As shown in the cross-sectional illustration in FIG. 12, the bottom metal portions 552 can be formed in an inter layer dielectric (ILD) 562. The via interconnects 554 can be formed on the bottom metal portions 552. A cap film 560 is formed on the inter layer dielectric (ILD) 562. The top metal portions 558 can be coupled or connected to the via interconnects 554. A magnetic material 556, such as a magnetic film, can be inserted longitudinally as a strip into a space between the three-dimensional coil formed by the top metal portions 558, via interconnects 554, and bottom metal portions 552. [0065] In another embodiment, the magnetic material 556 can be disposed on one or more sides of the three-dimensional coil formed by the top metal portions 558, via interconnects 554, and bottom metal portions 552, as exemplarily shown in FIG. 13. [0066] One of ordinary skill in the art will recognize that the magnetic material 556 can be inserted longitudinally as a strip into a space between the three-dimensional coil, or disposed on one or more sides of the three-dimensional coil, as well as a combination of one or more of these features. [0067] As shown in FIGS. 11-13, the magnetic material 556 interposes one or more portions of the transformer metal. By using the magnetic material 556 instead of an oxide as the isolation film, the permeability can be increased, for example, by about one hundred to one thousand times.
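The permeability argument above can be made concrete with a brief sketch based on Faraday's law; the symbols (N turns, cross-sectional area a, permeability μ = μrμ0) follow this description, and the numerical range is the one quoted in this disclosure:

```latex
\mathrm{EMF} = -N\,\frac{d\Phi}{dt}, \qquad \Phi = B\,a, \qquad B = \mu H = \mu_r\,\mu_0\,H
```

Because the flux density B scales linearly with the permeability, substituting a ferromagnetic film (relative permeability on the order of 10^2 to 10^3) for an oxide (relative permeability near 1) raises B, and therefore the EMF, by roughly the same factor, which is the basis for the one-hundred-to-one-thousand-times figures quoted above.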
By providing thin strips of magnetic material that are shape anisotropic along the vertical direction (i.e., the long axis of the strips, or the easy axis), the embodiments can reduce the eddy current and the skin effect and enhance the anisotropic magnetic flux along the strips inside the space of the metal. The embodiments can increase the transformer efficiency and reduce the energy loss in the magnetic films. The embodiments also can increase the electromotive force (EMF), for example, by the same or similar amount. For a given EMF of the transformer, the implementation of the magnetic material 556 can reduce the size of the transformer and/or increase the transformer efficiency. [0068] With reference to FIGS. 14 and 15, exemplary methods of forming an integrated magnetic field enhanced transformer will now be described. [0069] FIG. 14 illustrates an exemplary method of forming an integrated magnetic field enhanced transformer according to an embodiment. A metal deposit/photo/etching process can be used to form a metal wire. As explained above, an integrated magnetic field enhanced transformer can be formed, for example, using two or three metal layers (e.g., a bottom metal, a transformer metal, and/or a top metal) and one or two via logic back-end processes. A magnetic material, such as a magnetic film, can be inserted as a strip into one or more spaces between the metal wire (e.g., transformer metal) to maintain a vertical anisotropy or a horizontal anisotropy of the magnetic field. [0070] For example, with reference to FIGS. 10 and 14, an exemplary method of forming an integrated magnetic field enhanced transformer can include depositing and patterning a bottom metal 532 by performing a metal deposit/photo/etching process (e.g., 1402). Next, an inter layer dielectric (ILD) 542 can be deposited and a chemical mechanical planarization (CMP) process can be performed (e.g., 1404).
A bottom cap film 540 can be deposited on the ILD 542 (e.g., 1406). A via photo/etching/filling/CMP process can be performed to form a via 534 (e.g., 1408). A transformer metal 530 can be deposited and patterned by performing a metal deposit/photo/etching process (e.g., 1410). The transformer metal 530 can be coupled to the bottom metal 532 by a via interconnect 534. Then, a cap/oxide film can be deposited and a CMP process can be performed (e.g., 1412). A photo/etching process can be performed to form holes for the magnetic film strips (e.g., 1414). A magnetic material layer, such as a magnetic film, can be deposited and etched back to form the magnetic material 536 (e.g., 1416) interposed between portions of the transformer metal 530. [0071] Next, an ILD film can be deposited and a CMP process can be performed (e.g., 1418). A vertical magnetic anneal can be applied (e.g., 1420). A via patterning process (photo/etching) can be performed and the via can be filled, for example, with tungsten. Then, a CMP process can be performed to remove the excess tungsten from the top of the surface to form a via (not shown) (e.g., 1422). Finally, a metal film (not shown) can be deposited and the metal can be patterned by a photo/etching process to make the connection to the top via (e.g., 1424). [0072] FIG. 15 illustrates an exemplary method of forming an integrated magnetic field enhanced transformer according to an embodiment. The method can use, for example, a dual damascene trench process to pattern a copper metal and a via. With reference to FIGS. 10 and 15, an exemplary method of forming an integrated magnetic field enhanced transformer can include using a dual damascene process, patterning trenches and plating copper and performing chemical mechanical planarization (CMP) on the bottom metal layer to form the bottom metal 532 (e.g., 1502). Next, an inter layer dielectric (ILD) 542 can be deposited (e.g., 1504).
A bottom cap film 540 can be deposited on the ILD 542 (e.g., 1506). An ILD film 544 can be deposited on the bottom cap film 540 (e.g., 1508). A via opening can be formed in the ILD 542, for example, using a damascene process (e.g., 1510), and the via opening can be filled with metal to form a bottom via interconnect. [0073] Next, the method can include forming trenches for metal wire using photolithography and etching techniques (e.g., 1512). The method can include plating a copper layer over at least the trenches and the vias. Then, the copper layer can be polished down to the surface of the ILD 544 using, for example, chemical mechanical planarization (CMP) techniques (e.g., 1514) to form the transformer metal 530. An exemplary method can include an oxide etching back process in ILD 544 (e.g., 1516). A top cap film 546 can be deposited on the ILD 544 and the transformer metal 530 (e.g., 1518). [0074] A plurality of holes can be formed in the top cap film 546 and the ILD film 544 using photolithography and etching techniques (e.g., 1520). A magnetic material layer, such as a magnetic film, can be deposited over at least the etched holes, and then, for example, etched back or polished using CMP techniques to the surface of the top cap film 546 to form the magnetic material 536 (e.g., 1522). [0075] Next, an ILD film 548 can be deposited over the top cap film 546 and the magnetic material 536, and polished using, for example, chemical mechanical planarization (CMP) techniques (e.g., 1524). A vertical magnetic annealing process can be performed to align the magnetic field with the easy axis of the magnetic strips (e.g., 1526). [0076] The method can include forming a via opening in the ILD 548, for example, using a damascene process and filling the via opening with a metal to form a via (not shown) to connect the transformer metal 530 to a top metal 538 (e.g., 1528).
Finally, a metal layer (not shown) can be formed over the ILD 548 and patterned to form the top metal 538, for example, using a damascene process (e.g., 1530). [0077] According to the features of the embodiments, an integrated magnetic field enhanced transformer, and a method of forming an integrated magnetic field enhanced transformer, can be provided, for example, for a DC-DC converter, power transfer, SoC with analog applications, etc. The embodiments implement a magnetic material, such as a magnetic film, to enhance a magnetic flux density B and increase the electromotive force (EMF) of the transformer. The magnetic strips can be shape anisotropic to increase the intrinsic magnetic field. The thickness of the magnetic strips can be reduced to provide thin magnetic strips to reduce the eddy current and the skin effect, and thereby to reduce loss of magnetic field. Since the EMF of the transformer is proportional to the magnetic flux density B, the number of turns N, and the cross-section a, and the magnetic flux density B is proportional to the magnetic permeability, the permeability can be increased, for example, by about one hundred to one thousand times, by implementing a magnetic material, such as a magnetic film, instead of oxide as an isolation film between the transformer metal portions. The EMF of the transformer also can be increased by, for example, the same or similar amount. Thus, for a given EMF of the transformer, the embodiments can reduce the size of the transformer and/or increase the transformer efficiency. It will be appreciated that the transformer, as illustrated for example in FIGS.
5-15, may be included within a mobile phone, portable computer, hand-held personal communication system (PCS) unit, portable data units such as personal data assistants (PDAs), GPS enabled devices, navigation devices, set-top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, or any other device that stores or retrieves data or computer instructions, or any combination thereof. Accordingly, embodiments of the disclosure may be suitably employed in any device which includes active integrated circuitry including memory and on-chip circuitry for test and characterization. [0078] The foregoing disclosed devices and methods are typically designed and are configured into GDSII and GERBER computer files, stored on computer-readable media. These files are in turn provided to fabrication handlers who fabricate devices based on these files. The resulting products are semiconductor wafers that are then cut into semiconductor die and packaged into a semiconductor chip. The chips are then employed in devices described above. [0079] Those of skill in the art will appreciate that the disclosed embodiments are not limited to illustrated exemplary structures or methods, and any means for performing the functionality described herein are included in the embodiments. [0080] While the foregoing disclosure shows illustrative embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. CLAIMS
Methods and apparatus are provided for use in digital information processors that support digital memory buffers. In one aspect of the present invention, a digital signal processor receives a swap instruction and responds to the swap instruction by swapping the contents of a first address register and a second address register. In another aspect, a digital signal processor receives a swap instruction, swaps the contents of a first address register and a second address register in a future file, generates one or more control signals to an architecture file in a downstream stage of a pipeline in response to the swap instruction, and swaps the contents of the first address register and the second address register in the architecture file in response to the one or more control signals.
CLAIMS 1. A method for use in a digital information processor having a first address register for storing a first address and having a second address register for storing a second address, the method comprising: responding to a swap instruction, which specifies a swap operation for at least two address registers that are identified explicitly or implicitly, by swapping the contents of the first address register and the second address register. 2. The method of claim 1 wherein responding comprises: decoding the swap instruction and generating a signal that indicates a swap instruction; and responding to the signal that indicates a swap instruction by swapping the contents of the first address register and the second address register. 3. The method of claim 2 wherein responding to the signal comprises supplying the first address from the first address register to the second address register and supplying the second address from the second address register to the first address register. 4. The method of claim 2 wherein responding to the signal comprises: receiving the signal that indicates a swap instruction, and generating control signals in response at least thereto; supplying the first address from the first address register to the second address register in response to one or more of the control signals; supplying the second address from the second address register to the first address register in response to one or more of the control signals; and storing the first address in the second address register and storing the second address in the first address register. 5. The method of claim 1 wherein the first address register is associated with a first memory buffer and the second address register is associated with a second memory buffer. 6.
The method of claim 1 wherein the swap instruction includes an op code that identifies the swap operation, a first operand that identifies the first address register, and a second operand that identifies the second address register. 7. The method of claim 5 wherein the first address register and the second address register comprise a base register for the first memory buffer and a base register for the second memory buffer, respectively. 8. The method of claim 5 wherein the first address register and the second address register comprise an index register for the first memory buffer and an index register for the second memory buffer, respectively. 9. The method of claim 8 wherein responding to the swap instruction further comprises swapping the contents of a base register for the first memory buffer and a base register for the second memory buffer. 10. A digital information processor comprising: a first address register for storing a first address; a second address register for storing a second address; and a circuit that receives a swap instruction, which specifies a swap operation for at least two address registers that are identified explicitly or implicitly, and responds to the swap instruction by swapping the contents of the first address register with the contents of the second address register. 11. The digital information processor of claim 10 wherein the circuit comprises: an instruction decoder for receiving and decoding an instruction, and for generating a signal that indicates a swap instruction; and a data address generator for responding to the signal that indicates a swap instruction by swapping the contents of the first address register and the second address register. 12.
The digital information processor of claim 11 wherein the data address generator comprises a swap unit for supplying the first address from the first address register to the second address register and supplying the second address from the second address register to the first address register. 13. The digital information processor of claim 11 wherein the data address generator comprises: a control unit for receiving the signal that indicates a swap instruction, and for generating control signals in response at least thereto; and a register swap unit, receiving the first address from the first address register and receiving the second address from the second address register, and responding to one or more of the control signals by supplying the first address to the second address register and supplying the second address to the first address register, and wherein the first address is stored in the second address register, and the second address is stored in the first address register. 14. The digital information processor of claim 10 wherein the first address register is associated with a first memory buffer and the second address register is associated with a second memory buffer. 15. The digital information processor of claim 10 wherein the swap instruction includes an op code that identifies the swap operation, a first operand that identifies the first address register, and a second operand that identifies the second address register. 16. The digital information processor of claim 10 wherein the address from the first address register and the address from the second address register are each supplied to a load/store unit. 17. The digital information processor of claim 14 wherein the first address register and the second address register comprise a base register for the first memory buffer and a base register for the second memory buffer, respectively. 18.
The digital information processor of claim 14 wherein the first address register and the second address register comprise an index register for the first memory buffer and an index register for the second memory buffer, respectively. 19. The digital information processor of claim 18 wherein the digital information processor further responds to the swap instruction by swapping the contents of a base register for the first memory buffer and a base register for the second memory buffer. 20. A digital information processor comprising: a first address register for storing a first address; a second address register for storing a second address; and means, responsive to a swap instruction, which specifies a swap operation for at least two address registers that are identified explicitly or implicitly, for swapping the contents of the first address register and the second address register. 21. A data address generator comprising: a first address register containing a first address corresponding to a location in a first circular buffer; a second address register containing a second address corresponding to a location in a second circular buffer; and a circuit that receives a signal that indicates a swap instruction and responds to the signal by swapping the contents of the first address register and the second address register. 22. The data address generator of claim 21 wherein the circuit comprises a swap unit that receives the first address from the first address register and receives the second address from the second address register, and supplies the first address to the second address register and supplies the second address to the first address register. 23.
The data address generator of claim 21 further comprising an apparatus for generating target addresses within a group of circular buffers, each circular buffer extending in a memory between bounds defined by a base address and an end address that is equal to a sum of the base address and a length, the apparatus being responsive to the previous address accessed within said buffer, I, and a specified offset, M, the apparatus comprising: a register for storing the previous address accessed within said buffer, I; a set of registers for storing information which defines the position and size of said circular buffer in memory; and an arithmetic logic unit for generating an incremented address by calculating the value of I+M, for generating a wrapped address by modifying the value of I+M by the length of the buffer, and for providing the one of the incremented address and the wrapped address which is within the bounds of the circular buffer. 24. A method for use in a digital information processor comprising a pipeline having a future file and an architecture file, the future file being upstream relative to the architecture file, the future file including a first address register and a second address register, the architecture file including a first address register and a second address register, the method comprising: swapping the contents of the first address register and the second address register in the future file in response to a swap instruction; generating and sending one or more control signals from the future file to the architecture file in response to the swap instruction; and swapping the contents of the first address register and the second address register in the architecture file in response to the one or more control signals. 25.
A method for use in a digital information processor comprising a pipeline having a first pipeline stage and a second pipeline stage, the first pipeline stage being upstream relative to the second pipeline stage, the first pipeline stage and the second pipeline stage each being capable of performing an operation, the method comprising: performing the operation in the first pipeline stage in response to receipt of an instruction; generating and sending one or more control signals from the first pipeline stage to the second pipeline stage in response to the instruction; and performing the operation in the second pipeline stage in response to one or more of the one or more control signals.
METHOD AND APPARATUS FOR SWAPPING THE CONTENTS OF ADDRESS REGISTERS FIELD OF THE INVENTION The present invention relates to digital information processors, and more particularly, to methods and apparatus for use in digital information processors that support digital memory buffers. BACKGROUND OF THE INVENTION Many digital information processors provide digital memory buffers to temporarily store information. A digital memory buffer may be constructed of dedicated hardware registers wired together or it may simply be a dedicated section of a larger memory. One type of digital memory buffer is referred to as a circular buffer. In a circular buffer, the first location of the buffer is treated as if it follows the last location of the buffer. That is, when accessing consecutive locations in the buffer, the first location automatically follows the last location. It is desirable to quickly access the information that is stored in a circular buffer. For example, a digital information processor may have an execution pipeline to enhance throughput, yet information must be accessed quickly in order to take full advantage of the pipeline. Consequently, memory addresses (which are used to access the locations of the buffer) are often generated using a hardware-implemented address generator. One type of hardware-implemented address generator for a circular buffer maintains four registers for each circular buffer: (1) a base register, B, containing the lowest numbered address in the buffer, (2) an index register, I, containing the next address to be accessed in the buffer, (3) a modify register, M, containing the increment (or the decrement) value, and (4) a length register, L, containing the length of the buffer. FIG. 1 shows an example of a circular buffer, incorporated as a part of a larger memory, and address registers that may be maintained in association with the memory buffer. The lowest numbered address in the buffer, i.e.
, address 19, is referred to as the base address. The base address is stored in a base register, B. The highest address in the buffer, i.e., address 29, is referred to as the end address and is indicated as E. The length of the buffer is stored in a length register, L. An index register, indicated at I, is a pointer into the buffer. The index register typically contains the address of the next location to be accessed, e.g., address 26. After each access, the pointer is incremented or decremented a predetermined number of addresses so as to be prepared for the next access into the circular buffer. The number of address spaces which the pointer is incremented or decremented is the modify amount and is stored in a modify register, M. It is common for the modify amount to be a fixed number which does not change, although there are applications in which the modify amount may be varied. Many digital information processing routines make use of memory buffers. One such routine is commonly referred to as a Fast Fourier Transform (FFT). FFT routines use a series of "butterfly" computations to generate a result. The results from one butterfly computation are used as the input data for the next butterfly computation. Most FFT routines are written such that the input data for each butterfly computation is read from a particular memory buffer (referred to herein as an input buffer) and the results from each butterfly computation are stored in another memory buffer (referred to herein as an output buffer). Since the results of each butterfly computation are used as the input data for the next butterfly computation, the results must be "loaded" into the input buffer before the next butterfly computation can begin. There are various ways that one could go about "loading" the results into the input buffer. One way is to simply copy the results from the output buffer to the input buffer.
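The hardware address generator described above can be modeled with a short sketch. This is an illustrative model, not the patent's circuit: the function name and the chosen modify amounts are assumptions, and it presumes the magnitude of M does not exceed the buffer length L. The values follow the FIG. 1 example (base address 19, end address 29, length 11, index at 26):

```python
def next_address(i, b, l, m):
    """Advance circular-buffer index register i by modify amount m,
    wrapping so the result stays within [b, b + l)."""
    i += m
    if i >= b + l:   # stepped past the end address: wrap back to the front
        i -= l
    elif i < b:      # stepped below the base (negative modify): wrap forward
        i += l
    return i

# FIG. 1 example: B = 19, L = 11 (addresses 19..29), I = 26
print(next_address(26, 19, 11, 4))   # 30 wraps to 19
print(next_address(26, 19, 11, 2))   # 28, no wrap
print(next_address(20, 19, 11, -3))  # 17 wraps to 28
```

A hardware data address generator typically computes both I+M and the wrapped alternative in parallel and selects whichever value falls within the buffer bounds; the conditional above models that selection.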
However, copying the results from one buffer to another may require a significant amount of time, relatively speaking, which adds significant overhead and thereby reduces the performance of the FFT routine. Another way is to redirect the address registers associated with the input buffer so as to point to the addresses in the output buffer where the results from the previous butterfly computation are stored. In conjunction, the registers associated with the output buffer are typically redirected so as to point to the addresses previously used for the input buffer. This is done so that the results of a given butterfly computation can be stored without overwriting the input data for that butterfly computation. The overall effect of redirecting the address registers associated with the input and output buffers is the same as if the contents of the input buffer had been swapped with the contents of the output buffer. The redirecting of the address registers is commonly carried out as follows: (1) the contents of the base register for the input buffer is swapped with the contents of the base register for the output buffer, and (2) the contents of the index register for the input buffer is swapped with the contents of the index register for the output buffer. FIG. 2A is a representation of the contents of the base and index registers for the input and output buffers before the contents are swapped. Before the contents are swapped, the base register, B0, and the index register, I0, which in this example are associated with the input buffer, point to the input data used for butterfly computation #1. The base register, B1, and the index register, I1, which are associated with the output buffer, point to the results from butterfly computation #1. FIG. 2B is a representation of the contents of the base and index registers for the input and output buffers after the contents are swapped.
After the contents of the registers are swapped, the base register, B0, and the index register, I0, associated with the input buffer, point to the results for butterfly computation #1. The base register, B1, and the index register, I1, associated with the output buffer, point to the input data used for butterfly computation #1. FIG. 3 shows a routine that is commonly used to swap the contents of the index and base registers of the input buffer with the contents of the index and base registers of the output buffer. This routine includes six instructions and uses temporary registers R0, R1. Notwithstanding the performance level of current digital information processors, further improvements are needed. SUMMARY OF THE INVENTION According to one aspect of the present invention, a method is used in a digital information processor having a first address register for storing a first address and having a second address register for storing a second address. The method includes responding to a swap instruction, which specifies a swap operation for at least two address registers that are identified explicitly or implicitly, by swapping the contents of the first address register and the second address register. According to another aspect of the present invention, a digital information processor includes a first address register for storing a first address, a second address register for storing a second address, and a circuit that receives a swap instruction, which specifies a swap operation for at least two address registers that are identified explicitly or implicitly, and responds to the swap instruction by swapping the contents of the first address register with the contents of the second address register.
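As a hedged sketch of the register redirection and of the swap instruction summarized above, the register names follow FIGS. 2A and 2B, while the address values and helper functions are illustrative assumptions (FIG. 3's actual six-instruction sequence is not reproduced here):

```python
# Address registers for the FFT input buffer (B0, I0) and output buffer
# (B1, I1).  The address values are illustrative only.
regs = {"B0": 0x100, "I0": 0x104, "B1": 0x200, "I1": 0x204}

# Conventional approach (FIG. 3 style): move one value through a
# temporary register, creating register pressure and a data dependency.
def swap_via_temp(regs, a, b):
    r0 = regs[a]        # temporary register holds one operand
    regs[a] = regs[b]
    regs[b] = r0

# Proposed approach: a single swap instruction exchanges two address
# registers directly, with no temporary register.
def swap_instruction(regs, a, b):
    regs[a], regs[b] = regs[b], regs[a]

swap_via_temp(regs, "B0", "B1")      # base registers exchanged
swap_instruction(regs, "I0", "I1")   # index registers exchanged
print(regs)  # {'B0': 512, 'I0': 516, 'B1': 256, 'I1': 260}
```

Both helpers produce the FIG. 2B state; the point of the swap instruction is that the exchange happens in one operation inside the data address generator, freeing the temporary registers the FIG. 3 routine would consume.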
According to another aspect of the present invention, a digital information processor includes a first address register for storing a first address, a second address register for storing a second address, and means, responsive to a swap instruction, which specifies a swap operation for at least two address registers that are identified explicitly or implicitly, for swapping the contents of the first address register and the second address register. According to another aspect of the present invention, a data address generator (DAG) includes a first address register containing a first address corresponding to a location in a first circular buffer, a second address register containing a second address corresponding to a location in a second circular buffer, and a circuit that receives a signal that indicates a swap instruction and responds to the signal by swapping the contents of the first address register and the second address register. Depending on the implementation, a swap instruction may completely eliminate the need for temporary registers to carry out the swap, which in turn reduces the register pressure and helps to reduce the possibility of delays due to excessive register demand (delays can reduce the execution speed and level of performance of the routine running on the processor). Again, depending on the implementation, the swap instruction may reduce or completely eliminate data dependencies like those in the routine of FIG. 3 and any associated wait cycles (data dependencies and wait cycles can reduce the execution speed and level of performance of a routine running on the processor). 
According to another aspect of the present invention, a method for use in a digital information processor includes swapping the contents of a first address register and a second address register in a future file in response to a swap instruction, generating and sending one or more control signals from the future file to the architecture file in response to the swap instruction, and swapping the contents of the first address register and the second address register in an architecture file in response to the one or more control signals. It has been recognized that the latter-mentioned aspect of the present invention is not limited to swap instructions, but rather may be applied to pipelined data processors in general, particularly in a situation where the results of an operation are needed at more than one stage in the pipeline. For example, rather than performing an operation at one stage and pipelining the results to subsequent stage(s), the capability to actually carry out the operation may be provided at more than one stage in the pipeline. Thereafter, only control signals (and not the results) need be provided to subsequent stage(s). Depending on the embodiment, this may lead to a reduction in the required area and/or power. Notwithstanding the potential advantages, discussed above, of one or more embodiments of one or more aspects of the present invention, it should be understood that there is no absolute requirement that any embodiment of any aspect of the present invention address the shortcomings of the prior art.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows an example of a circular buffer incorporated as a part of a larger memory; FIG. 2A is a representation of the contents of the base and index registers for the input and output buffers, prior to swapping the contents of address registers associated with the input buffer and the output buffer; FIG.
2B is a representation of the contents of the base and index registers for the input and output buffers, after swapping the contents of address registers associated with the input buffer and the output buffer; FIG. 3 shows a routine that is commonly used to swap the contents of the index and base registers of the input buffer with the contents of the index and base registers of the output buffer; FIG. 4A shows an example of a swap instruction format according to one embodiment of the present invention; FIG. 4B shows an example of another swap instruction format according to one embodiment of the present invention; FIG. 5 is a block diagram of a portion of a digital information processor that receives and executes a swap instruction, according to one embodiment of the present invention; FIG. 6 is a block diagram of one embodiment of the DAG of FIG. 5; FIG. 7A is a block diagram of a portion of one embodiment of the register unit of FIG. 6; FIG. 7B is a block diagram of a portion of another embodiment of the register unit of FIG. 6; FIG. 8 shows a representation of one example of a digital information processor pipeline capable of receiving and executing a swap instruction, according to one embodiment of the present invention; and FIG. 9 is a block diagram of one embodiment of a DAG that may be used in the pipeline of FIG. 8.

DETAILED DESCRIPTION

It has been determined that the routine shown in FIG. 3 has several drawbacks. First, the need for temporary registers increases the register pressure (a measure of the level of demand for temporary registers) within a processor. If the demand for temporary registers becomes excessive (in comparison to the number of temporary registers), shortages can result, thereby leading to delays, which in turn reduce the execution speed and level of performance of the routine running on the processor. This problem is particularly noticeable in a processor having relatively few temporary registers.
Second, the last four instructions in the routine cannot be executed until one or more previous instructions have been completed (a situation referred to herein as data dependency). If a processor has a very deep pipeline (i.e., a pipeline that is divided into many stages), wait cycles may need to be added because of these dependencies (i.e., to make sure that certain instructions are not executed before the prior instructions are completed). Thus, even though the routine of FIG. 3 has only six instructions, eight to ten (or even more) instruction cycles may be required to complete the routine. During such time, no other instructions can be input to the pipeline, which reduces the overall throughput through the pipeline and reduces the execution speed and level of performance of a routine running on the processor. Thus, it would be desirable to eliminate the need to use the routine of FIG. 3. It has been determined that this may be accomplished by providing a swap instruction. FIG. 4A shows an example of a swap instruction format 100 according to one embodiment of the present invention. The instruction format has an op code, e.g., SWAP, that identifies the instruction as a swap instruction and is indicated at 101. The instruction format also has two operand fields, e.g., address register id1, address register id2, which identify the address registers that are to have their contents swapped and are indicated at 102, 103. As used herein, the term swap means to exchange the contents. This may be carried out in any manner. The term address register refers to a data address register, which is defined as any register that contains a memory address for use in accessing a memory location or any register that contains data for use in generating a memory address for use in accessing a memory location.
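The wait-cycle penalty mentioned above can be illustrated with a toy in-order issue model. The three-cycle result latency below is an illustrative assumption, not a figure from the text; it is chosen only to show how dependencies stretch six instructions into more than six cycles:

```python
# Toy in-order issue model: a move cannot issue until its source
# register's result is readable. With an assumed 3-cycle result
# latency, the six-instruction swap routine needs 8 issue cycles.

LATENCY = 3  # cycles until a written register can be read (assumed)

def issue_cycles(program):
    ready = {}   # register -> first cycle its new value can be read
    cycle = 0
    for dst, src in program:
        cycle = max(cycle + 1, ready.get(src, 0))  # stall on dependency
        ready[dst] = cycle + LATENCY
    return cycle

routine = [("R0", "B0"), ("B0", "B1"), ("B1", "R0"),
           ("R1", "I0"), ("I0", "I1"), ("I1", "R1")]
print(issue_cycles(routine))  # 8 cycles for 6 instructions
```

Instructions 3 and 6 each read a temporary whose value is not yet ready, so the routine stalls, consistent with the eight-to-ten-cycle range given in the text.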
Examples of address registers include but are not limited to base registers, B, index pointer registers (or simply index registers), I, modifier registers, M, length registers, L, and end registers, E. The address registers are often integrated into a data address generator (DAG), further discussed herein below. An example of a swap instruction that uses the instruction format of FIG. 4A is: SWAP I0, I1. This instruction calls for the contents of index register I0 to be swapped with the contents of index register I1. Another example of a swap instruction that uses the instruction format of FIG. 4A is: SWAP B0, B1. This instruction calls for the contents of base register B0 to be swapped with the contents of the base register B1. The availability of a swap instruction reduces the number of instructions and the number of instruction cycles needed to swap the contents of address registers, thereby increasing the speed and level of performance of a digital information processor. A swap instruction may also reduce the need for temporary registers, which in turn reduces the register pressure and thereby reduces the possibility of delays due to excessive register demand (recall that delays can reduce the execution speed and level of performance of the routine running on the processor). It should be recognized that the present invention is not limited to the swap instruction format shown in FIG. 4A and that other swap instruction formats may be used. For example, in some embodiments, the address registers are not specified in the instruction, but rather are implied, for example, based on the op code. In such embodiments, the digital information processor may be configured, for example, to automatically swap particular address registers whenever a swap instruction is supplied. Alternatively, for example, a plurality of different swap instructions may be supported, each having a different op code.
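Behaviorally, the two-operand SWAP of FIG. 4A amounts to a single exchange in the register file; a minimal Python model follows (the dict standing in for the DAG's address registers, and the sample values, are assumptions for illustration):

```python
# Minimal model of the FIG. 4A two-operand swap instruction:
# SWAP id1, id2 exchanges the contents of the two named address
# registers in one operation, with no temporary registers involved.

def swap(regs, id1, id2):
    regs[id1], regs[id2] = regs[id2], regs[id1]

regs = {"I0": 0x1004, "I1": 0x2008, "B0": 0x1000, "B1": 0x2000}
swap(regs, "I0", "I1")   # SWAP I0, I1
swap(regs, "B0", "B1")   # SWAP B0, B1
```

Compared with the six-move routine of FIG. 3, no temporaries are consumed and no cross-instruction dependency chain is created.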
The different op codes may implicitly identify the particular address registers to be swapped. For example, the instruction: SWAP01 may call for the contents of the address register I0 to be swapped with the contents of the address register I1. The instruction: SWAP23 may call for the contents of the address register I2 to be swapped with the contents of the address register I3. In some embodiments, a single swap instruction causes more than one swap operation to be carried out. The additional address registers may be implied based on the op code (e.g., as discussed above). Alternatively, for example, the additional address registers may be implied based on the supplied operands. For example, the digital information processor may be configured such that if two index registers are supplied as operands in a swap instruction, then the processor swaps the two index registers and also swaps the base registers that are associated with the two index registers. For example, in one embodiment, the swap instruction: SWAP I0, I1 may (1) cause the contents of the I0 register to be swapped with the contents of the I1 register and (2) cause the contents of the B0 register to be swapped with the contents of the B1 register. One such embodiment is described below with respect to FIG. 7B. Another way to implement a swap instruction that causes more than one swap operation to be carried out is to provide an instruction format that includes more than two operand fields. FIG. 4B shows an example of an instruction format 104 with more than two operand fields. The instruction format 104 has an op code, e.g., SWAP, that identifies the instruction as a swap instruction and is indicated at 105. The instruction format has four operand fields, e.g., address register id1, address register id2, address register id3, address register id4, which identify the address registers that are to have their contents swapped and are indicated at 106, 107, 108, 109.
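A minimal decoder sketch for the FIG. 4B four-operand format can treat the operand fields as two pairs, each pair exchanged. Text-based parsing is an assumption made for readability; a real decoder would extract bit fields from the encoded instruction:

```python
# Sketch of decoding a FIG. 4B-style instruction such as
# "SWAP I0, I1, B0, B1": operand fields id1..id4 are read as the
# pairs (id1, id2) and (id3, id4), and each pair is exchanged.

def decode_and_execute(text, regs):
    opcode, operands = text.split(None, 1)
    assert opcode == "SWAP"
    ids = [field.strip() for field in operands.split(",")]
    for a, b in zip(ids[0::2], ids[1::2]):     # (id1, id2), (id3, id4)
        regs[a], regs[b] = regs[b], regs[a]

regs = {"I0": 0x1004, "I1": 0x2008, "B0": 0x1000, "B1": 0x2000}
decode_and_execute("SWAP I0, I1, B0, B1", regs)
```

The same loop also handles the two-operand FIG. 4A format, since a single pair simply yields a single exchange.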
An example of a swap instruction that uses the instruction format of FIG. 4B is: SWAP I0, I1, B0, B1. This instruction calls for the contents of index register I0 to be swapped with the contents of index register I1, and calls for the contents of base register B0 to be swapped with the contents of base register B1. Using a single swap instruction to cause more than one swap operation further reduces the number of cycles needed to swap the contents of address registers, thereby increasing the speed and level of performance of a digital information processor. In some embodiments, the number of instructions needed to swap the contents of the address registers is reduced from six to one, and the number of instruction cycles is reduced from eight (or more) to as few as one. Digital information processors that execute swap instructions are now discussed. FIG. 5 is a block diagram of a portion of a digital information processor 110 that receives and executes a swap instruction, according to one embodiment of the present invention. The digital information processor 110 includes an instruction decoder 112, a data address generator (DAG) 114, an execution control unit 116, and a load/store unit 118. The DAG 114 provides addresses for use in loading and storing data to memory buffers (not shown). The digital information processor 110 may be configured as a single monolithic integrated circuit, but is not limited to this configuration. The input to the instruction decoder 112 is connected to a bus indicated at a line 120. Signal lines, indicated at 122, connect the instruction decoder to the DAG 114. Signal lines, indicated at 124, connect the instruction decoder 112 to the execution control unit 116. Signal lines, indicated at 126, connect the DAG 114 to the load/store unit 118. Signal lines, indicated at 128, connect the load/store unit 118 to the execution control unit 116. In operation, an instruction is fetched (e.g.
, from an instruction cache or other memory (not shown)) and provided to the instruction decoder 112 on the bus 120. If the instruction is a DAG instruction (i.e., an instruction having to do with the DAG), then the instruction decoder 112 outputs a decoded DAG instruction and/or other control signals, which are supplied through the signal lines 122 to the DAG 114. If the instruction is not a DAG instruction (i.e., an instruction not having to do with the DAG), then the instruction decoder 112 outputs a decoded instruction and/or other control signals that are supplied through the signal lines 124 to the execution control unit 116. The DAG 114 executes DAG instructions and, if appropriate, outputs addresses of data to be accessed in the memory buffers. The addresses are supplied on the signal lines 126 to the load/store unit 118, which loads and/or stores data to/from the addresses in the memory buffer, as appropriate. The load/store unit 118 passes data to/from the execution control unit 116 by way of the signal lines 128. It should be understood that there are many different types of DAGs. The present invention is not limited to use in association with any particular type of DAG. FIG. 6 is a block diagram of one embodiment of the DAG 114 (FIG. 5). This embodiment includes a DAG control unit 130, a DAG arithmetic logic unit (DAG ALU) 132, and a DAG register unit 134. The DAG register unit 134 includes four register banks 136-142 and one or more swap units 144. The four register banks include L registers 136 for storing data indicating the length of each memory buffer, B registers 138 for storing the base address of each memory buffer, I registers 140 for storing an index address of each memory buffer, and M registers 142 for storing an increment (or decrement) value. The index address may, for example, indicate the address currently being accessed or the next address to be accessed.
The swap units 144 are typically implemented in hardware and are further described below. The DAG control unit 130 is connected via the signal lines 122 to the instruction decoder 112 (FIG. 5). Signal lines, indicated at 146, connect the DAG control unit 130 to the L, B, I, M registers 136-142. Signal lines, indicated at 148, connect the DAG control unit 130 to the swap unit 144. Signal lines, indicated at 150, and signal lines, indicated at 152, connect the DAG register unit 134 to the DAG ALU 132. In some embodiments, the L, B, I, M registers 136-142 may also connect to an address and/or data bus (not shown) to load and/or store from/to memory. In operation, the DAG control unit 130 receives the decoded DAG instructions and/or control signals from the instruction decoder 112 (FIG. 5). In response to such instructions and/or control signals, the DAG control unit 130 produces control signals that are used to execute the DAG instruction. The term "in response to" means "in response at least to," so as not to preclude being responsive to more than one thing. Here, for example, the DAG control unit 130 produces L, B, I, M register control signals, which are supplied to the L, B, I, M registers 136-142. The DAG control unit 130 also produces swap control signals and ALU control signals. The swap control signals are supplied to the swap unit 144. The swap unit 144 swaps the contents of the appropriate address registers in response to the swap control signals. The ALU control signals are supplied to the DAG ALU 132. The DAG register unit 134 provides output signals L out, B out, I out, M out that indicate the contents of one of the L, B, I, M registers, respectively. These signals are supplied to the DAG ALU 132 and to the load/store unit 118 (FIG. 5). The DAG ALU 132 performs computations to generate new addresses L in, B in, I in, M in, which are supplied to the DAG register unit 134, to be stored in one of the L, B, I, M registers 136-142, respectively.
FIG. 7A is a block diagram of a portion of one embodiment of the register unit 134 (FIG. 6). In this embodiment, the register unit is capable of swapping the contents of the B registers 138 and is capable of swapping the contents of the I registers 140, as described below. In this embodiment, the register unit includes a B register bank 138, a B register swap unit 160, an I register bank 140, and an I register swap unit 162. The B register bank 138 includes four registers, B0-B3. The I register bank 140 includes four registers, I0-I3. Each of the B registers and each of the I registers has a CLK input that receives its own CLK signal (not shown) from the DAG control unit (FIG. 6). The register unit further includes a B out mux 166 and an I out mux 170. Each is controlled by control signals (not shown) from the DAG control unit (FIG. 6). The B in signal is supplied, via the signal lines 150, to a first set of inputs (in0) of the B register swap unit 160. Outputs of the B register swap unit 160 are connected via signal lines indicated at 182-188 to inputs of the B register bank 138. Outputs of the B register bank 138 are connected via signal lines indicated at 190-196 to a second set of inputs (in1) of the B register swap unit 160 and to inputs of the B out mux 166. The output of the B out mux 166 provides the B out signal on the signal lines 152. The I in signal is supplied via the signal lines 150 to a first set of inputs (in0) of the I register swap unit 162. Outputs of the I register swap unit 162 are connected via signal lines indicated at 206-212 to inputs of the I register bank 140. Outputs of the I register bank 140 are connected via signal lines indicated at 214-220 to a second set of inputs (in1) of the I register swap unit 162 and to inputs of the I out mux 170. The output of the I out mux 170 provides the I out signal on the signal lines 152.
This embodiment of the B register swap unit 160 includes a B0/B1 swap unit 222 and a B2/B3 swap unit 224. The I register swap unit 162 includes an I0/I1 swap unit 226 and an I2/I3 swap unit 228. These four swap units 222-228 are identical to one another. The swap units have select lines to receive swap control signals on the signal lines 148 from the DAG control unit 130 (FIG. 6). For example, in this embodiment, the swap control signals from the DAG control unit (FIG. 6) include the following four control signals: a B0/B1 swap signal, a B2/B3 swap signal, an I0/I1 swap signal, and an I2/I3 swap signal. The B0/B1 swap signal is supplied to the select line, sel, of the B0/B1 swap unit 222. The B2/B3 swap signal is supplied to the select line, sel, of the B2/B3 swap unit 224. The I0/I1 swap signal is supplied to the select line, sel, of the I0/I1 swap unit 226. The I2/I3 swap signal is supplied to the select line, sel, of the I2/I3 swap unit 228. The operation of the swap units is now described with reference to the B0/B1 swap unit 222. The B0/B1 swap unit 222 has two operating states, specifically, a swap state and a non-swap state. In the swap state, the B0/B1 swap unit 222 enables the contents of the B0 register to be swapped with the contents of the B1 register. In the non-swap state, the B0/B1 swap unit 222 provides a transparent connection between the B in signal on the signal lines 150 and the B registers 138. Selection of the operating state is controlled by the logic state of the B0/B1 swap signal, which is provided to the select input of the B0/B1 swap unit 222. In this embodiment, if the B0/B1 swap signal has a first logic state (e.g., a logic high state or "1"), then the B0/B1 swap unit is in the swap operating state. If the B0/B1 swap signal has a second logic state (e.g., a logic low state or "0"), then the B0/B1 swap unit is in the non-swap operating state.
In the swap state, mux 0 of the swap unit 222 selects the output of the B1 register, and mux 1 of the swap unit 222 selects the output of the B0 register. If provided with a pulse on its CLK line, the B0 register stores the contents of the B1 register and the B1 register stores the contents of the B0 register, i.e., the contents of the B0 register and the B1 register are swapped. In the non-swap state, mux 0 of the swap unit 222 selects the B in signal on the signal lines 150, and mux 1 of the swap unit 222 selects the B in signal on the signal lines 150. If the B0 register or the B1 register is provided with a pulse on its CLK line, then the register provided with the pulse stores the address provided by the B in signal on the signal lines 150. The other swap units 224-228 operate similarly to the B0/B1 swap unit 222. Thus, the B2/B3 swap unit 224 enables the contents of the B2 register to be swapped with the contents of the B3 register. The I0/I1 swap unit 226 enables the contents of the I0 register to be swapped with the contents of the I1 register. The I2/I3 swap unit 228 enables the contents of the I2 register to be swapped with the contents of the I3 register. As stated above, in some embodiments, a single swap instruction causes more than one swap operation to be carried out. In some embodiments this is carried out by providing a swap instruction that includes additional operand fields (e.g., as in FIG. 4B). In other embodiments the additional address registers may be implied. For example, in one embodiment, the swap instruction SWAP I0, I1 causes the contents of the I0 register to be swapped with the contents of the I1 register and causes the contents of the B0 register to be swapped with the contents of the B1 register. This may be implemented by configuring the DAG control unit 130 such that the swap instruction SWAP I0, I1 causes the DAG control unit 130 to assert both the I0/I1 swap signal and the B0/B1 swap signal.
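A behavioral sketch of the B0/B1 swap unit 222 just described follows; the signal names mirror the text, while the function itself is an illustrative software model of the circuit, not the circuit:

```python
# Behavioral model of the B0/B1 swap unit: two 2:1 muxes feed the
# register inputs. With the swap select asserted, each mux offers the
# other register's output; otherwise both offer the B in bus value.

def swap_unit_step(b0, b1, b_in, swap_sel, clk0, clk1):
    mux0 = b1 if swap_sel else b_in   # value offered to the B0 register
    mux1 = b0 if swap_sel else b_in   # value offered to the B1 register
    new_b0 = mux0 if clk0 else b0     # B0 loads only on a CLK pulse
    new_b1 = mux1 if clk1 else b1     # B1 loads only on a CLK pulse
    return new_b0, new_b1

# Swap state: both registers clocked, so their contents are exchanged.
swapped = swap_unit_step(0x1000, 0x2000, 0,
                         swap_sel=True, clk0=True, clk1=True)
# Non-swap state: only B0 is clocked, so it loads the B in value.
loaded = swap_unit_step(0x1000, 0x2000, 0x3000,
                        swap_sel=False, clk0=True, clk1=False)
```

The same model applies to the B2/B3, I0/I1, and I2/I3 swap units, since the text states the four units are identical.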
Similarly, the swap instruction SWAP I2, I3 may cause the contents of the I2 register to be swapped with the contents of the I3 register and may cause the contents of the B2 register to be swapped with the contents of the B3 register. This may be implemented by configuring the DAG control unit 130 such that the swap instruction SWAP I2, I3 causes the DAG control unit 130 to assert both the I2/I3 swap signal and the B2/B3 swap signal. This could be implemented by asserting appropriate control signals in the embodiment of FIG. 7A. FIG. 7B shows another implementation of such an embodiment. This implementation is identical to the implementation of FIG. 7A, except that in the implementation of FIG. 7B, the swap control signals on the signal lines 148 from the DAG control unit 130 (FIG. 6) include two control signals: an I0/I1/B0/B1 swap signal and an I2/I3/B2/B3 swap signal. The I0/I1/B0/B1 swap signal is supplied to the select line of the B0/B1 swap unit 222 and to the select line of the I0/I1 swap unit 226. The I2/I3/B2/B3 swap signal is supplied to the select line of the B2/B3 swap unit 224 and to the select line of the I2/I3 swap unit 228. As stated above, providing the ability to execute a swap instruction reduces the number of instruction cycles needed to swap the contents of address registers, thereby increasing the speed and level of performance of a digital information processor (recall that data dependencies and wait cycles can reduce the execution speed and level of performance of a routine running on the processor). Providing this ability also reduces the need for temporary registers, which in turn reduces the register pressure and thereby reduces the possibility of delays due to excessive register demand (recall that delays can reduce the execution speed and level of performance of the routine running on the processor).
Now that swap instructions and DAGs have been discussed, considerations relating to implementing a swap instruction in a digital information processor with a pipeline are discussed. It should be recognized that FIGS. 7A, 7B show various embodiments of a DAG register unit that has swap units. However, the DAG register unit and the swap unit(s) are not limited to the implementations shown. For example, a swap unit can be implemented in many ways. Using multiplexers is just one way. For example, multiplexers can be replaced by groups of tri-state drivers wherein each of the tri-state drivers receives a different enable signal. The enable to a tri-state driver could, for example, be based on the swap control signal. The multiplexers could also be replaced by combinatorial logic. Thus, for example, the invention is not limited to how the swap is carried out. FIG. 8 shows one embodiment of a pipeline 240. This pipeline 240 has a series of stages, seven of which are shown, i.e., IF1, IF2, DC, AC, LS, EX1, WB. The pipeline 240 operates in association with a DAG that includes two versions of each address register (e.g., two L registers, two B registers, two I registers, and two M registers). One version of each of the registers is collectively referred to herein as a future file, indicated at 242. The other version of each of the registers is collectively referred to herein as an architecture file, indicated at 244. The future file 242 and the architecture file 244 are connected by a DAG pipeline 246. As further described below, the future file 242 is read and updated in the course of generating and modifying addresses that are used for accessing the memory buffers. The future file 242 tends to show the speculative state of the address registers. The architecture file 244, on the other hand, is updated pursuant to an instruction when that instruction completes execution.
The use of two versions of each address register enables the hardware to speculatively execute instructions with a reduction in throughput only if there is a misprediction. Instructions are inserted into the pipeline 240 and proceed through the pipeline stages until execution of the instruction is complete. More specifically, instructions, indicated at 248, are fetched in the IF1 stage. In the IF2 stage, the instructions 248 are decoded 250 and identified as DAG instructions or non-DAG instructions. If an instruction 248 is a DAG instruction 252, then at the DC stage, an I register and an M register of the future file 242 are read (indicated at 254). In the AC stage, the DAG generates addresses 256 which are to be supplied to the load/store unit 260. DAG swap instructions are executed, for example, as described above with respect to FIGS. 4A-7B. In the LS stage, addresses generated by the DAG are supplied 258 to the load/store unit 260, which loads data in response thereto. The addresses generated by the DAG are stored in the future file 242. In addition, DAG information is input to the DAG pipeline 246, which is used to send DAG information to the architecture file 244, as discussed with respect to FIG. 9. ALU operations 262 are performed in the EX stage (or EX stages). In the WB stage, the results from the ALU operations are stored 264, thereby completing execution of the instructions. Upon completion, information from the DAG pipeline is used to update the architecture file 244. In this way, the architecture file 244 shows the state of the address registers pursuant to the most recent instruction to exit the pipeline 240, but does not show the effects of instructions currently in the pipeline 240. In some embodiments, the DAG generates up to two new addresses in any given instruction cycle.
Both of the new addresses are forwarded to the architecture file, and consequently, the DAG pipeline address bus is wide enough to forward two addresses at a time. It should be recognized that a two-address-wide bus is wide enough to forward the results of a swap instruction if the swap instruction does not modify more than two address registers in any given instruction cycle. The situation is complicated, however, if the swap instruction causes more than one swap operation (i.e., swaps the contents of more than two address registers) in any given instruction cycle. For example, the swap instruction discussed above with respect to FIG. 7B causes the contents of the I0 register to be swapped with the contents of the I1 register and causes the contents of the B0 register to be swapped with the contents of the B1 register. Such a swap instruction modifies the contents of four address registers (the I0, I1, B0, and B1 registers) in a single instruction cycle. A two-address-wide address bus is not wide enough to forward four addresses at one time. The width of the address bus would need to be doubled (i.e., from a width of two addresses to a width of four addresses) in order to forward four addresses at one time. Doubling the width of the address bus would double the number of registers needed in the DAG pipeline, and would thereby result in an increase in chip area and power consumption. FIG. 9 shows one embodiment of a DAG adapted to address the situation where a swap instruction causes more than one swap operation in an instruction cycle. In this embodiment, the results of such a swap instruction are not forwarded through the DAG pipeline. Rather, two swap units are employed, and one swap unit is downstream of the other in the pipeline. When a swap instruction is received, the upstream swap unit executes the swap operation on the future file.
Control signals (rather than the four new addresses) are generated and are forwarded through the pipeline to the downstream swap unit, which in turn executes the swap operation on the architecture file. The overall effect is the same as if the four new addresses had been forwarded through the pipeline, but without the need to double the size of the address bus. In this embodiment, the DAG includes an upstream portion 270, a DAG pipeline 272, and a downstream portion 274. The upstream portion 270 includes a DAG control unit 276, a DAG ALU 278, and a register unit 280, which includes L, B, I, M registers 282 (i.e., the future file) and one or more swap units 284. The DAG control unit 276, the DAG ALU 278, and the register unit 280 may, for example, be similar to the DAG control unit 130, the DAG ALU 132, and the register unit 134, respectively, described hereinabove with respect to FIGS. 5-7B. The upstream portion receives DAG instructions supplied by way of signal lines indicated at 285. The DAG ALU 278 performs computations to generate new addresses and the swap unit(s) 284 swap the contents of the address registers, as appropriate. The DAG control unit 276 and the swap unit(s) 284 are configured such that a single swap instruction may cause more than one swap operation to be carried out in a single instruction cycle. Such a configuration means that a swap instruction may modify the contents of four (or more) address registers in a single instruction cycle. The downstream portion 274 includes a control unit 286 and a register unit 288, which includes L, B, I, M registers 290 (i.e., the architecture file) and one or more swap units 292. The register unit 288 may, for example, be similar to the register unit 134 described hereinabove with respect to FIGS. 6-7B.
As further discussed hereinbelow, providing one or more swap units 292 in the register unit 288 of the downstream portion 274 of the DAG makes it unnecessary to forward the results of a swap instruction to the downstream portion 274 of the DAG. The DAG pipeline 272 connects the upstream portion 270 to the downstream portion 274. In this embodiment, the pipeline 272 includes first, second and third pipelined data paths 294-298. Each of the pipelined paths 294-298 comprises a series of pipelined register stages. That is, the first pipelined data path 294 includes pipelined register stages 2941-294N, the second pipelined data path 296 includes pipelined register stages 2961-296N, and the third pipelined data path 298 includes pipelined register stages 2981-298N. By providing a pipeline to send results from the upstream portion 270 to the downstream portion 274, system designers are able to reduce the complexity of the downstream portion 274. For example, unlike the upstream portion 270 of the DAG, the downstream portion 274 of the DAG does not require a control unit capable of receiving and responding to DAG instructions. Nor does it require an ALU that performs computations to generate new addresses. The first data path 294 and the second data path 296 are each used to forward addresses that have been generated by the DAG ALU 278. Consequently, the register stages 2941-294N in the first data path 294 and the register stages 2961-296N in the second data path 296 are typically at least as wide as the width of the DAG address registers. When the control unit in the downstream portion 274 of the DAG receives addresses from the first and/or the second data path 294, 296, the addresses are copied into the appropriate address register in the architecture file 290. The third data path 298 is used to forward information relating to the swap instruction.
As stated above, because the upstream portion 270 of the DAG and the downstream portion 274 of the DAG each have one or more swap units, there is no need to pipeline the results of a swap instruction to the downstream portion 274 of the DAG. Thus, the register stages 298-1 through 298-N in the third data path 298 need not be as wide as the register stages 294-1 through 294-N and 296-1 through 296-N of the first and second data paths 294, 296. In some embodiments, the third data path 298 merely forwards a signal that indicates whether the upstream portion 270 of the DAG has received a swap instruction. In other embodiments, the third data path 298 may be used to forward a signal that indicates which address registers are to have their contents swapped. When the downstream portion 274 of the DAG receives signal(s) from the third data path 298, the control unit 286 provides control signals to the register unit 288 in the downstream portion 274 so as to cause the contents of the appropriate registers of the register unit to be swapped. Thus, providing one or more swap units in the register unit of the downstream portion of the DAG helps to eliminate the need to forward the results of a swap instruction to the downstream portion of the DAG. This in turn makes it possible to implement a swap instruction that swaps the contents of two base registers and two index registers without the need to pipeline four addresses at a time. Indeed, in this embodiment, the swap instruction is implemented on the architecture file even without the need to use the first or second data paths 294, 296, because there is no need to forward any addresses in connection with the swap instruction. Note that two additional data paths would be needed in order to pipeline four addresses at a time, which would increase the cost, size and/or power consumption of the data processor.
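The swap-signal scheme above can be illustrated with a minimal behavioral sketch. This is a hypothetical model, not the patent's implementation: the register names (B0, B1, I0, I1), the pipeline depth, and the single-bit signal encoding are all assumptions made for illustration. The key point it demonstrates is that only a one-bit control signal travels through the third data path, yet the future file and the architecture file end up consistent.

```python
# Hypothetical sketch of the swap-signal pipeline; register names,
# pipeline depth, and initial values are assumptions for illustration.
from collections import deque

class RegisterUnit:
    """Address-register bank with a local swap unit, modeling both the
    upstream (future file) and downstream (architecture file) portions."""
    def __init__(self):
        self.regs = {"B0": 0x100, "B1": 0x200, "I0": 1, "I1": 2}

    def swap(self):
        # A single swap instruction exchanges two base registers and two
        # index registers in one instruction cycle.
        r = self.regs
        r["B0"], r["B1"] = r["B1"], r["B0"]
        r["I0"], r["I1"] = r["I1"], r["I0"]

future_file = RegisterUnit()        # upstream portion 270
architecture_file = RegisterUnit()  # downstream portion 274

PIPELINE_DEPTH = 3
# Third data path 298: one bit per stage, instead of four full addresses.
swap_path = deque([False] * PIPELINE_DEPTH)

def issue(swap_requested):
    if swap_requested:
        future_file.swap()          # executed immediately upstream
    swap_path.appendleft(swap_requested)  # only a control bit enters the pipe
    if swap_path.pop():             # signal reaches the downstream control unit
        architecture_file.swap()    # re-executed locally on the architecture file

issue(True)
for _ in range(PIPELINE_DEPTH):     # drain the pipeline with idle cycles
    issue(False)
# Both files agree without four addresses ever having been in flight.
assert future_file.regs == architecture_file.regs
```

The design choice this models is the one stated in the text: replicating a cheap operator (the swap unit) at two stages is less costly than widening the pipeline to carry the operation's full results.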
It has been recognized that aspects of the present invention are not limited to swap instructions, but rather may be applied to pipelined data processors in general, particularly in situations where the results of an operation are needed at more than one stage in the pipeline. For example, rather than performing an operation at one stage and pipelining the results to subsequent stage(s), the capability to actually carry out the operation is provided at more than one stage in the pipeline. This may be accomplished by providing an operator (i.e., an execution unit) at each of those stages. Thereafter, only control signals (and not the complete results) need be provided to those stages, wherein the control signals instruct the operator at each of those stages to carry out the operation. Although the data address generator shown above comprises a control unit, an ALU, and a register unit with four banks of address registers, it should be understood that a data address generator is not limited to this configuration. A data address generator only needs to be able to generate addresses to be stored in address registers and to modify the contents of the address registers. Moreover, it should be understood that the present invention is not limited to use in association with a data address generator. While there have been shown and described various embodiments, it will be understood by those skilled in the art that the present invention is not limited to such embodiments, which have been presented by way of example only, and that various changes and modifications may be made without departing from the spirit and scope of the invention. Accordingly, the invention is limited only by the appended claims and equivalents thereto.
According to one embodiment, a computer system is disclosed. The computer system comprises a central processing unit (CPU) to generate and control a virtual machine that runs simulated instruction code and to create an abstraction of a real machine so that operation of a real operating system for the computer system is not impeded.
CLAIMS What is claimed is: 1. A computer system comprising: a central processing unit (CPU) to generate and control a virtual machine that runs simulated instruction code and to create an abstraction of a real machine so that operation of a real operating system for the computer system is not impeded. 2. The computer system of claim 1 wherein the CPU runs the simulated instruction code and the real operating system. 3. The computer system of claim 1 further comprising: a direct execution environment to store simulated instruction code and associated data; and a host operating system environment. 4. The computer system of claim 3 wherein the host operating system environment comprises: a monitor to generate the virtual machine using the hardware; and a platform simulator to perform simulations of virtualization events. 5. The computer system of claim 4 wherein the monitor performs virtualization operations. 6. The computer system of claim 5 wherein the monitor gains control from the virtual machine whenever the virtual machine attempts to perform a virtualization event. 7. The computer system of claim 6 wherein the monitor sets a list of virtualization events to be checked by the virtual machine. 8. The computer system of claim 7 wherein the virtual machine passes control to the monitor for the handling of the virtualization event. 9. The computer system of claim 8 wherein the monitor performs a particular virtualization operation upon determining the type of virtualization event. 10. The computer system of claim 9 wherein the monitor handles the virtualization event and returns execution to the virtual machine. 11. The computer system of claim 9 wherein the monitor passes control to the platform simulator for simulation of the virtualization event. 12. The computer system of claim 8 wherein the monitor performs virtualization operations in such a manner as to prevent the simulated instruction code from affecting the real operating system. 13. 
A method comprising: simulating instruction code at a central processing unit (CPU) implementing Virtual Machine Extensions (VMX); virtualizing simulated instruction code; launching a virtual machine (VM) at the CPU; and executing simulated instruction code on the VM. 14. The method of claim 13 further comprising: detecting a sensitive event; exiting the VM; and analyzing the sensitive event. 15. The method of claim 14 further comprising: determining whether the sensitive event is a complex event; and virtualizing the simulated instruction code if the sensitive event is not a complex event. 16. The method of claim 15 further comprising resuming the VM after the simulated instruction code is virtualized. 17. The method of claim 15 further comprising: de-virtualizing the simulated instruction code if the sensitive event is a complex event; and simulating the instruction code. 18. A system comprising: hardware to generate and control a virtual machine that runs simulated instruction code and to create an abstraction of a real machine so that operation of a real operating system for the computer system is not impeded; a direct execution environment to store simulated instruction code and associated data; and a host operating system environment. 19. The system of claim 18 wherein the host operating system environment comprises: a monitor to generate the virtual machine using the hardware; and a platform simulator to perform simulations of virtualization events. 20. The system of claim 19 wherein the monitor performs virtualization operations. 21. The system of claim 20 wherein the monitor gains control from the virtual machine whenever the virtual machine attempts to perform a virtualization event. 22. The system of claim 21 wherein the monitor sets a list of virtualization events to be checked by the virtual machine. 23. 
The system of claim 22 wherein the monitor performs a particular virtualization operation upon determining the type of virtualization event. 24. The system of claim 23 wherein the monitor handles the virtualization event and resumes the Direct Execution Environment. 25. The system of claim 24 wherein the monitor passes control to the platform simulator for simulation of the virtualization event. 26. The system of claim 23 wherein the monitor virtualizes operations in such a manner as to prevent the simulated instruction code from affecting the real operating system.
A METHOD FOR CPU SIMULATION USING VIRTUAL MACHINE EXTENSIONS COPYRIGHT NOTICE [0001] Contained herein is material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction of the patent disclosure by any person as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights to the copyright whatsoever. FIELD OF THE INVENTION [0002] The present invention relates to Central Processing Unit (CPU) simulators; more particularly, the present invention relates to employing direct execution of simulated code on a CPU. BACKGROUND [0003] Software simulators for CPUs (e.g., Gambit, Archsim, etc.) have a wide range of usage in many areas relating to integrated circuit design, validation and tuning. These simulators are commonly used for pre-silicon software development (e.g., BIOS, operating systems, compilers, applications, etc.), for architecture validation (functional and performance), and more. A user may evaluate an instruction set architecture (ISA) of a new CPU by executing benchmarks on a host machine that runs the simulator. Based on the results produced by the simulator, a user may modify or verify the new CPU design accordingly. Moreover, the simulator can be expanded to simulate the behavior of an entire PC platform, including buses and I/O devices (for example, the SoftSDV platform simulator). A possible input for such a simulator may be an operating system called a "Simulated" or "Guest" OS. [0005] The continual increase in both scale and complexity of the simulated code (operating systems and applications) requires improvement of current simulation techniques and introduction of new technologies in order to achieve significant simulation speedup. If the simulated CPU and the host CPU architectures are close (or identical), the simulated instructions can be allowed to run natively. 
However, most operating systems for personal computers assume full control over the machine resources. Thus, if the simulated operating system is allowed to run natively it will conflict with the host operating system over PC resources (CPU, devices, memory, etc.). BRIEF DESCRIPTION OF THE DRAWINGS [0006] The invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which: [0007] Figure 1 is a block diagram of one embodiment of a computer system; [0008] Figure 2 illustrates a high level architecture of one embodiment of a simulation environment; and [0009] Figure 3 is a flow diagram of one embodiment of the operation of a Full Platform Simulator. DETAILED DESCRIPTION [0010] A method of using hardware support for virtualization in order to prevent conflicts between a Host operating system (OS) and a Guest OS, and to obtain full virtualization, is described. In the following detailed description of the present invention numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. [0011] Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. [0012] Figure 1 is a block diagram of one embodiment of a computer system 100. 
Computer system 100 includes a central processing unit (CPU) 102 coupled to bus 105. In one embodiment, CPU 102 is a processor in the Pentium® family of processors, including the Pentium® II processor family, Pentium® III processors, and Pentium® IV processors available from Intel Corporation of Santa Clara, California. Alternatively, other CPUs may be used. A chipset 107 is also coupled to bus 105. Chipset 107 includes a memory control hub (MCH) 110. MCH 110 may include a memory controller 112 that is coupled to a main system memory 115. Main system memory 115 stores data and sequences of instructions that are executed by CPU 102 or any other device included in system 100. In one embodiment, main system memory 115 includes dynamic random access memory (DRAM); however, main system memory 115 may be implemented using other memory types. Additional devices may also be coupled to bus 105, such as multiple CPUs and/or multiple system memories. [0014] MCH 110 may also include a graphics interface 113 coupled to a graphics accelerator 130. In one embodiment, graphics interface 113 is coupled to graphics accelerator 130 via an accelerated graphics port (AGP) that operates according to the AGP Specification Revision 2.0 interface developed by Intel Corporation of Santa Clara, California. In addition, a hub interface couples MCH 110 to an input/output control hub (ICH) 140. ICH 140 provides an interface to input/output (I/O) devices within computer system 100. ICH 140 may be coupled to a Peripheral Component Interconnect (PCI) bus adhering to Specification Revision 2.1 developed by the PCI Special Interest Group of Portland, Oregon. Thus, ICH 140 includes a PCI bridge 146 that provides an interface to a PCI bus 142. PCI bridge 146 provides a data path between CPU 102 and peripheral devices. [0016] PCI bus 142 includes an audio device 150 and a disk drive 155. 
However, one of ordinary skill in the art will appreciate that other devices may be coupled to PCI bus 142. In addition, one of ordinary skill in the art will recognize that CPU 102 and MCH 110 could be combined to form a single chip. Further, graphics accelerator 130 may be included within MCH 110 in other embodiments. Figure 2 illustrates one embodiment of architecture 200 for a simulation environment. The architecture 200 includes hardware 205 that runs the simulation environment. According to one embodiment, hardware 205 supports LaGrande Technology. LaGrande Technology (LT) is a technology that allows support for virtual machines on IA-32 processors. Support is given for two principal classes of software: monitor (or host) and guest. Monitor software (or, more simply, "the monitor") should have full control of CPU 102 when it is running. The monitor presents guest software with a processor abstraction and allows it to execute on CPU 102. However, the monitor should be able to retain control of the processor resources, physical memory, interrupt management, and I/O. [0018] According to one embodiment, CPU 102 support for virtualization is provided with a new form of processor operation, called Virtual Machine Extension (VMX) operation. A new set of instructions is enabled in VMX operation. In addition, two kinds of control transfers, called VM entries and VM exits, are enabled. These transitions are managed by a new structure called a virtual-machine control structure (or VMCS). All guest software runs in VMX operation. As controlled by the VMCS, certain events, operations, and situations occurring during execution in VMX operation cause VM exits. A VM exit causes the processor to transfer control to a monitor entry point determined by the controlling VMCS. The monitor thus gains control of the processor on a VM exit and can take action appropriate to the event, operation, or situation that caused the VM exit. 
It can then return to the context managed by the VMCS via a VM entry. [0020] If the VM monitor properly constructs the VMCS, it can prevent guest software from determining that it is running in VMX operation. The VMCS has been designed to include facilities that allow the VM monitor to virtualize CPU 102. Referring back to Figure 2, the simulation environment includes a Direct Execution Environment 210 and a Host OS environment 220. Direct Execution Environment 210 includes Guest code (OS and/or applications) running in a virtual machine. When launching (or resuming) the virtual machine, hardware 205 performs a full context switch from the context of the Host OS to that of the Guest OS, and allows the Guest code to run natively (at its original privilege level and at its original virtual addresses) on CPU 102. CPU 102 performs common architectural checks. While running in the Virtual Machine, CPU 102 performs additional checks to discover virtualization events (described below). Host OS environment 220 includes Full Platform Simulator 222 and Monitor 224. In one embodiment, Full Platform Simulator 222 runs at the user privilege level. Monitor 224 has parts running at the system privilege level and parts running at the user privilege level. Monitor 224 controls the execution of the Guest code and represents a bridge between Direct Execution Environment 210 and Host OS environment 220. Monitor 224 creates and resumes a Virtual Machine (VM) by using hardware 205 support. In addition, Monitor 224 regains control from the Virtual Machine when the code running in the Virtual Machine tries to perform a sensitive action. These sensitive actions, which are not permitted to be performed in the VM, are called "Virtualization Events". In one embodiment, Monitor 224 configures which Virtualization Events the CPU should check for while running in the Virtual Machine, as well as which state components should be loaded/restored upon resuming the VM. 
According to one embodiment, Virtualization Events include hardware interrupts, attempts to change the virtual address space (Page Tables), access to devices (e.g., I/O instructions), control register access, Page Fault handling, etc. Monitor 224 performs the required state synchronization and handles a Virtualization Event. [0025] Monitor 224 analyzes the reason that caused the exit from the Virtual Machine and performs an appropriate virtualization operation. In one embodiment, Monitor 224 handles the Virtualization Event and resumes the Direct Execution Environment. Alternatively, Monitor 224 passes control to Full Platform Simulator 222 for simulation of the faulting instruction. In a further embodiment, Monitor 224 performs virtualization operations in such a manner as to prevent the Guest OS from compromising Host OS integrity. For example, Monitor 224 manages the Page Tables used in the Virtual Machine, and maps the Guest virtual addresses to physical addresses allocated from host memory, rather than the physical addresses intended by the guest OS. Platform Simulator 222 runs as a regular process on top of the Host OS. Figure 3 is a flow diagram of one embodiment of the operation of Full Platform Simulator 222. At processing block 310, simulation begins. At decision block 320, Platform Simulator 222 determines whether to switch to Direct Execution. If Platform Simulator 222 decides to switch to Direct Execution, Monitor 224 is invoked with a request to launch (or resume) Direct Execution and the guest state is virtualized, processing block 330. Otherwise, simulation continues at Platform Simulator 222, processing block 380. At processing block 340, the Virtual Machine is launched (or resumed). Subsequently, the Virtual Machine begins to run guest OS code. [0029] At some time during the running of the guest OS code, a sensitive (or virtualization) event occurs. 
Therefore, at processing block 350, the Virtual Machine is exited and the current state is saved/restored. At decision block 360, it is determined whether the sensitive event is a complex event. If the event is not a complex event, the event is a virtualization event, and the virtualization event is managed at processing block 365. Subsequently, control is returned to processing block 330 where the guest state is virtualized. [0030] If the event is a complex event, the guest state is de-virtualized, processing block 370. At processing block 380, instructions are again simulated. At decision block 390, it is determined whether the simulation has ended. If not, control is returned to processing block 310 where simulation continues. Otherwise, the simulation is stopped. [0031] The above description describes a Virtual Machine architecture that enables support for the creation, maintenance and control of a Virtual Machine that can run Guest (simulated) code while creating a full abstraction of a real machine. Thus, Virtual Machine Extensions are used for the easy detection of sensitive CPU events, resulting in the ability to switch between a Virtual Machine that runs Guest (or simulated) code and a Virtual Machine monitor that is a component of the host software. Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims, which in themselves recite only those features regarded as essential to the invention.
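The Figure 3 control flow — simulate, optionally switch to Direct Execution, exit on a sensitive event, then either handle it as a virtualization event or de-virtualize and fall back to simulation — can be summarized as a minimal sketch. This is a hypothetical rendering of the flow diagram only: the event labels, the `prefer_direct_execution` policy hook, and the log-based bookkeeping are all assumptions introduced for illustration.

```python
# Hypothetical sketch of the Figure 3 flow; event labels and the policy
# function are assumptions. Each log entry is annotated with the
# processing/decision block it models.

def run_simulation(sensitive_events, prefer_direct_execution=lambda: True):
    log = []
    for event in sensitive_events:            # block 310: simulation begins
        if not prefer_direct_execution():     # decision block 320
            log.append(("simulate", event))   # block 380: keep simulating
            continue
        log.append("virtualize_guest_state")  # block 330
        log.append("vm_launch")               # block 340: launch/resume VM
        # Guest code runs until a sensitive event forces a VM exit.
        log.append(("vm_exit", event))        # block 350: exit, save state
        if event == "complex":                # decision block 360
            log.append("devirtualize_guest_state")  # block 370
            log.append(("simulate", event))         # block 380
        else:
            log.append(("handle_virtualization_event", event))  # block 365
    return log                                # block 390: simulation ends

trace = run_simulation(["io_access", "complex"])
```

Running the sketch with one simple virtualization event followed by one complex event shows the two branches of decision block 360: the first event is handled and Direct Execution resumes, while the second forces de-virtualization and a return to full simulation.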
A microelectronic device (100) includes a die (102) with input/output (I/O) terminals (104), and a dielectric layer (106) on the die (102). The microelectronic device (100) includes electrically conductive pillars (110) which are electrically coupled to the I/O terminals (104), and extend through the dielectric layer (106) to an exterior of the microelectronic device (100). Each pillar (110) includes a column (112) electrically coupled to one of the I/O terminals (104), and a head (114) contacting the column (112) at an opposite end of the column (112) from the I/O terminal (104). The head (114) extends laterally past the column (112) in at least one lateral direction.
CLAIMSWhat is claimed is:1. A microelectronic device, comprising:a die;an input/output (I/O) terminal on the die;a dielectric layer on the die; anda pillar electrically coupled to the I/O terminal, the pillar being electrically conductive, the pillar extending from the I/O terminal, through the dielectric layer, to an exterior of the microelectronic device, wherein the pillar includes:a column electrically coupled to the I/O terminal, the column being electrically conductive; anda head electrically coupled to the column at an opposite end of the column from the I/O terminal, the head being electrically conductive, the head extending laterally past the column in at least one lateral direction, wherein the dielectric layer extends from the die to the head.2. The microelectronic device of claim 1, wherein the dielectric layer includes photosensitive polymer material.3. The microelectronic device of claim 1, wherein: the dielectric layer includes a column trench sublayer which laterally surrounds the column and a head trench sublayer which laterally surrounds the head; the column includes copper; and the head includes copper.4. The microelectronic device of claim 3, wherein the column includes a column liner which is electrically conductive, extending around a lateral boundary of the column, and the head includes a head liner which is electrically conductive, extending around a lateral boundary of the head.5. The microelectronic device of claim 3, wherein the pillar includes a pillar liner which is electrically conductive, extending around a lateral boundary of the column and around a lateral boundary of the head.6. The microelectronic device of claim 1, wherein the pillar includes a barrier layer on the head, the barrier layer including a metal selected from the group consisting of nickel, palladium, platinum, titanium, tantalum, cobalt, tungsten, molybdenum, and zinc.7. 
The microelectronic device of claim 1, wherein the pillar includes a solder layer on the head, the solder layer being located at the exterior of the microelectronic device.8. The microelectronic device of claim 1, wherein the pillar includes a portion of a seed layer located between the column and the I/O terminal which is electrically coupled to the column, the seed layer being electrically conductive.9. A method of forming a microelectronic device, the method comprising:obtaining a die having an input/output (I/O) terminal;forming a dielectric layer on the die; andforming a pillar, the pillar being electrically conductive, so that the pillar is electrically coupled to the I/O terminal, and so that the pillar extends from the I/O terminal, through the dielectric layer, to an exterior of the microelectronic device, wherein forming the pillar includes:forming a column for the pillar, so that the column is electrically conductive, and so that the column is electrically coupled to the I/O terminal; and forming a head electrically coupled to the column at an opposite end of the column from the I/O terminal, so that the head is electrically conductive, so that the head extends laterally past the column in at least one lateral direction, and so that the dielectric layer extends from the die to the head.10. The method of claim 9, wherein:forming the dielectric layer includes forming a column trench sublayer on the die, the column trench sublayer having a column trench which exposes the I/O terminal; andforming the column includes:forming a column liner on the column trench sublayer, the column liner extending into the column trench and contacting the I/O terminal;forming a column layer on the column liner, so that the column layer fills the column trench and extends over the column trench sublayer adjacent to the column trench; andremoving the column layer and the column liner from over the column trench sublayer adjacent to the column trench.11. 
The method of claim 10, wherein forming the column layer includes an electroplating process to electroplate metal on the column liner.12. The method of claim 10, wherein forming the column trench sublayer includes:forming a trench material layer on the die, the trench material layer including photosensitive polymer material;exposing the trench material layer to patterned radiation, the patterned radiation having a spatial distribution aligned to a spatial distribution of the I/O terminal; anddeveloping the trench material layer to form the column trench.13. The method of claim 10, wherein:forming the dielectric layer includes forming a head trench sublayer on the column trench sublayer, the head trench sublayer having a head trench which exposes the column; andforming the head includes:forming a head liner on the head trench sublayer, the head liner extending into the head trench and contacting the column;forming a head layer on the head liner, so that the head layer fills the head trench and extends over the head trench sublayer adjacent to the head trench; and removing the head layer and the head liner from over the head trench sublayer adjacent to the head trench.14. The method of claim 9, wherein:the dielectric layer includes a column trench which exposes the I/O terminal and a head trench which opens onto the column trench; andforming the pillar includes:forming a pillar liner on the dielectric layer, the pillar liner extending into the head trench, into the column trench, and contacting the I/O terminal;forming a pillar layer on the pillar liner, so that the pillar layer fills the column trench and the head trench, and extends over the dielectric layer adjacent to the head trench; andremoving the pillar layer and the pillar liner from over the dielectric layer adjacent to the head trench.15. 
The method of claim 9, wherein forming the dielectric layer includes an additive process which disposes dielectric material on the die to form at least a portion of the dielectric layer.16. The method of claim 9, wherein forming the pillar includes:forming a seed layer which is electrically coupled to the I/O terminal, the seed layer being electrically conductive;forming a plating mask on the seed layer, the plating mask including a column opening which exposes the seed layer;forming the column in the column opening by a plating process;removing the plating mask; andremoving the seed layer where exposed by the column.17. The method of claim 16, wherein forming the plating mask includes forming the column opening using a laser ablation process.18. The method of claim 9, wherein forming the pillar includes an additive process which disposes electrically conductive material on the die to form at least a portion of the pillar.19. The method of claim 9, wherein forming the dielectric layer includes disposing dielectric material on the die around the pillar and using a press mold process to mold the dielectric layer.20. The method of claim 9, wherein forming the pillar further includes forming a barrier layer on the head, the barrier layer including a metal selected from the group consisting of nickel, palladium, platinum, titanium, tantalum, cobalt, tungsten, molybdenum, and zinc.
INDUSTRIAL CHIP SCALE PACKAGE FOR MICROELECTRONIC DEVICE[0001] This relates generally to microelectronic devices, and more particularly to chip scale packaging in microelectronic devices.BACKGROUND[0002] Microelectronic devices are continually reducing in size and cost. Moreover, densities of components in the microelectronic devices are increasing. As the size is reduced, power and current density is increased through the input/output (I/O) structures such as bump bond structures. This results in higher temperatures, and risks failures due to electromigration. Meeting reliability targets and cost targets together has been challenging for package designs. SUMMARY[0003] In described examples, a microelectronic device has a die with input/output (I/O) terminals, a dielectric layer on the die, and pillars electrically coupled to the I/O terminals, and extending through the dielectric layer to an exterior of the microelectronic device. The pillars are electrically conductive. Each pillar includes a column electrically coupled to one of the I/O terminals, and a head contacting the column at an opposite end of the column from the I/O terminal. The head extends laterally past the column in at least one lateral direction.BRIEF DESCRIPTION OF THE DRAWINGS[0004] FIG. 1 is a cross section of an example microelectronic device.[0005] FIG. 2A through FIG. 2L are cross sections of a microelectronic device depicted in stages of an example method of formation.[0006] FIG. 3A through FIG. 3F are cross sections of a microelectronic device depicted in stages of another example method of formation.[0007] FIG. 4A through FIG. 4F are cross sections of a microelectronic device depicted in stages of another example method of formation.[0008] FIG. 5A through FIG. 5G are cross sections of a microelectronic device depicted in stages of another example method of formation.DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS[0009] The drawings are not drawn to scale. 
This description is not limited by the illustrated ordering of acts or events, as some acts or events may occur in different orders and/or concurrently with other acts or events. Furthermore, some illustrated acts or events are optional to implement a methodology in accordance with this description.[0010] A microelectronic device has a die with input/output (I/O) terminals. The die may be manifested, for example, as an integrated circuit, a discrete semiconductor device, or a microelectrical mechanical system (MEMS) device. The I/O terminals may include, for example, bond pads, bond areas of a redistribution layer (RDL), or bond areas of a top interconnect level. The microelectronic device includes a dielectric layer on the die. The dielectric layer may include, for example, organic polymer, silicone polymer, or inorganic dielectric material. The microelectronic device further includes pillars electrically coupled to the I/O terminals. The pillars may directly contact the I/O terminals, or may be electrically coupled to the I/O pads through electrically conductive material. The pillars extend through the dielectric layer to an exterior of the microelectronic device. The pillars are electrically conductive. Each pillar includes at least one column electrically coupled to at least one of the I/O terminals. Each pillar further includes a head contacting the at least one column. The head is located on an opposite end of the pillar from the I/O terminal. The head extends laterally past the column in at least one lateral direction. The dielectric layer extends from the die to the head, and laterally surrounds the column. 
In this description, the terms "lateral" and "laterally" refer to a direction parallel to a plane of a surface of the die on which the I/O terminals are located.[0011] Also, in this description, terms such as top, over, and above should not be construed as limiting the position or orientation of a structure or element, but should be used to provide spatial relationship between structures or elements.[0012] In this description, if an element is referred to as being connected to, coupled to, on, or in contact with, another element, then it may be directly connected to, directly coupled to, directly on, or directly in contact with, the other element, or intervening elements may be present. Also, in this description, if an element is referred to as being directly connected to, directly coupled to, directly on, or directly in contact with, another element, no other intentionally disposed intervening elements are present. Other terms used to describe relationships between elements should be interpreted in like fashion, for example, between versus directly between, adjacent versus directly adjacent, and so on.[0013] FIG. 1 is a cross section of an example microelectronic device. The microelectronic device 100 includes a die 102. The die 102 may contain at least one integrated circuit having a semiconductor substrate and an interconnect region. Alternatively, the die 102 may contain at least one discrete semiconductor device such as a power transistor. Further, the die 102 may contain a MEMS device such as an acceleration sensor. Other manifestations of the die 102 are within the scope of this example. The die 102 includes I/O terminals 104. The I/O terminals 104 may be bond pads electrically coupled to interconnects of the microelectronic device. Alternatively, the I/O terminals 104 may be bond areas of an RDL which is located over, and is electrically coupled to, the interconnects of the microelectronic device. 
Further, the I/O terminals 104 may be bump pads in a bond-over-active-circuit (BOAC) structure of the microelectronic device. Other manifestations of the I/O terminals 104 are within the scope of this example. The I/O terminals 104 may vary in size across the die 102, or may be uniform in size.[0014] The microelectronic device 100 includes a dielectric layer 106 on the die 102. The dielectric layer 106 may include, for example, organic polymer such as epoxy, crosslinked polyisoprene, polyimide, or methacrylate. Alternatively, the dielectric layer 106 may include silicone polymer. Further, the dielectric layer 106 may include inorganic dielectric material such as silicon dioxide, silicon nitride, silicon oxynitride, or aluminum oxide. The dielectric layer 106 may have a thickness 108 of 5 microns to 100 microns, for example.[0015] The microelectronic device 100 includes pillars 110 which are electrically coupled to the I/O terminals 104. The pillars 110 extend through the dielectric layer 106 to an exterior of the microelectronic device 100. Each pillar 110 includes a column 112 which is electrically coupled to one of the I/O terminals 104. The columns 112 may directly contact the I/O terminals 104, as depicted in FIG. 1. Alternatively, the columns 112 may be electrically coupled to the I/O terminals 104 through an electrically conductive material, such as a seed layer for an electroplating operation. The columns 112 are electrically conductive. The columns 112 may have, for example, a copper core laterally surrounded by a column liner which reduces diffusion of copper from the copper core into the dielectric layer 106. Alternatively, the columns 112 may include other metals such as nickel, platinum, aluminum, tungsten, or gold, or other electrically conductive material such as graphene or carbon nanotubes.[0016] The pillars 110 further include heads 114 on the columns 112.
Each of the columns 112 is contacted by at least one of the heads 114, and each of the heads 114 contacts at least one of the columns 112. The heads 114 may directly contact the columns 112, or may contact the columns 112 through an electrically conductive material such as a portion of a diffusion barrier or seed layer. The heads 114 may have compositions similar to compositions of the columns 112, or may have different compositions. The I/O terminals 104 are coupled to a first end of the columns 112, and the heads 114 contact a second end of the columns 112, the second end being located opposite from the first end. Each of the heads 114 extends laterally past the column 112 contacted by that head 114 in at least one lateral direction, and possibly in all lateral directions. The columns 112 and the heads 114 may have any of the configurations and may include any of the materials disclosed in the commonly assigned Patent Application Serial Number US 16/030,371, filed July 9, 2018, which is incorporated herein by reference, but which is not admitted to be prior art.[0017] The pillars 110 may include barrier layers 116 on the heads 114. The barrier layers 116 may include, by way of example, nickel, palladium, platinum, titanium, tantalum, cobalt, tungsten, molybdenum, or zinc. The barrier layers 116 may advantageously reduce oxidation or contamination of the heads 114.[0018] The pillars 110 may further include solder layers 118 on the barrier layers 116, or on the heads 114 if the barrier layers 116 are omitted. The solder layers 118 are located at an exterior of the microelectronic device 100. The solder layers 118 may include, by way of example, tin, silver, bismuth, or other metals. The barrier layers 116 may advantageously reduce formation of intermetallic compounds.[0019] The dielectric layer 106 extends from the die 102 to the heads 114, and may optionally extend further, to the barrier layers 116 or to the solder layers 118.
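The laterally extended head bears directly on the electromigration concern raised in the background: for a given current, a wider contact cross section lowers the average current density. A rough sketch of the effect, using hypothetical dimensions and a hypothetical current (none of these values are taken from this description):

```python
import math

def current_density(current_a, diameter_um):
    """Average current density (A/cm^2) through a circular cross section."""
    radius_cm = (diameter_um / 2) * 1e-4  # 1 micron = 1e-4 cm
    area_cm2 = math.pi * radius_cm ** 2
    return current_a / area_cm2

# Hypothetical values: 1 A through a 50 micron column versus a 100 micron head.
j_column = current_density(1.0, 50.0)
j_head = current_density(1.0, 100.0)
```

Doubling the diameter quarters the current density, which illustrates why a head that extends laterally past its column can ease current crowding at the solder connection.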
The pillars 110 extend from the I/O terminals 104, through the dielectric layer 106, to an exterior of the microelectronic device 100. The dielectric layer 106 may advantageously provide support for the pillars 110 and provide protection for the die 102 during subsequent assembly and packaging operations.[0020] FIG. 2A through FIG. 2L are cross sections of a microelectronic device depicted in stages of an example method of formation. Referring to FIG. 2A, the microelectronic device 200 includes a die 202. The die 202 may be a portion of a semiconductor wafer or MEMS substrate. The semiconductor wafer or the MEMS substrate may contain additional die, not shown in FIG. 2A, similar to the die 202. Alternatively, the die 202 may be separate from other die, for example as a result of singulating the die 202 from a semiconductor wafer or MEMS substrate.[0021] The die 202 includes I/O terminals 204. The I/O terminals 204 may include primarily aluminum or copper, and may have cap layers or under bump metallization (UBM) layers of nickel, palladium, platinum, gold, or other metals. The I/O terminals 204 may be electrically coupled to components in the die 202 through vias 220 or other electrically conductive structures in the die 202.[0022] A trench material layer 222 is formed on the die 202, covering the I/O terminals 204. The trench material layer 222 may include photosensitive polymer material, for example, photoresist containing polyisoprene, photosensitive polyimide, photosensitive epoxy such as SU-8, or photoresist containing methacrylate. The trench material layer 222 may include organic resin such as poly methyl methacrylate (PMMA) which is sensitive to electron beam radiation. The trench material layer 222 may be formed, for example, by a spin-coat process, or by application as a dry film.[0023] The trench material layer 222 is exposed to patterned radiation 224 such as ultraviolet (UV) radiation from a photolithographic tool. 
The patterned radiation 224 has a spatial distribution aligned to a spatial distribution of the I/O terminals 204. In one version of this example, in which the photosensitive polymer material in the trench material layer 222 has a negative tone, the patterned radiation 224 may expose the trench material layer 222 in areas for a subsequently-formed column trench sublayer 226, shown in FIG. 2B. Referring back to FIG. 2A, the patterned radiation 224 may be blocked from areas for column trenches 228 over the I/O terminals 204, as depicted in FIG. 2A. In an alternative version of this example, in which the photosensitive polymer material in the trench material layer 222 has a positive tone, the patterned radiation 224 may expose the trench material layer 222 in the areas for the column trenches 228, and may be blocked from areas for the subsequently-formed column trench sublayer 226.[0024] Referring to FIG. 2B, a develop operation removes material from the trench material layer 222 of FIG. 2A in the column trenches 228, to form the column trench sublayer 226. The column trench sublayer 226 may be heated to remove volatile material such as solvent, and optionally to increase cross-linking between polymer molecules in the column trench sublayer 226 to provide more durability. The column trenches 228 in the column trench sublayer 226 expose the I/O terminals 204.[0025] Alternatively, the column trench sublayer 226 may be formed by removing material from the trench material layer 222 of FIG. 2A by a laser ablation process. Using the laser ablation process enables forming the column trench sublayer 226 from a wider range of materials, including materials that are not photosensitive, which may advantageously reduce fabrication costs of the microelectronic device 200.[0026] Referring to FIG. 2C, a column liner 230 is formed on the column trench sublayer 226, extending into the column trenches 228 and contacting the I/O terminals 204. 
The column liner 230 may include an adhesion sublayer which directly contacts the column trench sublayer 226 in the column trenches 228. The adhesion sublayer may include metals which have good adhesion to the column trench sublayer 226, such as titanium or titanium tungsten, and may be formed by a sputter process. The column liner 230 may also include a barrier sublayer which is effective at reducing diffusion of copper into the column trench sublayer 226. The barrier sublayer may include, for example, titanium nitride or tantalum nitride, and may be formed by a reactive sputter process or by an atomic layer deposition (ALD) process. The column liner 230 may include a seed sublayer which provides a suitable electrically conductive surface for a subsequent electroplating operation. The seed sublayer may include nickel or copper, for example, and may be formed by a sputter process or an evaporation process.[0027] Referring to FIG. 2D, a column electroplating process using a column plating bath (232) forms a column layer 234 on the column liner 230. The column layer 234 fills the column trenches 228 and extends over the column trench sublayer 226 adjacent to the column trenches 228. The column layer 234 may include primarily copper, for example, greater than 90 weight percent copper. The column layer 234 may also include other metals, such as nickel, silver, or gold. The column plating bath (232) includes copper, for example in the form of copper sulfate. The column plating bath (232) may include additives such as levelers; suppressors, sometimes referred to as inhibitors; and accelerators, sometimes referred to as brighteners, to provide a desired low thickness of the column layer 234 over the column trench sublayer 226 adjacent to the column trenches 228.[0028] Referring to FIG. 
2E, the column layer 234 and the column liner 230, over the column trench sublayer 226 adjacent to the column trenches 228, are removed, leaving the column liner 230 and the column layer 234 in the column trenches 228 to provide columns 212. The column liner 230 extends around a lateral boundary of each column 212. The column layer 234 over the column trench sublayer 226 may be removed, for example, by a copper chemical mechanical polishing (CMP) process, which uses a polishing pad and a slurry which removes copper. The column liner 230 over the column trench sublayer 226 may also be removed by the copper CMP process, or may be removed by a selective wet etch process. The method to form the columns 212 as disclosed in reference to FIG. 2C through FIG. 2E is sometimes referred to as a damascene process, specifically a copper damascene process.[0029] Referring to FIG. 2F, a head trench sublayer 236 is formed over the column trench sublayer 226. The head trench sublayer 236 has head trenches 238 which expose tops of the columns 212. Each of the head trenches 238 extends laterally past the top of the column 212 which is exposed by that head trench 238, in at least one lateral direction. The head trench sublayer 236 may have a composition similar to a composition of the column trench sublayer 226. Furthermore, the head trench sublayer 236 may be formed by a process sequence similar to the steps disclosed in reference to FIG. 2A and FIG. 2B used to form the column trench sublayer 226.[0030] Referring to FIG. 2G, a head liner 240 is formed on the head trench sublayer 236, extending into the head trenches 238, and contacting the columns 212. The head liner 240 may have a sublayer structure and composition similar to a sublayer structure and composition of the column liner 230, that is, an adhesion sublayer including titanium or titanium tungsten, a barrier sublayer including titanium nitride or tantalum nitride, and a seed sublayer including nickel or copper. 
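The copper electroplating steps described for the column layer and head layer admit a quick throughput estimate from Faraday's law of electrolysis. A sketch, where the 20 mA/cm^2 current density is an assumed illustrative value, not one given in this description:

```python
# Faraday's-law estimate of copper electroplating rate (illustrative sketch).
M_CU = 63.55      # g/mol, molar mass of copper
N_ELECTRONS = 2   # Cu(2+) + 2e- -> Cu
F = 96485.0       # C/mol, Faraday constant
RHO_CU = 8.96     # g/cm^3, density of copper

def plating_rate_um_per_min(j_a_per_cm2):
    """Deposition rate in microns per minute, assuming 100% current efficiency."""
    rate_cm_per_s = (M_CU * j_a_per_cm2) / (N_ELECTRONS * F * RHO_CU)
    return rate_cm_per_s * 1e4 * 60  # cm/s -> um/min

rate = plating_rate_um_per_min(0.020)   # assumed 20 mA/cm^2
minutes_for_50_um = 50.0 / rate         # e.g. to fill a hypothetical 50 um trench
```

At this assumed current density the rate works out to roughly 0.44 microns per minute, which is why plating-bath additives that permit higher current density without roughening are valuable for thick pillar structures.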
The sublayers of the head liner 240 may be formed by processes similar to the processes used to form the sublayers of the column liner 230, that is, a sputter process, a reactive sputter process or an ALD process, and a sputter process or an evaporation process.[0031] A head electroplating process using a head plating bath (242) forms a head layer 244 on the head liner 240. The head layer 244 fills the head trenches 238 and extends over the head trench sublayer 236 adjacent to the head trenches 238. The head layer 244 may include primarily copper, and may have a composition similar to the column layer 234. The head plating bath (242) includes copper, and may include similar additives to the column plating bath 232 of FIG. 2D, that is, levelers; suppressors, and accelerators, to provide a desired low thickness of the head layer 244 over the head trench sublayer 236 adjacent to the head trenches 238.[0032] Referring to FIG. 2H, the head layer 244 and the head liner 240, over the head trench sublayer 236 adjacent to the head trenches 238, are removed, leaving the head liner 240 and the head layer 244 in the head trenches 238 to provide heads 214. The head liner 240 extends around a lateral boundary of each head 214. The head layer 244 and the head liner 240 may be removed from over the head trench sublayer 236 by a copper CMP process, optionally followed by a wet etch process. The heads 214 make electrical connections to the columns 212. The columns 212 combined with the heads 214 provide pillars 210 of the microelectronic device 200. The column liner 230 may advantageously reduce diffusion of copper from the column layer 234 into the column trench sublayer 226. Similarly, the head liner 240 may advantageously reduce diffusion of copper from the head layer 244 into the head trench sublayer 236. Diffusion of copper into the column trench sublayer 226 or into the head trench sublayer 236 may degrade reliability of the microelectronic device 200.[0033] Referring to FIG. 
2I, a barrier plating process using a barrier plating bath 246 forms barrier layers 216 on the heads 214. The barrier plating process may be an electroless plating process. The barrier layers 216 may have compositions as disclosed in reference to the barrier layers 116 of FIG. 1. The barrier plating bath 246 may include nickel, in the form of nickel sulfate, and may include other metals, in the form of metal salts, to form a desired composition for the barrier layers 216. The barrier layers 216 are components of the pillars 210. Other methods of forming the barrier layers 216 are within the scope of this example.[0034] Referring to FIG. 2J, the barrier layers 216 are exposed to a liquid solder source 248 containing melted solder which forms solder layers 218 on the barrier layers 216. The liquid solder source 248 may be pumped onto the microelectronic device 200 to expose the barrier layers 216 to the melted solder. Alternatively, the microelectronic device 200 may be dipped into the melted solder of the liquid solder source 248 to expose the barrier layers 216 to the melted solder. The solder layers 218 may have a composition as disclosed in reference to the solder layers 118 of FIG. 1, that is, may include tin, silver, bismuth, or other metals. The solder layers 218 are components of the pillars 210.[0035] Referring to FIG. 2K, the microelectronic device 200 is assembled onto a circuit substrate 250. The circuit substrate 250 may be manifested as a printed circuit board (PCB) or a ceramic wiring substrate, for example. The circuit substrate 250 has pads 252 which are electrically conductive, located on an insulating layer 254. The pads 252 may be manifested as die pads, leads, traces, routings, or other electrically conductive component of the circuit substrate 250.
The pads 252 may include primarily copper, and may optionally include gold, nickel, or other metal to provide a suitable surface for a solder joint. The insulating layer 254 may be manifested as a fiberglass reinforced plastic (FRP) board, a ceramic substrate, or other insulating medium. The microelectronic device 200 is assembled onto the circuit substrate 250 by bringing the solder layers 218 into contact with the pads 252 and heating the solder layers 218 to form solder connections between the pillars 210 and the pads 252.[0036] FIG. 2L depicts the microelectronic device 200 assembled onto the circuit substrate 250. The solder layers 218 provide solder connections between the pillars 210 and the pads 252. A combination of the column trench sublayer 226 and the head trench sublayer 236 provide a dielectric layer 206. The column trench sublayer 226 laterally surrounds the columns 212. The head trench sublayer 236 laterally surrounds the heads 214. The dielectric layer 206 of this example extends from the die 202 to the barrier layers 216, laterally surrounding the columns 212 and the heads 214. The dielectric layer 206 advantageously provides support for the pillars 210 and provides protection for the die 202 during assembly to the circuit substrate 250, and afterward, during use of the assembled microelectronic device 200.[0037] FIG. 3A through FIG. 3F are cross sections of a microelectronic device depicted in stages of another example method of formation. Referring to FIG. 3A, the microelectronic device 300 includes a die 302. The die 302 may be a portion of a semiconductor wafer or MEMS substrate, or may be a discrete workpiece. The die 302 includes I/O terminals 304. The I/O terminals 304 may have compositions similar to the compositions disclosed in reference to the I/O terminals 204 of FIG. 2A. 
The die 302 may include electrically conductive members 320 which electrically couple the I/O terminals 304 to one or more components in the die 302.[0038] A dielectric layer 306 is formed on the die 302. The dielectric layer 306 is formed to have column trenches 328 which expose the I/O terminals 304. The dielectric layer is further formed to have one or more head trenches 338 which open onto the column trenches 328. In this example, the head trench 338 opens onto two column trenches 328.[0039] The dielectric layer 306 may be formed by a first additive process, as depicted in FIG. 3A, which disposes dielectric material 356 using a binder jetting apparatus 358 onto the die 302 to form at least a portion of the dielectric layer 306. In this description, an additive process disposes the dielectric material 356 in a desired area and does not dispose the dielectric material 356 outside of the desired area, so that it is unnecessary to remove a portion of the disposed dielectric material 356 to produce a final desired shape of the dielectric layer 306. Additive processes may enable forming the dielectric layer 306 without photolithographic processes, thus advantageously reducing fabrication cost and complexity. Examples of additive processes suitable for forming the dielectric layer 306 include binder jetting, material jetting, directed energy deposition, material extrusion, powder bed fusion, sheet lamination, vat photopolymerization, direct laser deposition, electrostatic deposition, laser sintering, and photo-polymerization extrusion.[0040] In one version of this example, the dielectric layer 306 may include organic polymer such as epoxy, benzo-cyclobutene (BCB), polyimide, or acrylic. In another version, the dielectric layer 306 may include silicone polymer. In a further version, the dielectric layer 306 may include inorganic dielectric material such as silicon dioxide, silicon nitride, boron nitride, or aluminum oxide.
The inorganic dielectric material may be implemented as particles of the inorganic material, sintered or with a polymer binder.[0041] The dielectric layer 306 may be heated after disposing the dielectric material 356, to remove volatile material from the dielectric layer 306, or to crosslink polymer material in the dielectric layer 306. The dielectric layer 306 may be heated, for example, by a radiant heating process, by a hotplate heating process, by a furnace heating process, or by a forced air convection heating process.[0042] Referring to FIG. 3B, a pillar liner 360 is formed on the dielectric layer 306, extending into the head trench 338 and into the column trenches 328, and contacting the I/O terminals 304. The pillar liner 360 may have a layer structure and composition similar to the layer structure and composition disclosed in reference to the column liner 230 of FIG. 2C, that is, an adhesion sublayer including titanium or titanium tungsten, a barrier sublayer including titanium nitride or tantalum nitride, and a seed sublayer including nickel or copper. The pillar liner 360 may be formed by any of the processes disclosed in reference to the column liner 230, that is, a sputter process, a reactive sputter process or an ALD process, and a sputter process or an evaporation process.[0043] Referring to FIG. 3C, a pillar layer 362 is formed on the pillar liner 360, filling the column trenches 328 and the head trench 338, and extending onto the pillar liner 360 adjacent to the head trench 338. The pillar layer 362 may be formed by an electroplating process. The pillar layer 362 may include primarily copper, that is, greater than 90 weight percent copper. The pillar layer 362 may optionally include other metals, such as nickel, silver, or gold.[0044] Referring to FIG. 
3D, the pillar layer 362 and the pillar liner 360, over the dielectric layer 306 adjacent to the head trench 338, are removed, leaving the pillar layer 362 and the pillar liner 360 in the column trenches 328 and the head trench 338 to provide columns 312 and a head 314, respectively, of a pillar 310. The pillar layer 362 and the pillar liner 360 may be removed from over the dielectric layer 306 adjacent to the head trench 338, for example, by a CMP process, an etch back process, or a combination thereof. The method to form the columns 312 and the head 314 as disclosed in reference to FIG. 3B through FIG. 3D is sometimes referred to as a dual damascene process. The dual damascene process may provide reduced fabrication cost and complexity compared to other methods of forming the pillar 310.[0045] Referring to FIG. 3E, a barrier layer 316 is formed on the head 314. Additional barrier layers 316 are formed on additional heads 314, if present in the microelectronic device 300. The barrier layer 316 may have a composition as disclosed in reference to the barrier layers 116 of FIG. 1, that is, may include nickel, palladium, platinum, titanium, tantalum, cobalt, tungsten, molybdenum, or zinc, and may be formed as disclosed in reference to the barrier layers 216 of FIG. 2I, that is, by an electroless plating process using a barrier plating bath. The barrier layer 316 is a component of the pillar 310.[0046] A solder layer 318 is formed on the barrier layer 316. The solder layer 318 may be formed by a second additive process, for example a material extrusion process which disposes solder paste 364 onto the barrier layer 316 using a material extrusion apparatus 366. The solder layer 318 may be heated to remove volatile material or to reduce a resistance between the solder layer 318 and the barrier layer 316. The solder layer 318 is a component of the pillar 310.
Additional solder layers 318 are formed on additional barrier layers 316, if present in the microelectronic device 300.[0047] Referring to FIG. 3F, the microelectronic device 300 is assembled onto a circuit substrate 350. The circuit substrate 350 has a pad 352, which is electrically conductive, located on an insulating layer 354. The microelectronic device 300 is assembled onto the circuit substrate 350 by bringing the solder layer 318 into contact with the pad 352 and heating the solder layer 318 to form a solder connection between the pillar 310 and the pad 352. The dielectric layer 306 may accrue advantages for the microelectronic device 300 similar to those disclosed in reference to FIG. 2L, that is, may provide support for the pillar 310 and provides protection for the die 302 during assembly to the circuit substrate 350, and afterward, during use of the assembled microelectronic device 300. [0048] FIG. 4A through FIG. 4F are cross sections of a microelectronic device depicted in stages of another example method of formation. Referring to FIG. 4A, the microelectronic device 400 includes a die 402. The die 402 may be a portion of a workpiece containing additional devices, or may be a discrete workpiece containing only the die 402. The die 402 includes at least one I/O terminal 404. The I/O terminal 404 may have a composition similar to the compositions disclosed in reference to the I/O terminals 204 of FIG. 2A, that is, may include primarily aluminum or copper, and may have a cap layer or UBM layer of nickel, palladium, platinum, gold, or other metals.[0049] A dielectric layer 406 is formed on the die 402. The dielectric layer 406 is formed to have a column trench 428 which exposes the I/O terminal 404. The dielectric layer is further formed to have a head trench 438 which opens onto the column trench 428. 
The dielectric layer 406 may have additional column trenches, not shown, which expose additional I/O terminals, also not shown, and may have additional head trenches, not shown, which open onto the additional column trenches. At least a portion of the dielectric layer 406 may be formed by a first additive process, such as a directed energy process using a directed energy apparatus 458 to dispose dielectric material 456 onto the die 402, as depicted in FIG. 4A. The directed energy process delivers the dielectric material 456 in the form of microparticles or nanoparticles in an inert gas stream to the die 402, and uses directed thermal energy, for example, from a focused laser beam, to fuse the dielectric material 456 on the die 402. The dielectric layer 406 may include any of the materials disclosed in reference to the dielectric layer 306 of FIG. 3A, that is, may include organic polymer such as epoxy, BCB, polyimide, or acrylic, may include silicone polymer, or may include inorganic dielectric material such as silicon dioxide, silicon nitride, boron nitride, or aluminum oxide, optionally implemented as particles of the inorganic material, sintered or with a polymer binder.[0050] Referring to FIG. 4B, electrically conductive material 468 is disposed in the column trench 428 and in the head trench 438 to form at least a portion of a pillar conductor 470. The pillar conductor 470 in the column trench 428 provides a column 412 of a pillar 410 of the microelectronic device 400. The pillar conductor 470 in the head trench 438 provides a head 414 of the pillar 410. The electrically conductive material 468 may be disposed in the column trench 428 and the head trench 438 by a second additive process, such as an electrostatic deposition process using an electrostatic deposition apparatus 472, as depicted in FIG. 4B. Other additive processes may be used to form the column 412 and the head 414.
The electrically conductive material 468 may include metal nanoparticles, such as copper, gold, silver, or aluminum nanoparticles. The electrically conductive material 468 may include carbon nanotubes, graphene, or other graphitic material. In one version of this example, the column 412 and the head 414 may be formed by separate additive processes using different electrically conductive materials. The column 412 or the head 414 may be heated to remove volatile material such as solvent or carrier fluid, to fuse electrically conductive particles of the electrically conductive material 468 together, or to melt metals in the electrically conductive material 468 to form an alloy in the column 412 or the head 414. Metal nanoparticles in the electrically conductive material 468 may be fused or melted at temperatures significantly lower than melting temperatures of bulk metals having a same composition, which may advantageously reduce thermal degradation of the microelectronic device 400.[0051] Referring to FIG. 4C, barrier layers 416 are formed on the head 414 in a first contact area 474 and in a second contact area 476. The barrier layers 416 may have compositions similar to the compositions disclosed for the barrier layers 116 of FIG. 1. The barrier layers 416 may be formed by a third additive process, such as an electrochemical deposition process using an electrochemical deposition apparatus 478, as depicted in FIG. 4C. The barrier layers 416 may be formed by other methods, such as sputtering thin films of barrier metals, followed by masking and etching. The barrier layers 416 are components of the pillar 410.[0052] Referring to FIG. 4D, an isolation layer 480 is formed on the head 414, adjacent to the barrier layers 416. The isolation layer 480 may prevent unintended electrical contact to the head 414. The isolation layer 480 may include, for example, organic polymer material, silicone polymer material, inorganic material, or a combination thereof. 
The isolation layer 480 may be formed by a fourth additive process, such as a photo-polymerization extrusion process using a photo-polymerization extrusion apparatus 482 having a monomer source 482a, and an ultraviolet laser 482b. The isolation layer 480 is a component of the pillar 410.[0053] Referring to FIG. 4E, solder layers 418 may be formed on the barrier layers 416. The solder layers 418 may be formed by a fifth additive process, for example a material extrusion process which disposes solder paste 464 onto the barrier layers 416 using a material extrusion apparatus 466. The solder layers 418 may be heated, as disclosed in reference to FIG. 3E, that is, to remove volatile material or to reduce a resistance between the solder layer 418 and the barrier layer 416. The solder layers 418 are components of the pillar 410.[0054] Referring to FIG. 4F, the microelectronic device 400 is assembled onto a circuit substrate 450. The circuit substrate 450 has an insulator layer 454 and pads 452a, 452b, and 452c, which are electrically conductive, on the insulator layer 454. The microelectronic device 400 is assembled onto the circuit substrate 450 by bringing the solder layers 418 into contact with the pads 452a and 452c, and heating the solder layers 418 to form solder connections between the pads 452a and 452c and the pillar 410 in the first contact area 474 and the second contact area 476, respectively. The isolation layer 480 may prevent electrical contact between the pad 452b and the head 414. The dielectric layer 406 may accrue advantages for the microelectronic device 400 similar to those disclosed in reference to FIG. 2L, that is, may provide support for the pillar 410 and provide protection for the die 402 during assembly to the circuit substrate 450, and afterward, during use of the assembled microelectronic device 400.[0055] FIG. 5A through FIG. 5G are cross sections of a microelectronic device depicted in stages of another example method of formation.
Referring to FIG. 5A, the microelectronic device 500 includes a die 502. The die 502 includes I/O terminals 504. A seed layer 584 is formed over the die 502. The seed layer 584 is electrically conductive, and makes electrical contact with the I/O terminals 504. The seed layer 584 may include an adhesion sublayer with titanium, tungsten, or nickel, directly on the die 502. The seed layer 584 may include a plating surface sublayer with copper or nickel, to provide a suitable surface for an electroplating process.[0056] A plating mask 586 is formed on the seed layer 584. The plating mask 586 has column openings 588 which expose the seed layer 584 over the I/O terminals 504. The column openings 588 may be tapered to be more narrow at an end of each column opening 588 that is proximate to the I/O terminals 504 and wider at an opposite end of each column opening 588 that is distal to the I/O terminals 504.[0057] In one version of this example, the plating mask 586 may include organic polymer, and may be formed by forming a mask layer of the organic polymer on the seed layer 584. The column openings 588 may be formed in the mask layer by a laser ablation process using a scanned laser ablation apparatus 590. After formation of the column openings 588 is completed, the remaining mask layer provides the plating mask 586. Forming the column openings 588 with the tapered configuration of FIG. 5A may advantageously provide additional process latitude for the laser ablation process. [0058] In another version, the plating mask 586 may include photoresist, photosensitive polyimide, or photosensitive silicone polymer, and may be formed by a photolithographic operation. Forming the column openings 588 with the tapered configuration may advantageously provide additional process latitude for the photolithographic operation. Alternatively, the plating mask 586 may be formed by an additive process, or a screen printing process.[0059] Referring to FIG. 
5B, pillar conductors 570 are formed in the column openings 588 by an electroplating operation using the seed layer 584. The pillar conductors 570 may include, for example, copper, nickel, gold, silver, palladium, platinum, or tungsten. FIG. 5B depicts the pillar conductors 570 partway to completion by the electroplating operation.[0060] Referring to FIG. 5C, the electroplating operation is continued to complete the pillar conductors 570. The pillar conductors 570 of this example extend above and laterally past the column openings 588. Portions of the pillar conductors 570 in the column openings 588 provide columns 512 of pillars 510 of the microelectronic device 500. Portions of the pillar conductors 570 above the plating mask 586 provide heads 514 of the pillars 510.[0061] Referring to FIG. 5D, barrier layers 516 are formed on the heads 514. The barrier layers 516 may be formed, for example, by one or more electroplating processes using the seed layer 584, one or more electroless plating processes, by an additive process, or by sputtering thin films of barrier metals, followed by masking and etching. The barrier layers 516 may have compositions as disclosed in reference to the barrier layers 116 of FIG. 1. The barrier layers 516 are components of the pillars 510.[0062] Referring to FIG. 5E, the plating mask 586 of FIG. 5D is removed. The plating mask 586 may be removed, for example, by an asher process using oxygen, an ozone process, a wet clean process using organic solvents, or a combination thereof. After the plating mask 586 is removed, the seed layer 584 is removed where exposed by the columns 512, leaving the seed layer 584 between the columns 512 and the I/O terminals 504. The seed layer 584 may be removed, for example, by a plasma etch process, a wet etch process, an electrochemical etch process (sometimes referred to as a reverse plating process), or a combination thereof. 
Portions of the seed layer 584 between the columns 512 and the I/O terminals 504 are components of the pillars 510.[0063] Referring to FIG. 5F, a dielectric layer 506 is formed on the die 502. The dielectric layer 506 may include any of the dielectric materials disclosed in reference to the dielectric layer 106 of FIG. 1. The dielectric layer 506 extends from the die 502 to the heads 514, and may optionally extend partway up lateral sides of the heads 514. The dielectric layer 506 may provide the advantages disclosed in reference to the dielectric layers 106, 206, 306, and 406 of the other examples herein, that is, may provide support for the pillars 510 and protection for the die 502 during assembly, and afterward, during use of the assembled microelectronic device 500.[0064] The dielectric layer 506 may be formed by a press mold process, in which dielectric material is disposed on the die 502 between the pillars 510 and subsequently molded into a desired configuration using a press mold plate 592. Other methods for forming the dielectric layer 506, such as a spin coat process followed by an etchback process, are within the scope of this example.[0065] Referring to FIG. 5G, the microelectronic device 500 is assembled onto a circuit substrate 550. The circuit substrate 550 has an insulator layer 554 and pads 552. The pads 552 are electrically conductive. Solder preforms 594 may be disposed on the pads 552. The microelectronic device 500 is assembled by bringing the pillars 510 and the pads 552 into contact with the solder preforms 594, as indicated in FIG. 5G. The solder preforms 594 are heated to reflow the solder preforms 594, forming solder joints between the pillars 510 and the pads 552.[0066] Various features of the examples disclosed herein may be combined in other manifestations of example integrated circuits. For example, the pillars 110 of FIG. 1 may be formed by any of the methods disclosed in reference to FIG. 2A through FIG. 2L, FIG. 
3A through FIG. 3F, FIG. 4A through FIG. 4F, or FIG. 5A through FIG. 5F. Similarly, the dielectric layer 106 of FIG. 1 may be formed by any of the methods disclosed in reference to FIG. 2A through FIG. 2L, FIG. 3A through FIG. 3F, FIG. 4A through FIG. 4F, or FIG. 5A through FIG. 5F. Steps disclosed in reference to example methods herein for forming the dielectric layers 206, 306, 406, or 506, may be combined with steps disclosed in reference to other examples herein for forming the columns 212, 312, 412, or 512, and may further be combined with steps disclosed in reference to further examples herein for forming the heads 214, 314, 414, or 514.[0067] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
A fastening system and a method for retaining temperature control devices used on semiconductor dies transfer the load of the temperature control devices around the semiconductor dies using a spring collet arrangement. An elongated spring collet is installed in a hole in the temperature control device so as to extend outwardly from the temperature control device with the spring collet being movable in the hole relative to the temperature control device in the direction of elongation of the spring collet. An expansion spring resiliently biases the spring collet so as to extend outwardly. An outer end of the spring collet is positioned in a pocket of a retention mechanism of a support member. An electrical connection is made as the temperature control device with the semiconductor die is forced down until a pin grid array thereon is properly seated in a socket on the support member. A retention screw is then extended through the spring collet for connection with the retention mechanism and expansion of the spring collet into locking engagement with the temperature control device to retain the temperature control device.
What is claimed is: 1. A fastening system for retaining on a support member an assembly of at least one semiconductor die and a temperature control device arranged in heat-conducting relation with said semiconductor die, said fastening system comprising:a support member; an assembly including at least one semiconductor die and a temperature control device arranged in heat-conducting relation with said semiconductor die, said temperature control device having at least one hole therein for receiving an elongated mounting member to mount said assembly on said support member; an elongated mounting member which can be installed in said hole in said temperature control device so as to extend outwardly from the temperature control device with said mounting member being movable in the hole relative to the temperature control device in the direction of elongation of said mounting member; and a retention member receivable in said mounting member for expanding said mounting member in the hole of the temperature control device into locking engagement with the temperature control device to prevent relative movement between the temperature control device and the mounting member in the direction of elongation of said mounting member. 2. The fastening system according to claim 1, wherein said assembly includes a spring clip holding said temperature control device and said at least one semiconductor die in heat-conducting relation.3. The fastening system according to claim 1, wherein said support member includes a retention mechanism connected to a mother board.4. The fastening system according to claim 1, wherein said support member includes at least one pocket for receiving an end of said mounting member.5. The fastening system according to claim 1, wherein said temperature control device is in the form of a heat pipe lid of said assembly.6. The fastening system according to claim 1, wherein said semiconductor die is a silicon processor die.7. 
The fastening system according to claim 1, wherein said elongated mounting member is a spring collet.8. The fastening system according to claim 7, wherein said spring collet has a one-way snap on one end thereof that allows the collet to be installed in said hole in the temperature control device but prevents the collet from falling out of the hole after installation.9. The fastening system according to claim 8, further comprising an expansion spring which can be telescoped over said one end of said spring collet, and wherein another end of said spring collet opposite said one end includes a retainer upon which one end of said expansion spring can be seated.10. The fastening system according to claim 1, wherein said system comprises a plurality of said elongated mounting members for installation in respective holes in said temperature control device and a plurality of said retention members receivable in respective ones of said mounting members.11. The fastening system according to claim 1, wherein said retention member is a screw which can be extended through said mounting member for threaded connection with said support member, said screw having a chamfer which expands said mounting member as said screw is threaded into the support member.12. 
A fastening system for retaining on a support member an assembly of at least one semiconductor die and a temperature control device arranged in heat-conducting relation with said semiconductor die, said fastening system comprising:an elongated mounting member which can be installed in a hole in the temperature control device so as to extend outwardly from the temperature control device with said mounting member being movable in the hole relative to the temperature control device in the direction of elongation of said mounting member; and a retention member receivable in said mounting member for expanding said mounting member in the hole of the temperature control device into locking engagement with the temperature control device to prevent relative movement between the temperature control device and the mounting member in the direction of elongation of said mounting member. 13. The fastening system according to claim 12, wherein said elongated mounting member is a spring collet.14. The fastening system according to claim 13, wherein said spring collet has a one-way snap on one end thereof that allows the collet to be installed in a hole in the temperature control device but prevents the collet from falling out of the hole after installation.15. The fastening system according to claim 14, further comprising an expansion spring which can be telescoped over said one end of said spring collet, and wherein another end of said spring collet opposite said one end includes a retainer upon which one end of said expansion spring can be seated.16. The fastening system according to claim 12, wherein said system comprises a plurality of said elongated mounting members for installation in respective holes in said temperature control device and a plurality of said retention members receivable in respective ones of said mounting members.17. 
The fastening system according to claim 12, wherein said elongated mounting member and said retention member have respective camming surfaces thereon which cooperate when said retention member is received in said mounting member to expand said mounting member.18. The fastening system according to claim 12, wherein said retention member is a screw which can be extended through said mounting member for threaded connection with the support member, said screw having a chamfer which expands said mounting member as said screw is screwed into the support member.19. A method for retaining on a supporting member an assembly of at least one semiconductor die and a temperature control device arranged in heat-conducting relation with said semiconductor die, said method comprising:installing at least one elongated mounting member in a hole in said temperature control device of said assembly so that said mounting member extends outwardly from the temperature control device with said mounting member being movable in the hole relative to the temperature control device in the direction of elongation of said mounting member; and mounting said assembly directly on said supporting member by way of said elongated mounting member, said mounting including inserting a retention member in said mounting member to expand said mounting member in the hole of the temperature control device into locking engagement with the temperature control device to prevent relative movement between the temperature control device and the mounting member in the direction of elongation of said mounting member. 20. The method according to claim 19, wherein said mounting member is a spring collet and said retention member is a screw which is extended through said mounting member and screwed into the support member, said screw having a chamfer which expands said mounting member as said screw is screwed into the support member.
FIELDThe present invention relates to a fastening system and method for holding and securing a temperature control device in heat conducting relation with the surface of a semiconductor die.BACKGROUNDTemperature control devices for semiconductor dies such as silicon dies include but are not restricted to heat sinks, heat pipes, iso-chillers, heaters, heat exchangers, cold plates, iso-sinks, heat pumps, thermal electric coolers and heaters, and peltiers. To allow good temperature transfer, the temperature control devices are held and secured in heat conducting relation with the surface of the semiconductor dies. The semiconductor dies include but are not restricted to processor dies, central processing unit (CPU) dies, chip sets and substrates.Most silicon dies are very brittle and do not withstand shock loading very well. This, in addition to the heavy temperature control devices, e.g., heat sinks, that are placed on top of the silicon dies presents a challenging problem in maintaining good heat transfer without damaging the dies.Spring clips, per se, are a known type of fastener used on many existing products. While a spring clip can provide a consistent load between a silicon die and a temperature control device, a problem with the use of a spring clip is that it allows the load of the temperature control device to be transferred through the silicon die. There is a need for an improved fastening system and method which overcome this problem. The present invention addresses this need.BRIEF DESCRIPTION OF THE DRAWINGSThe foregoing and a better understanding of the present invention will become apparent from the following detailed description of example embodiments and the claims when read in connection with the accompanying drawings, all forming a part of the disclosure of this invention. 
While the foregoing and following written and illustrated disclosure focuses on disclosing example embodiments of the invention, it should be clearly understood that the same is by way of illustration and example only and the invention is not limited thereto. The spirit and scope of the present invention are limited only by the terms of the appended claims.The following represents brief descriptions of the drawings, wherein:FIG. 1 is an exploded view of a processor cartridge from the bottom, for use in a fastening system according to the present invention.FIG. 2 is an exploded view of the processor cartridge of FIG. 1 from the top.FIG. 3 is a view from the side and slightly above a collet spring of the fastening system of the invention.FIG. 4 is a side view from slightly above the collet spring of FIG. 3, shown inverted, and in relation to an expansion spring of the fastening system.FIG. 5 is a side view of the processor cartridge of FIG. 1 shown in assembled condition with three of four collet springs with expansion springs as shown in FIGS. 3 and 4, installed in holes of the temperature control device of the cartridge for use in retaining the assembly of the cartridge on a support member.FIG. 6 is a side view from above of a support member in the form of a mother board with socket and retention mechanism over which the cartridge of FIG. 5 is placed for electrical connection of the socket with a pin grid array on a CPU substrate of the cartridge and retention of the temperature control device of the cartridge on the retention mechanism of the support member according to the fastening system of the invention.FIG. 7 is a side view like FIG. 6 after the cartridge is installed and showing installation of the retention screws of the fastening system.FIG. 8 is a view from the top of the installed cartridge of the support member of FIG. 7 showing the location of section A-A illustrated in FIG. 9.FIG. 
9 is a sectional view of the fastening system of the disclosed embodiment taken along the section A-A in FIG. 8.DETAILED DESCRIPTIONReferring now to the drawings, the disclosed embodiment of the fastening system 1 of the invention is used to retain a processor cartridge 2 on a support member 3. The support member 3 in the disclosed embodiment is a retention mechanism 4 secured on a mother board 5. The fastening system 1 transfers the load of a temperature control device 6 of the processor cartridge 2 around silicon dies 7 of the processor cartridge, which are in heat-conducting relation with the temperature control device 6, by using a spring collet arrangement 8 to transfer the load from the temperature control device directly to the retention mechanism mounted on the mother board.The processor cartridge 2 as shown in FIGS. 1 and 2 is an assembly involving a plurality of silicon dies 7 mounted on one side of a CPU substrate 9 for contact with the temperature control device 6. The device 6 is in the form of a heat pipe lid. The silicon dies 7 on the CPU substrate 9 are maintained in heat-conducting relation with the heat pipe lid 6 in the assembly of the processor cartridge 2 by a spring clip 10. The side of the CPU substrate 9 opposite the silicon dies 7 is provided with a pin grid array 11 for electrical connection with a socket 12 on the mother board 5 when the cartridge is installed on the mother board. The spring clip 10 provides a constant load between the silicon dies 7 and the heat pipe lid 6 to allow maximum heat transfer. The heat pipe lid 6 is formed with a plurality of holes 13 extending therethrough. The holes are for receiving respective ones of elongated mounting members 14, in the form of spring collets, of the spring collet arrangement 8.The spring collets 14, see FIGS. 3, 4, and 9, are each in the form of a sleeve 15 having slits 16 extending from one end thereof along the longitudinal axis B-B of the sleeve. 
A snap-fit 17 is formed at one end of the sleeve, and a flange 18 is provided at the opposite end for seating one end of an expansion spring 19 of the spring collet arrangement 8 provided about the spring collet in the assembled condition of the arrangement on the heat pipe lid 6. As shown in FIG. 5, the spring collets 14 and expansion springs 19 are installed into the respective holes 13 of the heat pipe lid 6. The holes 13 in the heat pipe lid are slightly larger in diameter than the spring collets, allowing the spring collets to be installed and moved freely in the axial direction B-B. The holes 13 are also countersunk at 20 to prevent the springs from being crushed. The expansion springs 19 bias the spring collets 14 in the open position, e.g., with flanged ends 18 maintained outwardly from the surface of the processor cartridge 2 so that the cartridge is always ready to be installed on the mother board 5, regardless of its orientation. The one-way snaps 17 allow the spring collets to be installed in the holes 13 but do not permit the spring collets to fall out of the holes after assembly.Once the collets 14 and springs 19 are installed, the cartridge 2 is ready to be placed on the mother board 5. See FIG. 6. At this point, the cartridge 2 is positioned over the socket 12 and retention mechanism 4. The cartridge is then placed in the retention mechanism so that the bottom of the spring collets 14 rest in countersunk pockets 21 located on the retention mechanism. The expansion springs 19 cause the cartridge to sit above the socket 12. The cartridge is then forced down in the direction of the arrow in FIG. 6 against the bias of the expansion springs 19 until the pin grid array 11 of the cartridge is properly seated with the socket 12 on the mother board. 
The spring collets 14 of the fastening system 1 advantageously provide blind mating alignment of the pin grid array 11 of the cartridge into the socket 12, because the spring collets work as alignment pins, allowing movement along only one axis, e.g., in the direction of the longitudinal axis B-B of the spring collets.After the pin grid array 11 of the cartridge 2 is seated in the socket 12, retention members 22 of the spring collet arrangement 8 of the fastening system 1 are installed to retain the processor cartridge 2 on the support member 3. The retention members 22 are in the form of retention screws, which have a chamfer 23 on the screw head that matches a chamfer 24 on the inside of the spring collet 14. The screw 22 is inserted into the top of the spring collet and is threaded into the retention mechanism 4. See FIGS. 7-9. When the chamfer 23 of the screw meets the chamfer 24 on the spring collet, the spring collet expands and locks the processor cartridge 2 into place. Because the top of the spring collet is split by slits 16, the spring collet is allowed to expand as the retention screw is driven into the retention mechanism. This expansion is limited to the size of the surrounding hole 13 in the heat pipe lid 6 of the cartridge 2. Once the spring collet has expanded to that hole size, it acts like a wedge, locking the cartridge into place with respect to the retention screw and retention mechanism.While the fastening system and method of the invention for retaining temperature control devices used on semiconductor dies have been disclosed for use in an embodiment with the processor cartridge 2 having a heat pipe lid 6 as its temperature control device to be mounted upon a retention mechanism 4 of the mother board 5, the invention is not limited thereto, but is applicable to other arrangements. 
For example, other temperature control devices which can be retained include other types of heat pipe attachments, heat sinks, heat exchangers, iso-chillers, heaters, cold plates, iso-sinks, heat pumps, and thermal electric coolers. More specifically, the fastening system and method of the invention are particularly useful for retaining McKinley processor heat pipe and heat sink attachments, Merced processor heat pipe and heat sink attachments, and any temperature control device used on silicon dies. The expression "semiconductor dies" includes, but is not restricted to, processor dies, CPU dies, chip sets, and substrates. It is expected that the present invention will be of great value to future and existing 64-bit (Itanium/McKinley) and 32-bit products and can also be used on chip sets. The spring collets of the fastening system also enable a very large tolerance stack to be accommodated. The invention affords a simple and inexpensive method and fastening system for retaining temperature control devices without transferring the load thereof through the silicon dies in heat-conducting relation with the devices. In view of the above, we do not wish to be limited to the details shown and described herein, but intend to cover all such changes and modifications as are encompassed by the scope of the appended claims.
A system is described that includes a microprocessor and a thermal control subsystem. The microprocessor includes execution resources to support processing of instructions and consumes power. The microprocessor also includes at least one throttling mechanism to reduce the amount of heat generated by the microprocessor. The thermal control subsystem is configured to estimate the amount of power used by the microprocessor and to control the throttling mechanism based on the estimated current power usage to ensure that the junction temperature will not exceed the maximum allowed temperature.
CLAIMS What is claimed is: 1. A microprocessor comprising: at least one throttling mechanism; and a thermal control subsystem to estimate an amount of power used by said microprocessor and to control said at least one throttling mechanism based on said estimated power usage. 2. The microprocessor of claim 1, wherein the amount of power used by the microprocessor is estimated based on the number of occurrences of at least one activity performed in said microprocessor. 3. The microprocessor of claim 1, wherein the thermal control subsystem includes a power usage monitoring unit which determines the number of occurrences of at least one activity performed by the microprocessor within a sampling time period and computes the estimated power usage based on (1) the count value associated with said at least one activity, (2) current clock frequency and (3) operating voltage level of the microprocessor. 4. The microprocessor of claim 3, wherein the power usage monitoring unit estimates the amount of the power used by the microprocessor by averaging the current estimated power usage value with a defined number of most recently estimated power usage values obtained during previous sampling time periods. 5. The microprocessor of claim 1, wherein the thermal control subsystem further comprises a throttling control unit which compares said estimated amount of power used by the microprocessor against a threshold and activates the throttling mechanism if the estimated power used by the microprocessor is greater than said threshold or deactivates the throttling mechanism if the estimated power used by the microprocessor is less than said threshold. 6. The microprocessor of claim 1, wherein the throttling mechanism is activated in a deterministic manner by the thermal control subsystem. 7. 
The microprocessor of claim 2, wherein said at least one activity monitored by the thermal control subsystem comprises at least one of the following activities: (1) floating point operation, (2) cache memory access and (3) instruction decoding. 8. A method comprising: estimating an amount of power used by a microprocessor; and controlling at least one throttling mechanism incorporated in the microprocessor based on said estimated power usage. 9. The method of claim 8, wherein the amount of power used by the microprocessor is estimated based on the number of occurrences of at least one activity performed in the microprocessor. 10. The method of claim 8, wherein the estimating the amount of power used by the microprocessor further comprises: counting the number of occurrences of at least one activity performed by the microprocessor within a sampling time period; and adjusting the number of occurrences of said at least one activity according to current operating frequency and voltage level of the microprocessor. 11. The method of claim 10, wherein the estimating the amount of the power used by the microprocessor further comprises averaging the current estimated power usage value with a defined number of most recently estimated power usage values obtained during previous sampling time periods. 12. The method of claim 8, further comprising: comparing said estimated amount of power used by the microprocessor against a threshold; activating said at least one throttling mechanism if said estimated power used by the microprocessor is greater than said threshold; and deactivating said at least one throttling mechanism if said estimated power used by the microprocessor is less than said threshold. 13. The method of claim 8, wherein the throttling mechanism is activated in a deterministic manner. 14. 
The method of claim 10, wherein said at least one activity monitored is selected from the following activities: (1) floating point operation, (2) cache memory access and (3) instruction decoding. 15. A thermal control system comprising: a power usage estimator to estimate an amount of power used by a microprocessor based on the number of occurrences of at least one activity performed by the microprocessor during a defined time period; and a throttling control unit to control at least one throttling mechanism incorporated in the microprocessor based on the estimated amount of power used by the microprocessor. 16. The thermal control system of claim 15, wherein said power usage estimator estimates the amount of power used by the microprocessor based on (1) the number of occurrences of at least one activity, (2) current clock frequency and (3) operating voltage level of the microprocessor. 17. The thermal control system of claim 15, further comprising a filter to adjust the estimated amount of power usage by applying recently estimated power usage values obtained during previous sampling time periods with the current estimated power usage value. 18. The thermal control system of claim 15, wherein said throttling control unit compares said estimated amount of power used by the microprocessor against a threshold and activates the throttling mechanism if the estimated power used by the microprocessor is greater than said threshold or deactivates the throttling mechanism if the estimated power used by the microprocessor is less than said threshold. 19. A machine-readable medium that provides instructions, which when executed by a microprocessor cause said microprocessor to perform operations comprising: estimating an amount of power used by a microprocessor; and controlling at least one throttling mechanism incorporated in the microprocessor based on said estimated power usage. 20. 
The machine-readable medium of claim 19, wherein the amount of power used by the microprocessor is estimated based on the number of occurrences of at least one activity performed in the microprocessor. 21. The machine-readable medium of claim 19, wherein the operation of estimating the amount of power used by the microprocessor further comprises reading count data representing the number of occurrences of at least one activity performed by the microprocessor within a sampling time period and adjusting the number of occurrences of said at least one activity according to current operating frequency and voltage level of the microprocessor. 22. The machine-readable medium of claim 21, wherein the operation of estimating the amount of the power used by the microprocessor further comprises averaging the current estimated power usage value with a defined number of most recently estimated power usage values obtained during previous sampling time periods. 23. The machine-readable medium of claim 19, wherein the operations further comprise: comparing said estimated amount of power used by the microprocessor against a threshold; activating said at least one throttling mechanism if said estimated power used by the microprocessor is greater than said threshold; and deactivating said at least one throttling mechanism if said estimated power used by the microprocessor is less than said threshold. 24. The machine-readable medium of claim 19, wherein the throttling mechanism is activated in a deterministic manner. 25. The machine-readable medium of claim 21, wherein said at least one activity monitored is selected from the following activities: (1) floating point operation, (2) cache memory access and (3) instruction decoding.
DETERMINISTIC POWER-ESTIMATION FOR THERMAL CONTROL

BACKGROUND

Field of the Invention

[0001] This invention relates to thermal control for microprocessors.

Description of the Related Art

[0002] With the increasing complexity of new microprocessors, thermal control becomes more challenging. Current microprocessors include extensive execution resources to support concurrent processing of multiple instructions. A drawback to providing a microprocessor with extensive execution resources is that significant amounts of power are required to run the microprocessor. Different execution units may consume more or less power, depending on their size and the functions they implement, but the net effect of packing so much logic onto a relatively small processor chip is to create the potential for significant power dissipation problems.

[0003] In conventional thermal control systems, the junction temperature (Tj) on a die is observed to ensure that it does not exceed an allowed maximum value, to avoid reliability issues. When the junction temperature approaches the allowed maximum value, throttling may be activated to cool the microprocessor, resulting in a significant performance loss.

[0004] Detection of a maximum junction temperature violation may be accomplished by measuring the temperature of an area of the die close to the known hot spots. Some microprocessors use a thermal diode on the microprocessor die for temperature tracking. Temperature tracking can be used to activate some sort of throttling when the temperature level exceeds the maximum allowed value, or can be used to increase the microprocessor performance level (e.g., increase voltage/frequency) when the temperature level is low. It has been found that the current passing through the thermal diode is a function of temperature.
Accordingly, circuitry is provided in at least some conventional thermal control systems that is adapted to detect the amount of current passing through the thermal diode and to trigger throttling whenever the temperature on the die exceeds the allowed maximum value.

[0005] Currently used thermal diodes protect microprocessors from overheating, but may not be useful in mobile systems. In general, original equipment manufacturers (OEMs) of mobile systems prefer not to support thermal-diode-based throttling under normal operating conditions while running typical applications. Thermal diode throttling introduces non-deterministic behavior to mobile systems, something an OEM prefers to avoid. OEMs operate on the assumption that systems of the same type and having the same chip version behave similarly and provide the same benchmark score. Thermal-diode-based throttling creates non-deterministic behavior since each chip has a different thermal response, leakage current, etc.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The features, aspects, and advantages of the invention will become more thoroughly apparent from the following detailed description, appended claims, and accompanying drawings in which:

Figure 1 shows a block diagram of a thermal control system according to one embodiment of the invention;

Figure 2 shows a block diagram of a power usage monitoring unit according to one embodiment of the invention; and

Figure 3 shows a flow diagram of estimating power usage by a microprocessor according to one embodiment of the invention.

DETAILED DESCRIPTION

[0007] In the following description, specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
In other instances, well-known circuits, structures and techniques have not been shown in detail in order to avoid obscuring the present invention.

[0008] A thermal control system is described. In one embodiment, a thermal control system is provided that uses digital power monitoring for thermal control in computer systems. The digital power monitoring is configured to estimate an amount of power used by a microprocessor. Based on the estimated power usage, the thermal control system controls the activation and deactivation of a throttling mechanism to avoid an unsafe junction temperature that may cause system degradation or that exceeds the system specification.

[0009] It has been found that the amount of power consumed by a microprocessor during a time interval is related to the junction temperature (Tj) on the die of the microprocessor. In other words, when the microprocessor within a computer system consumes a relatively large amount of power for a period of time, this may indicate that the microprocessor is operating at a relatively high temperature. Accordingly, when the estimated power usage is relatively high, the thermal control system may activate one or more of its thermal throttling mechanisms to enable the microprocessor to cool itself. Additionally, when the estimated power usage is quite low, the thermal control system may be configured to increase the microprocessor performance, for example, by increasing the operating voltage level, increasing the clock frequency, or enabling additional activities.

[00010] According to one embodiment, the thermal control system dynamically estimates the average power consumed by a microprocessor during a given time interval by periodically executing software code (e.g., micro-code, system management mode (SMM) software, or the like) in the microprocessor. In one embodiment, the power consumption level is estimated based on the frequency of various activities occurring within the microprocessor.
Unlike thermal-diode based temperature estimation, the power estimation carried out by software has deterministic behavior (for a given system and a given set of applications), resulting in deterministic system behavior. In contrast, currently used thermal diodes do not provide deterministic power estimation for thermal control.

[00011] Figure 1 depicts a thermal control system 104 according to one embodiment of the invention. The thermal control system 104 is incorporated in a microprocessor 102 having, among other things, a semiconductor die including at least one throttling mechanism 106. In the illustrated embodiment, three types of throttling mechanisms are shown: stop-clock throttling logic 114, voltage control logic 112, and interrupt logic 116. When the thermal control system 104 detects that the power consumed by the microprocessor 102 exceeds the maximum allowed power, one or more of the throttling mechanisms 106 may be invoked to ensure that the die temperature will not exceed thermal design limits during operation. For example, the stop-clock throttling logic 114, when activated, momentarily reduces or stops the clock of the microprocessor, for example, for a few microseconds. The die temperature can also be reduced by lowering the operating voltage level, which is controlled by the thermal control system 104 via the voltage control logic 112. Toggling any one of the throttling mechanisms, including the voltage control logic 112, the stop-clock throttling logic 114 and the interrupt logic 116, may significantly reduce the amount of heat generated by the microprocessor in a relatively short period of time.

[00012] The throttling mechanisms 106 presented in Figure 1 are for illustrative purposes only, and those of ordinary skill in the art will understand that, in practice, the thermal control system 104 may employ other types of throttling mechanisms.
Accordingly, it should be understood that the thermal control system described herein is generally applicable to all types of microprocessors, irrespective of the specific throttling mechanisms employed.

[00013] The illustrated thermal control system 104 includes a power usage monitoring unit 108 and a throttle control unit 110. In one embodiment, the power usage monitoring unit 108 is embodied in the form of software code, such as micro-code, executed periodically within the microprocessor to estimate power consumption based on the number of occurrences of various activities performed in the microprocessor. Based on the estimated power usage provided by the power usage monitoring unit 108, the throttle control unit 110 generates and sends signals to one or more of the throttling mechanisms 106 to cool the microprocessor if cooling is necessary to avoid an unsafe die temperature that may cause system degradation.

[00014] In general, there are a number of functional units within a microprocessor, each of which consumes a different amount of power. Accordingly, by counting the number of times certain functional units are activated during a defined time period, the amount of power consumed by the microprocessor during that time period may be estimated. To count the number of occurrences of certain activities, the power usage monitoring unit 108 communicates with a set of counters 118-122 incorporated in the microprocessor. The counters 118-122 may be implemented as registers in hardware components or as variables in software code and are used to count the number of occurrences of a particular activity.

[00015] For example, one counter monitored by the power usage monitoring unit 108 may be configured to count the number of floating point operations performed by the microprocessor during a sampling time period.
Another counter may be configured to count the number of cache memory accesses occurring in the microprocessor, data from which may be used to estimate the amount of power consumed by the microprocessor. The number of instructions decoded by the decoder may also be an activity monitored by the power usage monitoring unit 108 via some sort of counter mechanism. It should be understood that the present invention may be implemented by monitoring any other suitable activities occurring within the microprocessor and is not limited to the examples specified herein.

[00016] In accordance with one embodiment, the thermal control utilizes a combination of software and hardware, as opposed to the currently used hardware circuitry in combination with a thermal diode. Accordingly, by using both hardware and software to estimate power usage, additional flexibility is provided, enabling the thermal control system to factor various parameters, such as the operating voltage level and clock frequency, into the power estimation. By using software code to estimate power usage, rather than pure logic or hardware circuitry, a maximum junction temperature violation can be detected with sufficient accuracy for activating throttling with minimal system-level tuning by the Basic Input/Output System (BIOS).

[00017] Figure 2 depicts a block diagram of a power usage monitoring (PUM) unit 108 according to one embodiment of the invention. The PUM unit 108 includes a power usage estimator 202 to estimate power usage based on counter data and a filter 204 to provide an average power usage value of the estimated power usage (EPU) values 218-222 obtained during the current and past sampling periods.

[00018] As noted above, to estimate the power consumed by the microprocessor, the power usage estimator 202 periodically obtains counter data 238-242 from various counters incorporated in the microprocessor.
In one embodiment, the power usage is estimated every few microseconds, since the thermal response may be relatively slow (e.g., in the range of tens of microseconds). In this regard, at the beginning of each sampling period, the power usage estimator 202 will first access the counter data from each counter and then reset the counters once the count data has been read. Once the counter data has been obtained, the power usage estimator 202 applies a respective weighting factor 212-216 to each of the counter data 238-242 and combines the weighted counter data to provide a weighted sum of the counter data.

[00019] It has been found that the amount of power consumed by the microprocessor is also influenced by the clock frequency and operating voltage level of the microprocessor. For example, if the microprocessor within a computer system is operating at a higher frequency or higher voltage level, it will consume more power. In one embodiment, the weighted sum of the counter data is adjusted by the current clock frequency 206 and voltage level 208 to more accurately estimate the power usage. In one embodiment, the estimated power usage (EPU) 218-222 is computed as follows:

EPU = WSCD * V² * F (1)

[00020] where WSCD represents the weighted sum of the counter data, V represents the current voltage level and F represents the current clock frequency.

[00021] The current operating clock frequency 206 and voltage level 208 may be determined by examining registers in the BIOS that have been designated to store the current frequency and voltage level values. In at least some recently developed microprocessors, the voltage level and the operating frequency may change during runtime under various operating conditions. For example, the voltage level and the frequency could be adjusted by one of the throttling mechanisms.
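The scaling in equation (1) can be sketched as follows. This is an illustrative sketch only: the counter names, weighting-factor values, and units are hypothetical assumptions, not values taken from the specification.

```python
# Illustrative sketch of equation (1): EPU = WSCD * V^2 * F.
# Counter names and weighting factors below are hypothetical examples.

def estimate_power_usage(counter_data, weights, voltage, frequency):
    """Estimate power usage for one sampling period.

    counter_data: dict of activity name -> occurrence count this period
    weights: dict of activity name -> per-occurrence weighting factor
    voltage: current operating voltage level
    frequency: current clock frequency
    """
    # Weighted sum of the counter data (WSCD): one weight per activity counter.
    wscd = sum(weights[name] * count for name, count in counter_data.items())
    # Adjust by V^2 * F, reflecting that dynamic power grows with the
    # square of the voltage and linearly with the clock frequency.
    return wscd * voltage ** 2 * frequency

# Example: hypothetical counts for one sampling period.
counters = {"fp_ops": 1200, "cache_accesses": 5400, "decoded_insts": 9800}
weights = {"fp_ops": 3.0e-9, "cache_accesses": 1.5e-9, "decoded_insts": 1.0e-9}
epu = estimate_power_usage(counters, weights, voltage=1.2, frequency=1.6e9)
```

In a real implementation the weights would come from the designers' power-estimation tooling, and V and F would be read from the designated BIOS registers at each sampling period, as the surrounding text describes.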
Alternatively, in mobile computer systems, the voltage level may change when a mobile computer system switches from an external power source mode to a battery power mode.

[00022] Once the estimated power usage (EPU) value has been computed, it is averaged with past EPU values 218-222 to filter out momentary peak power usage. Then, the average power usage value is compared with a maximum allowed power level (referred to hereinafter as "TDP" 210). The value associated with the TDP 210 may be programmed in one of the registers in the BIOS and is useful in determining when the junction temperature of the microprocessor may violate the maximum allowed temperature based on the estimated power usage. The TDP value 210 may be determined by executing a benchmark program and determining how much power can be consumed by the microprocessor before it exceeds the maximum allowed temperature under a normal or worst-case scenario. If the current power usage exceeds the TDP value 210 for a period of time, the junction temperature of the microprocessor will start to exceed the maximum allowed temperature. Therefore, to reduce the junction temperature under such a condition, the throttling control unit 110 will activate one or more of the throttling mechanisms when the average power usage exceeds the TDP value.

[00023] It has been found that the relationship between the power consumption (power) and the junction temperature (Tj) may be expressed as follows:

Tj = Ta + Tsys + Rjc * power (2)

[00024] where Ta represents the ambient temperature around the microprocessor; Tsys represents the motherboard contribution to heat; and Rjc represents the thermal resistance.

[00025] The values associated with Ta, Tsys and Rjc are system dependent and are typically unknown. For example, the value associated with the thermal resistance (Rjc) of a system is difficult to obtain since it depends on a number of factors, such as the cooling capacity of its fan and heat sink, and the like.
Accordingly, in one embodiment, the thermal control system does not calculate the junction temperature directly. Instead, the estimated power is compared to a fixed reference point (e.g., the TDP). By doing so, thermal control can be provided without having to compute parameters such as Ta, Tsys and Rjc.

[00026] Figure 3 depicts operations of estimating power usage according to one embodiment of the invention. In one implementation, the software code running in the microprocessor estimates the current power usage level based on an assumption that the current power usage is proportional to a set of counter data, each adjusted by a corresponding weighting factor associated with that individual counter data. The estimated power usage (EPU) may be expressed as follows:

EPU = Σ (weighting factor(i) * counter data(i)) + idle power (3)

[00027] where weighting factor(i) represents a coefficient value associated with its corresponding counter data, used to adjust the counter data collected during a sampling period, and idle power represents a constant value corresponding to the amount of power consumed by the microprocessor when it is not executing instructions (e.g., clocking power, static current power, leakage power).

[00028] Referring to Figure 3, a set of counter data is read from the counters in block 310. In one embodiment, the set of counter data relates to certain high-level activities which may be counted by counters incorporated in the microprocessor. For example, the counter data may be collected from the existing performance monitor counters or from other counters incorporated into the microprocessor for the purpose of monitoring power usage. If existing performance monitor counters are used, the performance monitoring logic or a software program may be used to track the level of activities associated with the corresponding counters.

[00029] Then, in block 320, a respective weighting factor is applied to each of the counter data.
For example, in one implementation, the weighted counter data is obtained by multiplying each individual counter datum by the corresponding weighting factor. When the thermal system is designed, a respective weighting factor is assigned to each counter to represent the level of power usage associated with the functional unit corresponding to that counter. Each weighting factor may be derived by microprocessor IC designers using some sort of power estimation tool (e.g., an Architectural Level Power Simulator (ALPS)). Once the weighted counter data has been computed, an accumulated counter value is obtained by combining the weighted counter data together in block 330.

[00030] The power consumed by the microprocessor will depend on a number of factors, including the operating clock frequency and the voltage level applied to the microprocessor, values which may change during runtime. In order to take such factors into consideration, the accumulated counter value is adjusted based on the current operating frequency and voltage level in block 340. For example, the accumulated counter value may be multiplied by a factor derived from the current operating frequency and voltage level.

[00031] In block 350, the amount of power consumed by the microprocessor is estimated based on the adjusted counter data. Then, to avoid responding to momentary changes in estimated power consumption (e.g., peak power usage), the past history of power usage is taken into consideration. In this regard, the estimated power usage levels obtained during a certain number of past sampling periods are averaged in block 360. One way of doing this is to maintain a sliding window with a defined number of past power estimations and use a weighted sum to estimate the average power usage.

[00032] Once the average power usage has been estimated, it may be compared with a defined threshold value.
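The sliding-window filtering of blocks 350 and 360 can be sketched as below. The window length and the per-sample weights are illustrative assumptions; the specification only says that a defined number of past estimations is kept and combined with a weighted sum.

```python
from collections import deque

# Sketch of the sliding-window averaging in blocks 350-360.
# Window length and weighting scheme are hypothetical choices:
# newer samples are weighted more heavily than older ones.

class PowerUsageFilter:
    def __init__(self, window_size=4):
        # deque with maxlen automatically drops the oldest sample.
        self.history = deque(maxlen=window_size)

    def update(self, epu):
        """Record a new estimated-power-usage (EPU) sample and return the
        weighted average over the sliding window."""
        self.history.append(epu)
        # Linearly increasing weights: oldest sample gets weight 1,
        # newest gets weight len(history).
        weights = range(1, len(self.history) + 1)
        total = sum(w * e for w, e in zip(weights, self.history))
        return total / sum(weights)

f = PowerUsageFilter(window_size=4)
for sample in (10.0, 10.0, 30.0, 10.0):
    avg = f.update(sample)
# The momentary peak (30.0) is damped in the average rather than
# triggering throttling directly.
```

The point of the filter is exactly what paragraph [00031] describes: a single-period spike in estimated power does not by itself push the average over the threshold.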
In one embodiment, the estimated average power usage is compared against a maximum allowed power usage value (the TDP). Based on the ratio between the estimated power usage and the TDP, the frequency, the operating voltage level and the performance of the microprocessor may be adjusted up or down. For example, when this ratio approaches one, light throttling is initiated.

[00033] One problem associated with conventional microprocessors using thermal diodes is that they do not provide deterministic results from one system to another. For example, because the temperature of the die is measured using thermal diodes, various factors may affect the temperature measurement and the performance of the system. In addition, each microprocessor is fabricated with slightly different parameters, such as static power level and temperature response, and exhibits slightly different behavior, such as heat sink capability and quality. As a result, different microprocessors measured using the same benchmark program under similar conditions will provide different performance results. Because the timing of when throttling is activated differs from one microprocessor to another, the behavior of each microprocessor will be non-deterministic, resulting in one microprocessor performing better than another. To avoid a high junction temperature for all microprocessors, a higher margin value may need to be assigned so that throttling can be timely activated in less sensitive microprocessors, which results in a loss of performance. Another problem associated with non-deterministic behavior is the added complexity in validation and system debugging, typically performed by OEMs and the IT managers of large companies purchasing a large number of portable computer systems, such as notebooks.

[00034] In contrast, a microprocessor implementing the thermal control system according to one embodiment provides deterministic behavior.
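The threshold comparison described above can be sketched as a simple decision rule on the ratio of average estimated power to TDP. The breakpoints (0.9 and 1.0) and the action names are illustrative assumptions; the specification only says that light throttling begins as the ratio approaches one and that throttling is activated when the average exceeds the TDP.

```python
# Sketch of the throttle decision driven by the ratio of average
# estimated power usage (EPU) to the TDP reference value.
# The 0.9 breakpoint and the action names are hypothetical.

def throttle_action(avg_epu, tdp):
    """Map the ratio avg_epu / tdp onto a throttling action."""
    ratio = avg_epu / tdp
    if ratio >= 1.0:
        # Average power exceeds the maximum allowed power: activate a
        # throttling mechanism (e.g., stop-clock or voltage reduction).
        return "throttle"
    if ratio >= 0.9:
        # Ratio approaching one: initiate light throttling.
        return "light_throttle"
    # Well below the limit: headroom to raise frequency/voltage.
    return "raise_performance"
```

Because the inputs to this rule are counter-derived estimates rather than a per-chip diode reading, the same workload on the same chip version yields the same decisions, which is the deterministic behavior the text emphasizes.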
This means that the performance of the microprocessor does not depend on chance but rather can be replicated from one run to another. It also means that when the same application program is executed on different motherboards, they will generate the same count values and have the same throttling behavior and performance. Advantageously, by using the same maximum allowed power usage value and the same weighting factor values, the scheme taught by the present invention enables the throttling mechanism to be activated in a deterministic manner.

[00035] In one embodiment, the thermal control system is implemented in a portable computer system, such as a notebook computer, to provide deterministic throttling behavior. It has been found that deterministic behavior is particularly desirable in portable computer systems. In one embodiment, the digital power monitoring capability of the thermal control system is used to improve the performance of portable computer systems by using the thermal control system to detect situations when the microprocessor is operating at a low temperature and when the microprocessor temperature is approaching its maximum value. By doing so, the performance level may be increased in low temperature situations by increasing the operating frequency and voltage level. Further, light throttling may be enabled when the microprocessor temperature is approaching the maximum value. By using light throttling, the maximum allowed temperature may be avoided without use of full throttling.

[00036] The operations performed by the present invention may be embodied in the form of a software program stored on any type of machine-readable medium capable of storing or encoding a sequence of instructions for execution by a machine. The term "machine-readable medium" shall be taken to include, but is not limited to, solid-state memories, magnetic and optical memories, and carrier wave signals.
Moreover, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.

[00037] While the foregoing embodiments of the invention have been described and shown, it is understood that variations and modifications, such as those suggested and others within the spirit and scope of the invention, may occur to those skilled in the art to which the invention pertains. The scope of the present invention accordingly is to be defined as set forth in the appended claims.
For fabricating a field effect transistor in SOI (semiconductor on insulator) technology, an opening is etched through a first surface of a first semiconductor substrate, and a dielectric material is deposited to fill the opening. The dielectric material and the first surface of the first semiconductor substrate are polished down to form a dielectric island comprised of the dielectric material surrounded by the first surface of the first semiconductor substrate that is exposed. The semiconductor material of the first semiconductor substrate remains on the dielectric island toward a second surface of the first semiconductor substrate. A layer of dielectric material is deposited on a second semiconductor substrate. The first surface of the first semiconductor substrate is placed on the layer of dielectric material of the second semiconductor substrate such that the dielectric island and the first surface of the first semiconductor substrate are bonded to the layer of dielectric material. A drain extension region and a source extension region are formed by the drain and source dopant being implanted in the thinner semiconductor material disposed on the dielectric island. In addition, a drain contact region and a source contact region are formed by the drain and source dopant being implanted in the thicker semiconductor material of the first semiconductor substrate disposed to sides of the dielectric island.
I claim:

1. A method for fabricating a field effect transistor in SOI (semiconductor on insulator) technology, the method including the steps of:

A. etching an opening through a first surface of a first semiconductor substrate;

B. depositing a dielectric material to fill said opening;

C. polishing said dielectric material and said first surface of said first semiconductor substrate to form a dielectric island comprised of said dielectric material surrounded by said first surface of said first semiconductor substrate that is exposed; wherein semiconductor material of said first semiconductor substrate remains on said dielectric island toward a second surface of said first semiconductor substrate;

D. forming a layer of dielectric material on a second semiconductor substrate;

E. placing said first surface of said first semiconductor substrate on said layer of dielectric material of said second semiconductor substrate such that said dielectric island and said first surface of said first semiconductor substrate are bonded to said layer of dielectric material; wherein said second surface of said first semiconductor substrate is exposed;

F. forming a gate dielectric and a gate electrode over a portion of said semiconductor material disposed on said dielectric island on said second surface of said first semiconductor substrate; and

G. implanting a drain and source dopant into said second surface of said first semiconductor substrate that is exposed; wherein a drain extension region and a source extension region are formed by said drain and source dopant being implanted in said semiconductor material disposed on said dielectric island; and wherein a drain contact region and a source contact region are formed by said drain and source dopant being implanted in said semiconductor material of said first semiconductor substrate disposed to sides of said dielectric island.

2.
The method of claim 1, further including the step of: performing a thermal anneal process to activate said drain and source dopant within said drain and source extension regions and within said drain and source contact regions.

3. The method of claim 2, wherein a laser thermal anneal process is performed with a laser fluence in a range of from about 0.5 Joules/cm² to about 0.8 Joules/cm² for a time period of from about 1 nanosecond to about 10 nanoseconds to activate said drain and source dopant within said drain and source extension regions and within said drain and source contact regions.

4. The method of claim 1, further including the steps of: forming spacers on sidewalls of said gate electrode and said gate dielectric such that said spacers are disposed over said drain and source extension regions; and forming a drain silicide with said drain contact region and forming a source silicide with said source contact region.

5. The method of claim 1, wherein said first semiconductor substrate is comprised of silicon, and wherein said dielectric island is comprised of silicon dioxide (SiO₂).

6. The method of claim 5, wherein said second semiconductor substrate is comprised of silicon, and wherein said layer of dielectric material formed on said second semiconductor substrate is comprised of silicon dioxide (SiO₂).

7. The method of claim 1, wherein said semiconductor material of said first semiconductor substrate remaining on said dielectric island for forming said drain and source extension regions has a thickness in a range of from about 50 angstroms to about 200 angstroms.

8. The method of claim 7, wherein said semiconductor material of said first semiconductor substrate disposed to the sides of said dielectric island for forming said drain and source contact regions has a thickness in a range of from about 500 angstroms to about 1000 angstroms.

9.
The method of claim 1, wherein said step E is performed within a chamber with nitrogen gas (N₂) flowing through said chamber.

10. The method of claim 1, wherein said dose of said drain and source dopant is in a range of from about 1×10¹⁵/cm² to about 1×10¹⁶/cm².

11. The method of claim 1, further including the step of: polishing down a portion of said semiconductor material on said dielectric island before said step F.

12. The method of claim 1, wherein said drain and source dopant is comprised of an N-type dopant for fabrication of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor).

13. The method of claim 1, wherein said drain and source dopant is comprised of a P-type dopant for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor).

14. A method for fabricating a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) in SOI (semiconductor on insulator) technology, the method including the steps of:

A. etching an opening through a first surface of a first semiconductor substrate;

B. depositing a dielectric material to fill said opening;

C. polishing said dielectric material and said first surface of said first semiconductor substrate to form a dielectric island comprised of said dielectric material surrounded by said first surface of said first semiconductor substrate that is exposed; wherein semiconductor material of said first semiconductor substrate remains on said dielectric island toward a second surface of said first semiconductor substrate; wherein said first semiconductor substrate is comprised of silicon, and wherein said dielectric island is comprised of silicon dioxide (SiO₂);

D. forming a layer of dielectric material on a second semiconductor substrate; wherein said second semiconductor substrate is comprised of silicon, and wherein said layer of dielectric material formed on said second semiconductor substrate is comprised of silicon dioxide (SiO₂);

E.
placing said first surface of said first semiconductor substrate on said layer of dielectric material of said second semiconductor substrate such that said dielectric island and said first surface of said first semiconductor substrate are bonded to said layer of dielectric material; wherein said step E is performed within a chamber with nitrogen gas (N₂) flowing through said chamber; and wherein said second surface of said first semiconductor substrate is exposed;

F. polishing down a portion of said semiconductor material on said dielectric island;

G. forming a gate dielectric and a gate electrode over a portion of said semiconductor material disposed on said dielectric island on said second surface of said first semiconductor substrate;

H. implanting a drain and source dopant into said second surface of said first semiconductor substrate that is exposed; wherein a drain extension region and a source extension region are formed by said drain and source dopant being implanted in said semiconductor material disposed on said dielectric island; wherein said dose of said drain and source dopant is in a range of from about 1×10¹⁵/cm² to about 1×10¹⁶/cm²; wherein said semiconductor material of said first semiconductor substrate remaining on said dielectric island for forming said drain and source extension regions has a thickness in a range of from about 50 angstroms to about 200 angstroms; wherein a drain contact region and a source contact region are formed by said drain and source dopant being implanted in said semiconductor material of said first semiconductor substrate disposed to sides of said dielectric island; wherein said semiconductor material of said first semiconductor substrate disposed to the sides of said dielectric island for forming said drain and source contact regions has a thickness in a range of from about 500 angstroms to about 1000 angstroms; and wherein said drain and source dopant is comprised of an N-type dopant for fabrication of an
NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor), and wherein said drain and source dopant is comprised of a P-type dopant for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor); I. performing a laser thermal anneal process with a laser fluence in a range of from about 0.5 Joules/cm<2 >to about 0.8 Joules/cm<2 >for a time period of from about 1 nanoseconds to about 10 nanoseconds to activate said drain and source dopant within said drain and source extension regions and within said drain and source contact regions; J. forming spacers on sidewalls of said gate electrode and said gate dielectric such that said spacers are disposed over said drain and source extension regions; and K. forming a drain silicide with said drain contact region and forming a source silicide with said source contact region.
TECHNICAL FIELD

The present invention relates generally to fabrication of field effect transistors having scaled-down dimensions, and more particularly, to fabrication of a fully depleted field effect transistor with a thin body formed on a dielectric island in SOI (semiconductor on insulator) technology such that a single implantation step is used to form the drain and source extension regions and the drain and source contact regions.

BACKGROUND OF THE INVENTION

Referring to FIG. 1, a common component of a monolithic IC is a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) 100 which is fabricated within a semiconductor substrate 102. The scaled-down MOSFET 100 having submicron or nanometer dimensions includes a drain extension junction 104 and a source extension junction 106 formed within an active device area 126 of the semiconductor substrate 102. The drain extension junction 104 and the source extension junction 106 are shallow junctions to minimize short-channel effects in the MOSFET 100 having submicron or nanometer dimensions, as known to one of ordinary skill in the art of integrated circuit fabrication.

The MOSFET 100 further includes a drain contact junction 108 with a drain silicide 110 for providing contact to the drain of the MOSFET 100 and includes a source contact junction 112 with a source silicide 114 for providing contact to the source of the MOSFET 100. The drain contact junction 108 and the source contact junction 112 are fabricated as deeper junctions such that a relatively large size of the drain silicide 110 and the source silicide 114 respectively may be fabricated therein to provide low resistance contact to the drain and the source respectively of the MOSFET 100.

The MOSFET 100 further includes a gate dielectric 116 and a gate electrode 118 which may be comprised of polysilicon. A gate silicide 120 is formed on the polysilicon gate electrode 118 for providing contact to the gate of the MOSFET 100.
The MOSFET 100 is electrically isolated from other integrated circuit devices within the semiconductor substrate 102 by shallow trench isolation structures 121. The shallow trench isolation structures 121 define the active device area 126, within the semiconductor substrate 102, where a MOSFET is fabricated therein.

The MOSFET 100 also includes spacers 122 disposed on the sidewalls of the gate electrode 118 and the gate dielectric 116. When the spacers 122 are comprised of silicon nitride (Si3N4), then a spacer liner oxide 124 is deposited as a buffer layer between the spacers 122 and the sidewalls of the gate electrode 118 and the gate dielectric 116.

A long-recognized important objective in the constant advancement of monolithic IC (Integrated Circuit) technology is the scaling-down of IC dimensions. Such scaling-down of IC dimensions reduces area capacitance and is critical to obtaining higher speed performance of integrated circuits. Moreover, reducing the area of an IC die leads to higher yield in IC fabrication. Such advantages are a driving force to constantly scale down IC dimensions.

As the dimensions of the MOSFET 100 are scaled down further, the junction capacitances formed by the drain and source extension junctions 104 and 106 and by the drain and source contact junctions 108 and 112 may limit the speed performance of the MOSFET 100. Thus, referring to FIG. 2, a MOSFET 150 is formed with SOI (semiconductor on insulator) technology. In that case, a layer of buried insulating material 152 is formed on the semiconductor substrate 102, and a layer of semiconductor material 154 is formed on the layer of buried insulating material 152. Elements such as the gate dielectric 116, the gate electrode 118, the spacers 122, and the spacer liner oxide 124 having the same reference number in FIGS.
1 and 2 refer to elements having similar structure and function.

A drain extension junction 156 and a source extension junction 158 of the MOSFET 150 are formed in the layer of semiconductor material 154. The drain extension junction 156 and the source extension junction 158 are shallow junctions to minimize short-channel effects in the MOSFET 150 having submicron or nanometer dimensions, as known to one of ordinary skill in the art of integrated circuit fabrication. A channel region 160 of the MOSFET 150 is the portion of the layer of semiconductor material 154 between the drain and source extension junctions 156 and 158.

In addition, a drain contact region 162 is formed by the drain extension junction 156, and a source contact region 164 is formed by the source extension junction 158. A drain silicide 166 is formed with the drain contact region 162 to provide contact to the drain of the MOSFET 150, and a source silicide 168 is formed with the source contact region 164 to provide contact to the source of the MOSFET 150. Processes for formation of such structures of the MOSFET 150 are known to one of ordinary skill in the art of integrated circuit fabrication.

The drain contact region 162 and the source contact region 164 are formed to extend down to contact the layer of buried insulating material 152. Thus, because the drain contact region 162 and the source contact region 164 of the MOSFET 150 do not form a junction with the semiconductor substrate 102, junction capacitance is minimized for the MOSFET 150 to enhance the speed performance of the MOSFET 150 formed with SOI (semiconductor on insulator) technology.

Furthermore, during operation of the MOSFET 150, the channel region 160 may be fully depleted when the layer of semiconductor material 154 is relatively thin, having a thickness in a range of from about 50 angstroms to about 200 angstroms.
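As a rough illustration of why a body of about 50 to 200 angstroms can be fully depleted, the maximum depletion width in silicon may be estimated with the standard long-channel expression. This is a sketch for orientation only; the channel doping of 1*10^17/cm^3 below is an assumed illustrative value, not a value stated in this description:

```python
import math

# Estimate the maximum depletion width of a silicon body using the
# standard expression W_dmax = sqrt(4 * eps_si * phi_F / (q * N_a)).
# N_a is an assumed illustrative channel doping; the other numbers are
# textbook silicon constants at room temperature.
q = 1.602e-19        # electron charge, in coulombs
eps_si = 1.04e-12    # permittivity of silicon, in F/cm
kT_q = 0.0259        # thermal voltage at 300 K, in volts
n_i = 1.5e10         # intrinsic carrier concentration of silicon, per cm^3
N_a = 1e17           # assumed channel doping, per cm^3

phi_f = kT_q * math.log(N_a / n_i)                    # Fermi potential, in volts
w_dmax_cm = math.sqrt(4 * eps_si * phi_f / (q * N_a)) # max depletion width, in cm
w_dmax_angstroms = w_dmax_cm * 1e8

# The result is on the order of 1000 angstroms, well above a 50 to 200
# angstrom body thickness, so such a thin channel region is fully depleted.
print(round(w_dmax_angstroms), "angstroms")
```

Because the maximum depletion width at this doping (roughly 1000 angstroms) far exceeds the 50 to 200 angstrom body thickness, the gate depletes the entire body during operation.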
When the channel region 160 of the MOSFET 150 is fully depleted, undesired short channel effects of the MOSFET 150 are further minimized, as known to one of ordinary skill in the art of integrated circuit fabrication. However, such a thin layer of semiconductor material 154 is undesirable because a low volume of the drain silicide 166 and the source silicide 168 results in high parasitic series resistance at the drain and the source of the MOSFET 150. Such high parasitic series resistance at the drain and the source of the MOSFET 150 degrades the speed performance of the MOSFET 150.

Thus, a mechanism is desired for forming a field effect transistor in SOI (semiconductor on insulator) technology with drain and source extension regions formed with a thin portion of semiconductor material such that the channel region of the field effect transistor is fully depleted, and with drain and source contact regions formed with a thick portion of the semiconductor material to minimize series resistance at the drain and source of the field effect transistor.

SUMMARY OF THE INVENTION

Accordingly, in a general aspect of the present invention, a field effect transistor is formed in SOI (semiconductor on insulator) technology with the drain and source extension regions formed with thinner semiconductor material on a dielectric island and with drain and source contact regions formed with thicker semiconductor material disposed to the sides of the dielectric island.

In one embodiment of the present invention, for fabricating a field effect transistor in SOI (semiconductor on insulator) technology, an opening is etched through a first surface of a first semiconductor substrate, and a dielectric material is deposited to fill the opening. The dielectric material and the first surface of the first semiconductor substrate are polished down to form a dielectric island comprised of the dielectric material surrounded by the first surface of the first semiconductor substrate that is exposed.
The semiconductor material of the first semiconductor substrate remains on the dielectric island toward a second surface of the first semiconductor substrate. A layer of dielectric material is deposited on a second semiconductor substrate. The first surface of the first semiconductor substrate is placed on the layer of dielectric material of the second semiconductor substrate such that the dielectric island and the first surface of the first semiconductor substrate are bonded to the layer of dielectric material.

The second surface of the first semiconductor substrate is exposed, and a gate dielectric and a gate electrode are formed over a portion of the semiconductor material disposed on the dielectric island on the second surface of the first semiconductor substrate. A drain and source dopant is implanted into the second surface of the first semiconductor substrate that is exposed. A drain extension region and a source extension region are formed by the drain and source dopant being implanted in the semiconductor material disposed on the dielectric island. In addition, a drain contact region and a source contact region are formed by the drain and source dopant being implanted in the semiconductor material of the first semiconductor substrate disposed to sides of the dielectric island.

The present invention may be practiced to particular advantage when the first and second semiconductor substrates are comprised of silicon and when the dielectric island within the first semiconductor substrate and the layer of dielectric material on the second semiconductor substrate are comprised of silicon dioxide (SiO2), according to one embodiment of the present invention.

In this manner, the drain and source extension regions are formed with the thinner semiconductor material on the dielectric island to minimize short channel effects of the field effect transistor.
On the other hand, the drain and source contact regions are formed with the thicker semiconductor material to the sides of the dielectric island such that thicker drain and source silicides are formed. With thicker drain and source silicides, the series resistance at the drain and source of the field effect transistor is minimized to enhance the speed performance of the field effect transistor. In addition, the field effect transistor is formed in SOI (semiconductor on insulator) technology such that junction capacitance is minimized for the field effect transistor to further enhance the speed performance of the field effect transistor.

These and other features and advantages of the present invention will be better understood by considering the following detailed description of the invention which is presented with the attached drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a cross-sectional view of a conventional MOSFET (Metal Oxide Semiconductor Field Effect Transistor) fabricated within a bulk semiconductor substrate, according to the prior art;

FIG. 2 shows a cross-sectional view of a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) fabricated with SOI (semiconductor on insulator) technology for minimizing junction capacitance, according to the prior art;

FIGS. 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15 show cross-sectional views for illustrating the steps for forming a field effect transistor in SOI (semiconductor on insulator) technology with the drain and source extension regions formed with thinner semiconductor material on a dielectric island and with drain and source contact regions formed with thicker semiconductor material to the sides of the dielectric island, according to an embodiment of the present invention.

The figures referred to herein are drawn for clarity of illustration and are not necessarily drawn to scale. Elements having the same reference number in FIGS.
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15 refer to elements having similar structure and function.

DETAILED DESCRIPTION

In the cross-sectional view of FIG. 3, for fabricating a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) in SOI (semiconductor on insulator) technology, a hardmask layer 204 is formed on a first surface 205 of a first semiconductor substrate 202. The first semiconductor substrate 202 is comprised of silicon, and the hardmask layer 204 is comprised of silicon nitride (Si3N4), according to one embodiment of the present invention. In addition, a layer of masking material 206 is patterned to form an opening 207 in the layer of masking material 206. The layer of masking material 206 is comprised of photoresist material according to one embodiment of the present invention, and processes for patterning such material to form the opening 207 are known to one of ordinary skill in the art of integrated circuit fabrication.

Referring to FIG. 4, the exposed portion of the hardmask layer 204 and the first semiconductor substrate 202 is etched such that the opening 207 extends down through the hardmask layer 204 and down into the first semiconductor substrate 202. Processes for etching the exposed portion of the hardmask layer 204 comprised of silicon nitride (Si3N4) for example and the first semiconductor substrate 202 comprised of silicon for example are known to one of ordinary skill in the art of integrated circuit fabrication.

Referring to FIG. 5, the layer of masking material 206 is removed after the opening 207 is formed in the first semiconductor substrate 202. Processes for etching away the layer of masking material 206 comprised of photoresist material for example are known to one of ordinary skill in the art of integrated circuit fabrication. Further referring to FIG.
5, dielectric material 208 is deposited to fill the opening 207, and the dielectric material 208 is comprised of silicon dioxide (SiO2) according to one embodiment of the present invention. Processes for depositing such dielectric material 208 to fill the opening 207 are known to one of ordinary skill in the art of integrated circuit fabrication.

Referring to FIG. 6, the dielectric material 208 is polished down until the hardmask layer 204 is exposed, with the hardmask layer 204 used as a polish stop. Polishing processes such as CMP (chemical mechanical polishing) processes for polishing down the dielectric material 208 comprised of silicon dioxide (SiO2) for example with the hardmask layer 204 being used as a polish stop are known to one of ordinary skill in the art of integrated circuit fabrication. The dielectric material 208 contained within the opening 207 forms a dielectric island 210.

Referring to FIG. 7, the hardmask layer 204 is etched away to expose the first surface 205 of the first semiconductor substrate 202. Processes for etching away the hardmask layer 204 comprised of silicon nitride (Si3N4) for example are known to one of ordinary skill in the art of integrated circuit fabrication. Further referring to FIG. 7, the dielectric island 210 and the first surface 205 of the first semiconductor substrate 202 are further polished down such that the dielectric island 210 is level with the first surface 205 of the first semiconductor substrate 202. Polishing processes such as CMP (chemical mechanical polishing) processes for polishing down the dielectric island 210 and the first surface 205 of the first semiconductor substrate 202 are known to one of ordinary skill in the art of integrated circuit fabrication.

Referring to FIG. 8, a layer of dielectric material 214 is formed on a second semiconductor substrate 212.
The second semiconductor substrate 212 is comprised of silicon, and the layer of dielectric material 214 is comprised of silicon dioxide (SiO2) having a thickness in a range of from about 2000 angstroms to about 5000 angstroms according to one embodiment of the present invention. Processes for forming such a layer of dielectric material 214 on the second semiconductor substrate 212 are known to one of ordinary skill in the art of integrated circuit fabrication.

Referring to FIG. 9, the first surface 205 of the first semiconductor substrate 202 is placed on the layer of dielectric material 214 of the second semiconductor substrate 212. Pressure is applied against the first semiconductor substrate 202 and the second semiconductor substrate 212 such that the dielectric island 210 and the first surface 205 of the first semiconductor substrate 202 are bonded to the layer of dielectric material 214. In one embodiment of the present invention, referring to FIG. 10, the first semiconductor substrate 202 is bonded to the layer of dielectric material 214 of the second semiconductor substrate 212 within a chamber 213 with nitrogen gas (N2) from a nitrogen gas (N2) source 215 flowing through the chamber 213 to prevent oxidation of any exposed surfaces of the first and second semiconductor substrates 202 and 212. Mechanisms for flowing nitrogen gas (N2) through the chamber 213 are known to one of ordinary skill in the art of integrated circuit fabrication.

Referring to FIG. 11, in this manner, the dielectric island 210 and the first surface 205 of the first semiconductor substrate 202 are bonded to the layer of dielectric material 214 of the second semiconductor substrate 212. Further referring to FIG. 11, the second surface 217 of the first semiconductor substrate 202 is exposed.
In addition, the second surface 217 of the first semiconductor substrate 202 is also polished down to adjust the thickness 216 of the semiconductor material of the first semiconductor substrate 202 remaining on top of the dielectric island 210. The thickness 216 of the semiconductor material of the first semiconductor substrate 202 remaining on top of the dielectric island 210 is in a range of from about 50 angstroms to about 200 angstroms according to one embodiment of the present invention. On the other hand, the thickness 219 of the semiconductor material of the first semiconductor substrate 202 disposed to the sides of the dielectric island 210 is in a range of from about 500 angstroms to about 1000 angstroms according to one embodiment of the present invention. Processes for polishing down the second surface 217 of the first semiconductor substrate 202 such as CMP (chemical mechanical polishing) processes are known to one of ordinary skill in the art of integrated circuit fabrication.

Referring to FIG. 12, a gate dielectric 218 and a gate electrode 220 are formed on a portion of the thinner semiconductor material of the first semiconductor substrate 202 remaining on the dielectric island 210. Portions of the thinner semiconductor material of the first semiconductor substrate 202 remaining on the dielectric island 210 remain exposed to the sides of the gate dielectric 218 and the gate electrode 220.

The gate dielectric 218 is comprised of silicon dioxide (SiO2) according to one embodiment of the present invention. Alternatively, the gate dielectric 218 is comprised of a dielectric material such as metal oxide for example having a dielectric constant that is higher than that of silicon dioxide (SiO2).
When the gate dielectric 218 has a dielectric constant that is higher than the dielectric constant of silicon dioxide (SiO2), the gate dielectric 218 has a higher thickness than if the gate dielectric 218 were comprised of silicon dioxide (SiO2), to minimize undesired tunneling current through the gate dielectric 218. Processes for forming such a gate dielectric 218 are known to one of ordinary skill in the art of integrated circuit fabrication.

The gate electrode 220 is comprised of polysilicon according to one embodiment of the present invention. Processes for forming such a gate electrode 220 are known to one of ordinary skill in the art of integrated circuit fabrication. The present invention may be practiced with other types of materials for the gate dielectric 218 and the gate electrode 220, as would be apparent to one of ordinary skill in the art of integrated circuit fabrication from the description herein.

Referring to FIG. 13, a drain and source dopant is implanted into exposed regions of the first semiconductor substrate 202. The drain and source dopant is an N-type dopant such as phosphorus or arsenic for example for fabrication of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor). Alternatively, the drain and source dopant is a P-type dopant such as boron for example for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor). The dose of such drain and source dopant is in a range of from about 1*10^15/cm^2 to about 1*10^16/cm^2. Processes for implantation of such drain and source dopant are known to one of ordinary skill in the art of integrated circuit fabrication.

Further referring to FIG. 13, a drain extension region 222 and a source extension region 224 are formed by the drain and source dopant being implanted in the thinner semiconductor material of the first semiconductor substrate 202 disposed on the dielectric island 210.
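Because the single implantation step delivers the same areal dose everywhere, the average volumetric concentration of the dopant differs between the thin semiconductor material on the dielectric island and the thicker material beside it. The following sketch uses illustrative values picked from the ranges given above; the particular dose and thicknesses are assumptions, not prescribed values:

```python
# Average volumetric concentration N = areal dose / local silicon thickness,
# for a single implant into regions of different thickness. The dose and
# thicknesses below are illustrative values taken from the stated ranges.
dose = 1e15            # implant dose, atoms per cm^2
t_island = 100e-8      # thin material on the dielectric island: 100 angstroms, in cm
t_sides = 750e-8       # thicker material beside the island: 750 angstroms, in cm

n_island = dose / t_island   # average concentration on the island, per cm^3
n_sides = dose / t_sides     # average concentration beside the island, per cm^3

print(f"on island: {n_island:.1e} /cm^3, beside island: {n_sides:.1e} /cm^3")
```

The same dose thus yields a heavily doped thin extension layer and a proportionally lower average concentration in the thicker contact material, without requiring a second implantation step.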
In addition, a drain contact region 226 and a source contact region 228 are formed by the drain and source dopant being implanted in the thicker semiconductor material of the first semiconductor substrate 202 disposed to the sides of the dielectric island 210.

Referring to FIG. 14, the drain and source dopant within the drain and source extension regions 222 and 224 and within the drain and source contact regions 226 and 228 is activated in a laser thermal anneal process. In the laser thermal anneal process according to one embodiment of the present invention, laser beams are directed toward the drain and source extension regions 222 and 224 and toward the drain and source contact regions 226 and 228 with a laser fluence in a range of from about 0.5 Joules/cm^2 to about 0.8 Joules/cm^2 for a time period of from about 1 nanosecond to about 10 nanoseconds. Laser thermal anneal processes for activating dopant within semiconductor material are known to one of ordinary skill in the art of integrated circuit fabrication.

Referring to FIG. 15, spacers 230 are formed at the sidewalls of the gate dielectric 218 and the gate electrode 220 to be disposed over the drain and source extension regions 222 and 224. The spacers 230 are comprised of a dielectric material such as silicon dioxide (SiO2) according to one embodiment of the present invention. Processes for forming such spacers 230 are known to one of ordinary skill in the art of integrated circuit fabrication.

Further referring to FIG. 15, a drain silicide 232 is formed with the drain contact region 226, and a source silicide 234 is formed with the source contact region 228. A gate silicide 236 is formed with the gate electrode 220. Silicidation processes for forming the drain silicide 232, the source silicide 234, and the gate silicide 236 are known to one of ordinary skill in the art of integrated circuit fabrication.
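For context, the stated fluence and pulse-duration ranges for the laser thermal anneal imply very high instantaneous power densities at the wafer surface. The corner-case arithmetic below (power density = fluence / pulse duration) is an illustrative check only, not part of the described process:

```python
# Instantaneous power density P = fluence / pulse duration for the corner
# cases of the stated ranges: 0.5 to 0.8 J/cm^2 delivered over 1 to 10
# nanoseconds.
fluence_low, fluence_high = 0.5, 0.8     # laser fluence, in J/cm^2
pulse_short, pulse_long = 1e-9, 10e-9    # pulse duration, in seconds

p_max = fluence_high / pulse_short       # highest fluence, shortest pulse, in W/cm^2
p_min = fluence_low / pulse_long         # lowest fluence, longest pulse, in W/cm^2

print(f"{p_min:.1e} to {p_max:.1e} W/cm^2")
```

The resulting power densities (tens of megawatts to nearly a gigawatt per square centimeter) heat only a thin surface region for nanoseconds, which is what allows the dopant to be activated in such short time periods.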
The drain silicide 232, the source silicide 234, and the gate silicide 236 provide contact to the drain, the source, and the gate, respectively, of the MOSFET.

In this manner, the drain and source extension regions 222 and 224 are formed with the thinner semiconductor material on the dielectric island 210 to minimize short channel effects of the MOSFET. On the other hand, the drain and source contact regions 226 and 228 are formed with the thicker semiconductor material to the sides of the dielectric island 210 such that thicker drain and source silicides 232 and 234 are formed. With thicker drain and source silicides 232 and 234, series resistance at the drain and source of the MOSFET is minimized to enhance the speed performance of the MOSFET. In addition, the MOSFET is formed in SOI (semiconductor on insulator) technology such that junction capacitance is minimized for the MOSFET to enhance the speed performance of the MOSFET.

Furthermore, referring to FIG. 13, the drain and source extension regions 222 and 224 and the drain and source contact regions 226 and 228 are formed with one implantation step for implanting the drain and source dopant. In contrast, referring to the MOSFET 150 formed in SOI (semiconductor on insulator) technology according to the prior art, two separate implantation steps are used for forming the drain and source extension junctions 156 and 158 and then forming the drain and source contact regions 162 and 164. A separate implantation step is used for forming the drain and source extension junctions 156 and 158 having a shallower depth than the drain and source contact junctions 162 and 164, according to the prior art. The extra implantation step of the prior art may slow down the manufacture of integrated circuits.
The dielectric island 210 limits the depth of drain and source extension regions 222 and 224 according to an aspect of the present invention such that just one implantation step is used for forming both the drain and source extension regions 222 and 224 and the drain and source contact regions 226 and 228 for more efficient manufacture of integrated circuits.

The foregoing is by way of example only and is not intended to be limiting. For example, any specified material or any specified dimension of any structure described herein is by way of example only. Furthermore, as will be understood by those skilled in the art, the structures described herein may be made or used in the same way regardless of their position and orientation. Accordingly, it is to be understood that terms and phrases such as "side" and "on" as used herein refer to relative location and orientation of various portions of the structures with respect to one another, and are not intended to suggest that any particular absolute orientation with respect to external objects is necessary or required.

The present invention is limited only as defined in the following claims and equivalents thereof.
In the present method of fabricating a semiconductor device, openings of different configurations (for example, different aspect ratios) are provided in a dielectric layer. Substantially undoped copper is deposited over the dielectric layer, filling the openings and extending above the dielectric layer, the different configurations of the openings providing an upper surface of the substantially undoped copper that is generally non-planar. A portion of the substantially undoped copper is removed to provide a substantially planar upper surface thereof, and a layer of doped copper is deposited on the upper surface of the substantially undoped copper. An anneal step is undertaken to diffuse the doping element into the copper in the openings.
What is claimed is:

1. A method of fabricating a semiconductor device comprising: providing an opening in a dielectric layer; depositing substantially undoped copper in the opening, and providing a substantially non-planar upper surface of the substantially undoped copper; removing a portion of the substantially undoped copper to increase the planarity of the semiconductor device; depositing a layer of copper containing a dopant element on the upper surface of the semiconductor device; and annealing to diffuse the dopant element into the copper in the opening.

2. The method of claim 1 wherein the substantially undoped copper is deposited to fill the opening.

3. The method of claim 2 wherein the step of depositing substantially undoped copper is a single deposition step.

4. The method of claim 3 and further comprising the step of removing a portion of the substantially undoped copper to provide a substantially planar upper surface of the semiconductor device.

5. The method of claim 1 and further comprising the step of depositing the substantially undoped copper by electroplating.

6. The method of claim 1 and further comprising the step of depositing the layer of copper containing a dopant element by physical vapor deposition (PVD).

7.
A method of fabricating a semiconductor device comprising: providing a plurality of openings in a dielectric layer, at least first and second openings of the plurality thereof having different configurations; depositing substantially undoped copper on the dielectric layer, filling the openings and extending above the dielectric layer, the different configurations of the first and second openings providing that an upper surface of the substantially undoped copper is generally non-planar; removing a portion of the substantially undoped copper to increase the planarity of the upper surface thereof; depositing a layer of copper containing a dopant element on the upper surface of the substantially undoped copper; and annealing to diffuse the dopant element into the copper in the openings.

8. The method of claim 7 wherein the step of depositing substantially undoped copper on the dielectric layer and in the openings is a single deposition step.

9. The method of claim 8 and further comprising the step of removing a portion of the substantially undoped copper to provide a substantially planar upper surface thereof.

10. The method of claim 9 and further comprising planarizing the semiconductor device after the annealing step.

11. The method of claim 8 and further comprising the step of depositing the substantially undoped copper by electroplating.

12. The method of claim 8 and further comprising the step of depositing the layer of copper containing a dopant element by physical vapor deposition (PVD).
BACKGROUND OF THE INVENTION

1. Technical Field

This invention relates generally to a method of forming copper interconnects having high electromigration resistance.

2. Background Art

Recently, copper has received considerable attention as a candidate for replacing aluminum and/or tungsten in wiring and interconnection technology for very large-scale integration (VLSI) and ultra-large-scale integration (ULSI) applications. In particular, copper has a lower resistivity than aluminum or tungsten, and in addition has high conformality and filling capability for deposition in via holes and trenches, along with low deposition temperature.

However, a disadvantage of copper is that it exhibits poor electromigration resistance. That is, with current flow through a copper conductor, copper atoms may migrate to cause a break in the metal.

U.S. Pat. No. 6,022,808 to Nogami et al., issued Feb. 8, 2000, and assigned to the Assignee of this invention (herein incorporated by reference), discloses a method for improving the electromigration resistance of copper in this environment. In furtherance thereof, interconnects are formed in vias and/or trenches in a dielectric by depositing undoped copper, and then a copper layer containing a dopant element is deposited on the undoped copper. An annealing step is undertaken to diffuse dopant into the previously undoped copper, thereby improving the electromigration resistance of the copper. Also of general interest is U.S. Pat. No. 6,346,479 to Woo et al., issued on Feb. 12, 2002 and assigned to the Assignee of this invention (herein incorporated by reference).

While this method is significantly advantageous, a device environment with varying feature sizes presents special problems, as will now be described with reference to FIGS. 1-6.

FIG. 1 is a cross-section of a semiconductor device 20 illustrating a step in a prior process.
As shown therein, a dielectric layer 22, such as silicon dioxide or other material having a low dielectric constant, is formed above a semiconductor substrate 24, typically comprising monocrystalline silicon. It should be understood, however, that dielectric layer 22 may be an interlayer dielectric a number of layers above the surface of the semiconductor substrate 24.

Openings 26, 28, 30, 32 are formed in the dielectric layer 22 using conventional photolithographic and etching techniques. These openings 26-32 represent holes for forming contacts or vias, or trenches for forming interconnect lines. As shown in FIG. 1, openings 26-32 each have the same depth, and the widths of the openings 26, 28, 30 are substantially the same, while the width of the opening 32 is substantially greater than the widths of the openings 26, 28, 30. Thus, openings 26, 28, 30 have high aspect ratios, and opening 32 has a lower aspect ratio.

With reference to FIG. 2, if chosen, a layer 34 may be included, made up of a diffusion barrier layer deposited over the structure, and a copper seed layer deposited over the diffusion barrier layer, as is well-known and described in the above-cited patents.

FIG. 3 illustrates the step of depositing an undoped copper layer 36 over the resulting structure by, for example, electroplating. The undoped copper 36 fills the openings 26, 28, 30, 32 and is deposited to define an upper surface 38 which extends above the dielectric layer 22. As will be seen in FIG. 3, because of the small features defined by the openings 26, 28, 30, and their close proximity, the surface portion 38A over those openings 26, 28, 30 is generally planar in configuration. However, because of the substantially greater width of the opening 32, the surface portion 38B over the opening 32 is recessed relative to the surface portion 38A over the openings 26, 28, 30, causing the overall upper surface 38 of the copper 36 to be substantially non-planar.

Next, as illustrated in FIG.
4, a layer of doped copper 40 is sputter deposited on the undoped copper layer 36. Annealing is then undertaken to diffuse dopant element atoms 42 from the doped copper layer 40 into the undoped copper layer 36 (FIG. 5).

During this step, because of the substantial planarity of the surface portion 38A over the openings 26, 28, 30, the copper 36 in each opening 26, 28, 30 will be doped generally to the same concentration. However, because the surface portion 38B of the copper layer 36 is recessed over the opening 32 (causing reduced volume of copper under the layer 40 adjacent the opening 32), the concentration of dopant 42 in the copper 36 in opening 32 will be substantially higher. After chemical mechanical polishing (CMP) to provide that the surface of the copper 36 in each opening 26, 28, 30, 32 is coplanar with the upper surface of the dielectric layer 22, it will be seen that features 36A, 36B, 36C, 36D are formed, with feature 36D being of a configuration different from features 36A, 36B, 36C. In accordance with the analysis above, the concentration of dopant 42 in the feature 36D is higher than in any of the features 36A, 36B, 36C, i.e., the concentration of dopant 42 is dependent on feature size. Thus, uniformity in electromigration resistance from feature to feature is not achieved.

Therefore, what is needed is a method for providing substantially uniform concentration of dopant material in copper interconnects of varying feature sizes.

DISCLOSURE OF THE INVENTION

In the present method of fabricating a semiconductor device, openings of different configurations, for example, different aspect ratios, are provided in a dielectric layer. Substantially undoped copper is deposited over the dielectric layer, filling the openings and extending above the dielectric layer, the different configurations of the openings providing an upper surface of the substantially undoped copper that is generally non-planar.
A portion of the substantially undoped copper is removed to increase the planarity of the upper surface thereof, and a layer of doped copper is deposited on the upper surface of the substantially undoped copper. An anneal step is undertaken to diffuse the doping element into the copper in the openings.

The present invention is better understood upon consideration of the detailed description below, in conjunction with the accompanying drawings. As will become readily apparent to those skilled in the art from the following description, there is shown and described an embodiment of this invention simply by way of illustration of the best mode to carry out the invention. As will be realized, the invention is capable of other embodiments and its several details are capable of modification in various obvious respects, all without departing from the scope of the invention. Accordingly, the drawings and detailed description will be regarded as illustrative in nature and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, and further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:

FIGS. 1-6 are cross-sectional views illustrating steps of a process of the prior art; and

FIGS. 7-13 are cross-sectional views illustrating steps of the process of the present invention.

BEST MODE(S) FOR CARRYING OUT THE INVENTION

Reference is now made in detail to a specific embodiment of the present invention which illustrates the best mode presently contemplated by the inventors for practicing the invention.

As shown in FIG.
7, in the fabrication of a semiconductor device 50, a dielectric layer 52, such as silicon dioxide or other material having a low dielectric constant, is formed above a semiconductor substrate 54, typically comprising monocrystalline silicon. It should be understood, again, that dielectric layer 52 may be an interlayer dielectric formed a number of layers above the surface of the semiconductor substrate 54.

Openings 56, 58, 60, 62 are formed in the dielectric layer 52 using conventional photolithographic and etching techniques. These openings 56-62 represent holes for forming contacts or vias or trenches for forming interconnect lines. As shown in FIG. 7, the openings each have substantially the same depth, and the widths of the openings 56, 58, 60 (in close proximity to each other) are substantially the same, i.e., relatively narrow in configuration so as to have a relatively high aspect ratio, while the opening 62, on the other hand, is relatively wide in configuration so as to have a lower aspect ratio than the openings 56, 58, 60.

As an option, a diffusion barrier layer may be deposited over the resulting structure, as is well-known. Such a diffusion barrier can comprise any of a variety of materials, such as Ta, TaN, TiN, TiW, or Ti. The diffusion barrier layer can be formed at a suitable thickness, such as about 30 angstroms to about 1500 angstroms. A seed layer can be deposited on the barrier layer for enhanced nucleation and adhesion of the copper later applied. The barrier layer and seed layer are indicated by the layer 64 (FIG. 8).

A substantially undoped copper layer 66 is deposited over the resulting structure in a single deposition step (FIG. 9) by, for example, electroplating to a sufficient thickness to fill each of the openings 56-62 with copper, forming an upper surface 68 thereof which extends above the dielectric layer 52.
As previously noted, because of the small feature sizes defined by the openings 56, 58, 60, and their close proximity, the surface portion 68A over these openings 56, 58, 60 is generally planar in configuration. However, because of the substantially greater width of the opening 62, the surface portion 68B over opening 62 is recessed relative to the surface portion 68A over the openings 56, 58, 60, causing the overall upper surface 68 of the copper 66 to be substantially non-planar.

Then, a planarization step of the copper layer 66 is undertaken (FIG. 10), using, for example, chemical mechanical polish (CMP), electropolishing, or electroplating planarization. During this step, copper is removed from the layer 66, resulting in the upper surface 70 of the copper layer 66 being planarized, substantially parallel to the upper surface of the dielectric layer 52. Thus, no recess exists in the upper surface 70 of the copper layer 66, as compared to the prior art.

After this step, a doped layer of copper 72, i.e., an alloy of copper and a dopant element, is deposited on the upper surface 70 of the undoped copper layer 66 to a thickness of, for example, 500-5000 angstroms by, for example, physical vapor deposition (PVD), chemical vapor deposition (CVD), or enhanced chemical vapor deposition (ECVD) (FIG. 11). The doped copper layer 72 contains a dopant element which, upon diffusing into the undoped copper 66 to form an alloy with the undoped copper 66, improves the electromigration resistance of the copper 66. Suitable dopant elements include Pd, Zr, Sn, Mg, Cr, and Ta. The dopant atoms are so diffused into the undoped copper 66 by undertaking an annealing step, at, for example, 200-400° C. for a few minutes to one hour (FIG. 12).

Because of the overall substantial planarity of the upper surface 70 of the layer 66, the concentration of dopant in the copper in each opening 56, 58, 60, 62 will be substantially the same. Then, when a CMP step is undertaken to planarize the entire structure (FIG.
13) and form individual copper features 66A, 66B, 66C, 66D in the respective openings 56, 58, 60, 62, even though the copper feature 66D has a configuration different from the configuration of the copper features 66A, 66B, 66C, each of the copper features 66A, 66B, 66C, 66D will be doped to substantially the same concentration, resulting in a uniformity of increased resistance to electromigration from feature to feature, independent of feature size.

As noted, the doped copper layer 72 can be deposited using a conventional PVD chamber, a simple process for achieving alloy deposition.

The foregoing description of the embodiment of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Other modifications or variations are possible in light of the above teachings.

The embodiment was chosen and described to provide the best illustration of the principles of the invention and its practical application to thereby enable one of ordinary skill in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally and equitably entitled.
A method of compressing a sequence of program instructions begins by examining a program instruction stream to identify a sequence of two or more instructions that meet a parameter. The identified sequence of two or more instructions is replaced by a selected type of layout instruction which is then compressed. A method of decompressing accesses an X-index and a Y-index together as a compressed value. The compressed value is decompressed to a selected type of layout instruction which is decoded and replaced with a sequence of two or more instructions. An apparatus for decompressing includes a storage subsystem configured for storing compressed instructions, wherein a compressed instruction comprises an X-index and a Y-index. A decompressor is configured for translating an X-index and Y-index accessed from the storage subsystem to a selected type of layout instruction which is decoded and replaced with a sequence of two or more instructions.
What is claimed is:

1. A method of compressing a sequence of program instructions, the method comprising: identifying a sequence of two or more instructions that meet a parameter; replacing the identified sequence of two or more instructions by a selected type of layout instruction; and compressing the selected type of layout instruction to an X-index and a Y-index pair of compressed values.

2. The method of claim 1, wherein the parameter indicates a sequence of two no-operation (NOP) instructions.

3. The method of claim 2, wherein the replacing comprises: replacing the sequence of two NOP instructions with a single layout instruction that represents the sequence of two NOP instructions.

4. The method of claim 1, wherein the parameter indicates a sequence of two or more instructions that is a combination of one or more no-operation (NOP) instructions and one or more function instructions.

5. The method of claim 4, wherein the replacing comprises: replacing at least one of the NOP instructions in the sequence of two or more instructions with a single layout instruction that identifies the number of NOP instructions and the placement of the NOP instructions in the sequence.

6. The method of claim 1, wherein the parameter indicates one or more frequently used sequences of instructions.

7. The method of claim 6, wherein the replacing comprises: replacing an indicated sequence of instructions with a single layout instruction that identifies each instruction in the indicated sequence of instructions.

8. The method of claim 1, wherein the sequence of two or more instructions is a very long instruction word packet of two or more instructions.

9.
A method of decompressing a compressed value representing a sequence of instructions, the method comprising: accessing an X-index and a Y-index together as a compressed value; decompressing the compressed value to a selected type of layout instruction; and decoding the selected type of layout instruction to replace the selected type of layout instruction with a sequence of two or more instructions.

10. The method of claim 9, wherein the decompressing comprises: selecting an X pattern from an X pattern memory according to the X-index; selecting a Y pattern from a Y pattern memory according to the Y-index; and combining the X pattern with the Y pattern according to a mix mask to create the selected type of layout instruction.

11. The method of claim 9, wherein the selected type of layout instruction indicates a sequence of two no-operation (NOP) instructions.

12. The method of claim 9, wherein the selected type of layout instruction indicates a sequence of two or more instructions that is a combination of one or more no-operation (NOP) instructions and one or more function instructions.

13. The method of claim 9, wherein the selected type of layout instruction indicates one or more frequently used sequences of instructions.

14. The method of claim 9, wherein the decompressing is accomplished on an instruction fetch from a memory hierarchy of a processor core.

15. An apparatus for decompressing a compressed value representing a sequence of instructions, the apparatus comprising: a storage subsystem configured for storing compressed instructions, wherein a compressed instruction comprises an X-index and a Y-index; a decompressor configured for translating an X-index and Y-index accessed from the storage subsystem to a selected type of layout instruction; and a decoder configured for replacing the selected type of layout instruction with a sequence of two or more instructions.

16.
The apparatus of claim 15, wherein the decompressor comprises: an X pattern memory operable to store X patterns that are selected according to the X-index; a Y pattern memory operable to store Y patterns that are selected according to the Y-index; and a combiner configured for combining a selected X pattern with a selected Y pattern according to a mix mask to create the selected type of layout instruction.

17. The apparatus of claim 15, wherein the sequence of two or more instructions is a sequence of two no-operation (NOP) instructions.

18. The apparatus of claim 15, wherein the sequence of two or more instructions is a combination of one or more no-operation (NOP) instructions and one or more function instructions.

19. The apparatus of claim 15, wherein the sequence of two or more instructions is a frequently used sequence of instructions.

20. The apparatus of claim 15, wherein the storage subsystem comprises: a level 1 instruction cache operable to store the compressed instructions.
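Claims 2 and 3 above describe replacing a sequence of two NOP instructions with a single layout instruction before the X-index/Y-index compression of claim 1. A minimal Python sketch of that replacement step follows; the token names `NOP` and `LAYOUT_2NOP` are hypothetical placeholders, since the claims do not specify concrete encodings.

```python
NOP = "NOP"
LAYOUT_2NOP = "LAYOUT_2NOP"  # hypothetical layout token for two consecutive NOPs

def replace_nop_pairs(instructions):
    """Replace each adjacent pair of NOPs with one layout instruction (claim 3).

    The resulting layout instruction would then be compressed to an
    X-index/Y-index pair as described in claim 1.
    """
    out, i = [], 0
    while i < len(instructions):
        if instructions[i] == NOP and i + 1 < len(instructions) and instructions[i + 1] == NOP:
            out.append(LAYOUT_2NOP)
            i += 2
        else:
            out.append(instructions[i])
            i += 1
    return out
```

For example, `replace_nop_pairs(["ADD", "NOP", "NOP", "MUL"])` returns `["ADD", "LAYOUT_2NOP", "MUL"]`, shortening the stream by one slot before entropy coding.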
METHODS AND APPARATUS FOR STORAGE AND TRANSLATION OF AN ENTROPY ENCODED INSTRUCTION SEQUENCE TO EXECUTABLE FORM

Cross Reference to Related Applications

{0001} U.S. Patent Application Serial No. 13/099,463 filed May 3, 2011 entitled "Methods and Apparatus for Storage and Translation of Entropy Encoded Software Embedded within a Memory Hierarchy", has the same assignee as the present application, is a related application, and is hereby incorporated by reference in its entirety.

Field of the Invention

{0002} The present invention relates generally to processors having compressed instruction sets for improving code density in embedded applications, and more specifically to techniques for generating a compressed representation of an instruction sequence, storing the compressed instruction sequence, and translating the compressed instruction sequence to executable machine coded program instructions.

Background of the Invention

{0003} Many portable products, such as cell phones, laptop computers, personal digital assistants (PDAs) or the like, require the use of a processor executing a program supporting communication and multimedia applications. The processing system for such products includes one or more processors, each with storage for instructions, input operands, and results of execution. For example, the instructions, input operands, and results of execution for a processor may be stored in a hierarchical memory subsystem consisting of a general purpose register file, multi-level instruction caches, data caches, and a system memory.

{0004} In order to provide high code density, a native instruction set architecture (ISA) may be used having two instruction formats, such as a 16-bit instruction format that is a subset of a 32-bit instruction format.
In many cases, a fetched 16-bit instruction is transformed by a processor into a 32-bit instruction prior to or in a decoding process which allows the execution hardware to be designed to only support the 32-bit instruction format. The use of 16-bit instructions that are a subset of 32-bit instructions is a restriction that limits the amount of information that can be encoded into a 16-bit format. For example, a 16-bit instruction format may limit the number of addressable source operand registers and destination registers that may be specified. A 16-bit instruction format, for example, may use 3-bit or 4-bit register file address fields, while a 32-bit instruction may use 5-bit fields. Processor pipeline complexity may also increase if the two formats are intermixed in a program due in part to instruction addressing restrictions, such as branching to 16-bit and 32-bit instructions. Also, requirements for code compression vary from program to program, making a fixed 16-bit instruction format chosen for one program less advantageous for use by a different program. In this regard, legacy code for existing processors may not be able to effectively utilize the two instruction formats to significantly improve code density and meet real time requirements. These and other restrictions limit the effectiveness of reduced size instructions having fields that are subsets of fields used in the standard size instructions.

SUMMARY OF THE DISCLOSURE

{0005} Among its several aspects, embodiments of the invention address a need to improve code density by compressing sequences of program instructions, storing the compressed sequences and translating the compressed sequences to executable sequences of instructions. The techniques addressed herein allow highly efficient utilization of storage and a transmission conduit for embedded software.

{0006} To such ends, an embodiment of the invention applies a method of compressing a sequence of program instructions.
A sequence of two or more instructions that meet a parameter is identified. The identified sequence of two or more instructions is replaced by a selected type of layout instruction. The selected type of layout instruction is compressed to an X-index and a Y-index pair of compressed values.

{0007} Another embodiment of the invention addresses a method of decompressing a compressed value representing a sequence of instructions. An X-index and a Y-index are accessed together as a compressed value. The compressed value is decompressed to a selected type of layout instruction. The selected type of layout instruction is decoded to replace the selected type of layout instruction with a sequence of two or more instructions.

{0008} Another embodiment of the invention addresses an apparatus for decompressing a compressed value representing a sequence of instructions. A storage subsystem is configured for storing compressed instructions, wherein a compressed instruction comprises an X-index and a Y-index. A decompressor is configured for translating an X-index and Y-index accessed from the storage subsystem to a selected type of layout instruction. A decoder is configured for replacing the selected type of layout instruction with a sequence of two or more instructions.

{0009} A more complete understanding of the embodiments of the invention, as well as further features and advantages of the invention, will be apparent from the following Detailed Description and the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

{0010} FIG. 1 is a block diagram of an exemplary wireless communication system in which an embodiment of the invention may be advantageously employed;

{0011} FIG. 2 is a system for code compression designed for efficient and low cost run time decompression in accordance with embodiments of the invention;

{0012} FIG.
3 illustrates exemplary elements of an instruction partition process that splits an instruction based on a mix mask into an X pattern and a Y pattern with byte overlap pad bits in accordance with an embodiment of the invention;

{0013} FIG. 4 is a decompressor and execution system wherein programs stored in compressed form in a level 2 cache and a level 1 cache are decompressed for execution in accordance with an embodiment of the invention;

{0014} FIG. 5 illustrates exemplary very long instruction word (VLIW) packet formats comprising a first unpacked VLIW packet and a first compressed VLIW packet in accordance with an embodiment of the invention;

{0015} FIG. 6 illustrates exemplary VLIW packet formats comprising a second unpacked VLIW packet, a second VLIW compressed packet, and a third VLIW compressed packet in accordance with an embodiment of the invention;

{0016} FIG. 7 illustrates an exemplary listing of no-operation (NOP) and function instruction combinations supporting a VLIW compressed packet in accordance with an embodiment of the invention;

{0017} FIG. 8 illustrates exemplary VLIW packet formats comprising a third uncompressed VLIW packet, comprising frequently used pairs of instructions, and a fourth VLIW compressed packet in accordance with an embodiment of the invention;

{0018} FIG. 9A illustrates a process for compacting a sequence of program instructions in accordance with an embodiment of the invention;

{0019} FIG. 9B illustrates a process for decoding a compressed value representing a sequence of program instructions in accordance with an embodiment of the invention; and

{0020} FIG. 10 illustrates an exemplary decompression state diagram in accordance with an embodiment of the invention.

DETAILED DESCRIPTION

{0021} The present invention will now be described more fully with reference to the accompanying drawings, in which several embodiments of the invention are shown.
This invention may, however, be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.

{0022} Computer program code or "program code" for being operated upon or for carrying out operations according to the teachings of the invention may be initially written in a high level programming language such as C, C++, JAVA®, Smalltalk, JavaScript®, Visual Basic®, TSQL, Perl, or in various other programming languages. A source program or source code written in one of these languages is compiled to a target processor architecture by converting the high level program code into a native assembler program using instructions encoded in a native instruction format. For example, a native instruction format for an instruction set architecture (ISA) may be a fixed number of bits, such as a 32-bit format or a 16-bit format, or may be a variable number of bits, such as a combination of a 32-bit format and a 16-bit format. Programs for the target processor architecture may also be written directly in a native assembler language. The native assembler program uses instruction mnemonic representations of machine level binary instructions. Program code or computer readable medium produced by a compiler or a human programmer as used herein refers to machine language code such as object code whose format is understandable by a processor.

{0023} FIG. 1 illustrates an exemplary wireless communication system 100 in which an embodiment of the invention may be advantageously employed. For purposes of illustration, FIG. 1 shows three remote units 120, 130, and 150 and two base stations 140. It will be recognized that common wireless communication systems may have many more remote units and base stations.
Remote units 120, 130, 150, and base stations 140 which include hardware components, software components, or both as represented by components 125A, 125C, 125B, and 125D, respectively, have been adapted to embody the invention as discussed further below. FIG. 1 shows forward link signals 180 from the base stations 140 to the remote units 120, 130, and 150 and reverse link signals 190 from the remote units 120, 130, and 150 to the base stations 140.

{0024} In FIG. 1, remote unit 120 is shown as a mobile telephone, remote unit 130 is shown as a portable computer, and remote unit 150 is shown as a fixed location remote unit in a wireless local loop system. By way of example, the remote units may alternatively be cell phones, pagers, walkie talkies, handheld personal communication system (PCS) units, portable data units such as personal digital assistants, or fixed location data units such as meter reading equipment. Although FIG. 1 illustrates remote units according to the teachings of the disclosure, the disclosure is not limited to these exemplary illustrated units. Embodiments of the invention may be suitably employed in any processor system.

{0025} FIG. 2 is a compression system 200 for code compression designed for efficient and low cost run time decompression in accordance with embodiments of the invention. The compression system 200 includes source code as described above and binary library files in uncompressed form in source code and library files 204 which comprise the current program application being compiled. The compression system 200 also includes a compiler and linker 206 and optional profile feedback information 208, which are used to generate linked executable code 210 based on native instruction set architecture (ISA) formats and supporting data sections 212.
The native ISA is represented by a fixed, uncompressed format and can represent a variety of approaches, including, for example, fixed 64-, 32-, or 16-bit encodings and a mixture of such encodings. The native ISA is developed for general utility and not specifically tailored for a particular application. By maintaining fixed word boundaries, such as 32-bit instruction word boundaries, an addressing model that supports only fixed word addresses for branches, calls, returns, and the like may be used even though 16-bit and 32-bit instructions may be mixed together in the code.

{0026} Instructions selected from such an ISA may be compressed and tailored to the current application while maintaining addressability of the code and guaranteeing fast, fixed latency decompression time. Such compression may be automated to produce the compressed code in linear time. The native ISA has low informational entropy, which is increased in accordance with an embodiment of the invention by producing a custom entropy bounded encoding for the given source code and library files 204. Informational entropy in the present context is defined as a measure of informational content in a current representation of a program. The informational entropy may be viewed as a ratio of the current representation of the program using native instruction symbols and a shortest possible representation of the program using compressed symbols which preserves the original functionality of the program. The granularity of the alphabet used to create a compressed program is at an individual byte level, as an atomic storable and addressable entity in a computational system. The informational content that is preserved in both program representations is the original functionality of the program. For example, an entropy of "1" may represent one specific exact program in its shortest possible representation.
A program having an entropy less than "1" indicates that more than one program may be supported and possibly a very large number of programs may be supported, which requires an increased storage capacity in the memory.

{0027} In FIG. 2, the linked executable code 210 is provided as input to a translation tool 216 which generates compressed code 218 and decoding tables 220. The compressed code 218 and the supporting data sections 212 are stored in a storage device 214, such as a hard disk, optical disk, flash memory of an embedded device or other such storage medium from which selected code may be provided to a processor complex 203 for execution. The processor complex 203 includes a main memory 222, a level 2 cache (L2 cache) 224, a level 1 instruction cache (L1 cache) 226, and a processor core 228. The processor core 228 includes a decoder 230 having translation memory (TM) 232 in accordance with an embodiment and an execution pipeline 234. Compressed code is stored in the storage device 214, main memory 222, the L2 cache 224, and the L1 cache 226. Decompressed code is generally fetched from the L1 cache 226 and executed by the execution pipeline 234. Various embodiments of the translation tool 216 for generating the compressed code 218 and for decoding compressed instructions in decoder 230 are described in more detail below.

{0028} The processor complex 203 may be suitably employed in components 125A-125D of FIG. 1 for executing program code that is stored in compressed form in the L1 Icache 226, the L2 cache 224 and the main memory 222. Peripheral devices which may connect to the processor complex are not shown for clarity of discussion. The processor core 228 may be a general purpose processor, a digital signal processor (DSP), an application specific processor (ASP) or the like.
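The informational-entropy measure defined in paragraph {0026} is a size ratio rather than classical Shannon entropy. The sketch below illustrates the ratio as defined and, for comparison only, a byte-level Shannon estimate over the byte alphabet mentioned in {0026}; the Shannon estimate is an illustrative aside, not part of the disclosed method.

```python
from collections import Counter
import math

def informational_entropy(native_size, compressed_size):
    """Paragraph {0026}'s measure: ratio of the shortest (compressed)
    representation to the current (native) one; 1.0 means already minimal."""
    return compressed_size / native_size

def byte_entropy_bits(data):
    """Classical Shannon entropy in bits per byte, for comparison; the
    alphabet is individual bytes, matching the granularity noted in {0026}."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())
```

For instance, a program image that a custom encoding shrinks from 200 KB to 100 KB has an informational entropy of 0.5 under this measure, while a uniformly random byte stream approaches 8 bits per byte under the Shannon estimate and is essentially incompressible.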
The various components of the processor complex 203 may be implemented using application specific integrated circuit (ASIC) technology, field programmable gate array (FPGA) technology, or other programmable logic, discrete gate or transistor logic, or any other available technology suitable for an intended application. Though a single processor core 228 is shown, the processing of compressed instructions of an embodiment of the invention is applicable to superscalar designs and other architectures implementing parallel pipelines, such as multithreaded, multi-core, and very long instruction word (VLIW) designs.

{0029} FIG. 3 illustrates exemplary elements 300 of an instruction partition process that splits a native ISA fixed size instruction A 302 based on a binary mix mask (MM) 304 into an Ax pattern 306 and an Ay pattern 308 with overlap pad bits 310 and 312 in accordance with an embodiment of the invention. Pad bits are produced due to requirements imposed by modern memory systems to represent instructions and data at least in byte granularity segments. The use of formats having byte granularity segments is utilized to provide a compacted representation allowing storage overlap on bit granularity while satisfying byte granularity requirements of the storage system.

{0030} To compress a native ISA code segment, the code segment is partitioned into groups of instructions, with each group contributing a single shared X pattern and a set of unique Y patterns. The Ax pattern 306 represents a bit pattern that is common to a group of instructions to which instruction A belongs. The Ay pattern 308 represents a bit pattern embodying the differences between the native instruction A 302 and the Ax pattern 306. Note that a code segment can be partitioned into any number of groups between one and N, where N is the number of native instructions in the code segment.
The X patterns for the code segment are stored in an X dictionary comprised of an X memory and the Y patterns for the code segment are stored in a Y dictionary comprised of a Y memory. An X index is an address of a location in the X memory and a Y index is an address of a location in the Y memory. A combination of these two indexes, patterns from the X and the Y dictionaries, and the binary mix mask deterministically represents the native instruction.

{0031} A compress operation 314 uses at least one mix mask 304 for the code segment to select from a native instruction 302, an Ax pattern 306 and an Ay pattern 308. In the following examples, a hexadecimal number or hex number is represented with a '0x' prefix. For example, the native instruction 302 [0x9F6D0121] is combined with the mix mask 304 [0xFF80FF80] to produce the Ax pattern 306 [0x9F00(8,9,A,B)] and the Ay pattern 308 [0xDA8(4,5,6,7)]. The numbers in parentheses, for example (8,9,A,B), represent a set of possible numbers from which one number may be selected for a particular 4-bit position because of two "don't care" states of the overlap pad bits 310. A decompress operation 316 in the decoder 230 uses the at least one mix mask for the code segment, an X pattern fetched from the X memory using the X index, and a Y pattern fetched from the Y memory using the Y index to decompress the compressed instruction. For example, the mix mask 304 [0xFF80FF80] is combined with the Ax pattern 306, [0x9F00(8,9,A,B)] fetched from the X memory, and the Ay pattern 308, [0xDA8(4,5,6,7)] fetched from the Y memory, to produce the native instruction 302 [0x9F6D0121].
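The worked example in paragraph {0031} can be reproduced with a short Python sketch. It assumes the mix mask routes 1-bits to the X pattern and 0-bits to the Y pattern, most significant bit first, with each pattern left-aligned into byte-granular storage and the don't-care pad bits written as zero; the actual hardware layout may of course differ.

```python
def pack(bits):
    """Left-align a bit list into byte-granular storage; the pad bits are
    don't-care and are written as zero here. Returns (value, true bit length)."""
    padded = bits + [0] * (-len(bits) % 8)
    value = 0
    for b in padded:
        value = (value << 1) | b
    return value, len(bits)

def split(instr, mix_mask, width=32):
    """Compress operation 314: partition a native instruction into X/Y patterns."""
    x_bits, y_bits = [], []
    for i in range(width - 1, -1, -1):          # walk instruction bits MSB first
        bit = (instr >> i) & 1
        (x_bits if (mix_mask >> i) & 1 else y_bits).append(bit)
    return pack(x_bits), pack(y_bits)

def combine(x, y, mix_mask, width=32):
    """Decompress operation 316: rebuild the native instruction from X/Y patterns."""
    (x_val, x_len), (y_val, y_len) = x, y
    x_total = x_len + (-x_len % 8)              # stored size including pad bits
    y_total = y_len + (-y_len % 8)
    xc = yc = 0                                 # bits consumed from each pattern
    instr = 0
    for i in range(width - 1, -1, -1):
        if (mix_mask >> i) & 1:
            bit = (x_val >> (x_total - 1 - xc)) & 1
            xc += 1
        else:
            bit = (y_val >> (y_total - 1 - yc)) & 1
            yc += 1
        instr = (instr << 1) | bit
    return instr
```

With the values from paragraph {0031}, `split(0x9F6D0121, 0xFF80FF80)` yields an 18-bit X pattern stored as 0x9F0080 (i.e., 0x9F00 followed by the (8,9,A,B) don't-care nibble with pad bits zeroed) and a 14-bit Y pattern stored as 0xDA84 (the (4,5,6,7) nibble with pad bits zeroed), and `combine` restores 0x9F6D0121.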
Index compression of X patterns, Y patterns, or both, makes use of a process that eliminates duplicate X patterns and duplicate Y patterns and overlaps pad bits, such as the overlap pad bits 310 and 312, in a byte addressable location, thus reducing double utilization of physical storage. With a single mix mask, all X patterns are of the same number of bits and all Y patterns are of the same number of bits. With different mix masks, it is possible to have a variety of different X and Y patterns for each native instruction. However, only a few combinations of mix masks generally provide mappings between the native ISA code segment and the compressed code segment taking the least storage relative to the number of mix mask combinations tested. A single mix mask which generates compressed code requiring low storage capacity is considered a near optimal mix mask. {0033} FIG. 4 is a decompressor and execution system 400 wherein programs stored in compressed form in the L2 cache 224 and the L1 cache 226 are decompressed for execution in the execution pipeline 234 in accordance with an embodiment of the invention. The L1 cache 226 includes XY index memory 402 that stores an X index and a Y index pair in addressable locations, such as XY entry 404 having a 7-bit X index value of 0x54 and a 9-bit Y index value of 0x134. A multiplexer 405 is used to select an XY entry on an L1 cache hit 406 or an XY value 407 from the L2 cache 224 on a miss in the L1 cache 226. On a miss in both the L1 cache 226 and the L2 cache 224, the multiplexer is used to select an XY entry from a main memory via a path 403. The decompression operation is accomplished in the decoder 230 having index X register 408, index Y register 410, X memory 412, Y memory 414, single mix mask (MM) register 416, and a combiner 418. Decompressed instruction storage 419 includes a plurality of uncompressed instructions 420 which may include a selected type of layout instruction as described in more detail below.
{0034} At program loading or in an embedded system boot process, main memory 222 is loaded with compressed code, X memory 412 and Y memory 414 are loaded with an associated X and Y dictionary context, and the single binary mix mask is set in MM register 416. Note that the X and Y memory context as well as the mix mask can be reloaded during execution if needed. For example, reloading may permit the original code to be compressed into multiple segments, each with its own custom encoding. For instance, some complex embedded systems, such as smart phones, can invoke multiple independent child processes from a main application, which do not share code space and are self-contained. Each such application can have its own custom encoding comprised of an X/Y dictionary and a MM, which is loaded at child process startup. For the remainder of the description, the decompressor system 400 is described using the decoder 230 having the single translation memory 232 of FIG. 2, and a single encoding is used for the whole system including any application code. {0035} Next, the execution pipeline 234 begins fetching instructions from the L1 Icache 226. Initially, each access to the L1 Icache may generate a miss indication 422 causing an access to the L2 cache 224. Initially, the access to the L2 cache 224 may also generate a miss causing an access to main memory 222 of FIG. 2, which responds with a compressed instruction that is loaded in the L2 cache 224 and forwarded over path 403 to the decoder 230 through multiplexer 405. The decoder 230 decompresses the XY index compressed instruction to an uncompressed format for storage in the decompressed instruction storage 419 and for execution in the execution pipeline 234 as described with regard to the decompress operation 316 of FIG. 3. After a period of operation, the L1 Icache 226 and L2 cache 224 may reach a steady state.
{0036} From a processor perspective, the execution pipeline 234 attempts a fetch operation with a fetch address and control signals 421 for an instruction to be searched for in the L1 Icache 226. The L1 Icache 226 may determine that the instruction is present. The L1 cache fetch operation, for example, is for XY entry 404 which is a hit in the L1 cache 226, causing the XY entry 404 to be passed through multiplexer 405 to the decoder 230. The XY entry 404 is split with the X index value 0x54 received in the index X register 408 and the Y index value 0x134 received in the index Y register 410. The X pattern 306 fetched from the X memory 412 at address 0x54 is provided to the combiner 418. The Y pattern 308 fetched from the Y memory 414 at address 0x134 is also provided to the combiner 418. The single mix mask (MM) 304 [0xFF80FF80] stored in MM register 416 is further provided to the combiner 418. The combiner 418 combines the appropriate bits from the X pattern 306 with the appropriate bits from the Y pattern 308 according to the MM 304 to produce the native instruction 302 that is stored in the decompressed instruction storage 419 and passed to the execution pipeline 234. {0037} In the described system, program content is stored in an implied encryption format. Even though no specific encryption type of data scrambling is performed on the instruction stream, program code is stored in the storage device 214, main memory 222, the L2 cache 224, and the L1 cache 226 in an application specific and compressed form. Since part of the encoded state of the program code resides inside the processor core 228 in the translation memory 232, which is not easily externally accessible in a final product, the storage 214 and upper memory hierarchy 222, 224, and 226 content is insufficient for restoring the original program, making it difficult to analyze or copy. {0038} VLIW architectures present a number of challenges to instruction set architecture (ISA) designers.
For example, each VLIW packet is comprised of multiple instructions, each generally bound to a specific execution unit and executed in parallel. Control transfer granularity is the whole VLIW packet by definition; there cannot be a jump target in the middle of such a VLIW packet. In one approach, a VLIW packet may be stored in memory using a unique encoding for each instruction to deterministically identify each instruction in the packet. But since the instructions encoded within a VLIW are supposed to be executed in parallel, having fully encoded 32-bit instructions, for example, may be wasteful of storage space and slow to decode, thus affecting performance. Indeed, decoding VLIW code may be a sequential task of parsing consecutive operations to determine VLIW packet boundaries. {0039} The other extreme is to form a VLIW packet fully populated with instructions, including no-operation (NOP) instructions, where each instruction may be explicitly determined by its position in the packet. Thus, each packet would have an instruction slot position for each functional unit that may be operated in parallel. For example, a processor architecture having a parallel issue rate of six instructions would have a corresponding VLIW packet of six instructions. Such an approach in the current framework could be viewed as a fully uncompressed stateful representation of a VLIW packet. One possible gain of this approach is an expansion of bits in each instruction in the VLIW packet to increase each instruction's capabilities. However, this method is wasteful in its own way for storing NOPs, since forming VLIW packets containing all useful instructions without dependencies for parallel execution is difficult to achieve. An embodiment of the present invention introduces a different approach that utilizes a compression infrastructure to more optimally encode VLIW packets. {0040} FIG.
5 illustrates exemplary very long instruction word (VLIW) packet formats 500 comprising a first uncompressed VLIW packet 502 and a first compressed VLIW packet 504 in accordance with an embodiment of the invention. The first uncompressed VLIW packet 502 comprises four 32-bit instruction set architecture (ISA) instructions, such as a 32-bit addition (ADD) instruction 506, a first 32-bit no-operation (NOP) instruction 507, a second 32-bit NOP instruction 508, and a 32-bit branch JUMP instruction 509. In an alternative embodiment utilizing a 16-bit ISA, four 16-bit instructions would be stored in an uncompressed VLIW 64-bit packet, for example. In FIG. 5, the two 32-bit NOP instructions 507 and 508 are a sequence of two NOPs that is identified by the translation tool 216 of FIG. 2 and compressed to an X[2nop] compressed field 516 and Y[2nop] compressed field 517 as shown in FIG. 3. The ADD instruction 506 and JUMP instruction 509 are each compressed to a corresponding X index and Y index pair compressed value, as also shown in FIG. 3. Thus, the first uncompressed VLIW packet 502 is compressed to form the first compressed VLIW packet 504. {0041} The first compressed VLIW packet 504 comprises three sets of X and Y compressed fields representing the four instructions 506-509. The 32-bit ADD instruction 506 is represented by an eight bit X[add] compressed field 514 and an eight bit Y[add] compressed field 515. The sequence of the first NOP instruction 507 and second NOP instruction 508 is represented by the eight bit X[2nop] compressed field 516 and eight bit Y[2nop] compressed field 517. The X[2nop] compressed field 516 and Y[2nop] compressed field 517 represent an entropy encoded sequence of two NOP instructions that are expanded when decoded into the first NOP instruction 507 and the second NOP instruction 508. The JUMP instruction 509 is represented by an eight bit X[jmp] compressed field 518 and an eight bit Y[jmp] compressed field 519. {0042} FIG.
6 illustrates exemplary VLIW packet formats 600 comprising a second uncompressed VLIW packet 602, a second compressed VLIW packet 603, and a third compressed VLIW packet 604 in accordance with an embodiment of the invention. The second uncompressed VLIW packet 602 comprises four 32-bit instructions, such as a first 32-bit no-operation (NOP) instruction 606, a second 32-bit NOP instruction 607, a 32-bit store instruction 608, and a third 32-bit NOP instruction 609. {0043} The second compressed VLIW packet 603 comprises three X and Y compressed fields representing the four instructions 606-609. The sequence of the first NOP instruction 606 and second NOP instruction 607 is represented by an eight bit X[2nop] compressed field 614 and eight bit Y[2nop] compressed field 615. The X[2nop] compressed field 614 and Y[2nop] compressed field 615 represent an entropy encoded sequence of two ISA NOP instructions that are expanded when decoded into the first NOP instruction 606 and the second NOP instruction 607. The 32-bit store instruction 608 is represented by an eight bit X[store] compressed field 616 and an eight bit Y[store] compressed field 617. The third NOP instruction 609 is represented by an eight bit X[1nop] compressed field 618 and an eight bit Y[1nop] compressed field 619. {0044} The second uncompressed VLIW packet 602 has a low utilization of useful bits, having only one payload instruction 608 that is surrounded by two groups of NOPs, 606, 607, and 609. In order to provide a more compact representation of the second uncompressed VLIW packet 602 than that provided by the second compressed VLIW packet 603, a specialized NOP instruction is utilized per packet that encodes a layout of instructions within the VLIW packet. This specialized NOP instruction is termed a layout NOP.
As the number of instructions in a VLIW packet increases to accommodate a larger number of functional units, it becomes increasingly advantageous to dedicate a single layout NOP in the VLIW packet rather than separately encoding each individual NOP instruction that may be included. For those VLIW packets without a NOP instruction, no storage space is wasted since a layout NOP is not needed. By placing a layout NOP instruction at the beginning or end of a VLIW packet, layout restoration becomes a task of complexity O(1), as compared to a sequential task of complexity O(n) for an "n" instruction VLIW packet. {0045} The third compressed VLIW packet 604 comprises a set of two X and Y compressed fields representing the four instructions 606-609. The sequence of the first NOP instruction 606, second NOP instruction 607, store instruction 608, and third NOP instruction 609 is represented by a 2nop_LS_1nop instruction. The 2nop_LS_1nop instruction is an example of a new layout NOP instruction introduced to the current ISA. The 2nop_LS_1nop instruction identifies the number of NOP instructions and the placement of the NOP instructions in the combination of NOP instructions and the store instruction. The sequence of the two NOP instructions 606 and 607, store instruction 608, and third NOP instruction 609 is identified by the translation tool 216 of FIG. 2 and compressed to an X[2nop_LS_1nop] compressed field 622 and Y[2nop_LS_1nop] compressed field 623 as shown in FIG. 3. The store instruction 608 is compressed to a single X index and Y index pair, also as shown in FIG. 3. Thus, the second uncompressed VLIW packet 602 is compressed to form the third compressed VLIW packet 604.
For example, the X[2nop_LS_1nop] compressed field 622 and Y[2nop_LS_1nop] compressed field 623 represent an entropy encoded sequence of two ISA NOP instructions, a store instruction, and a third NOP instruction that are expanded when decoded into the first NOP instruction 606 and the second NOP instruction 607, a placeholder for a store instruction, and the third NOP instruction 609. The 32-bit store instruction 608 is represented by an eight bit X[store] compressed field 624 and an eight bit Y[store] compressed field 625 filling in for the placeholder for the store instruction. {0046} FIG. 7 illustrates an exemplary listing 700 of no-operation (NOP) and function instruction combinations supporting a compressed VLIW packet in accordance with the present invention. A layout NOP column 702 contains layout NOP instruction entries encoded to represent a sequence of four instructions. For example, sequences of four instructions comprise combinations of NOP instructions (N), arithmetic logic unit instructions (A), load or store instructions (LS), and control instructions (C). The arithmetic logic unit 1 (ALU1) VLIW position column 704 contains entries encoded to represent an ALU1 instruction or a NOP instruction. The ALU2 VLIW position column 705 contains entries encoded to represent an ALU2 instruction or a NOP instruction. The load or store VLIW position column 706 contains entries encoded to represent a load or a store (LD/ST) instruction or a NOP instruction. The control VLIW position column 707 contains entries encoded to represent a control instruction or a NOP instruction. For example, the line entry 708 is a layout NOP instruction representing four NOP instructions (4N) and having a NOP instruction in each slot column 704-707.
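The slot restoration performed when a layout NOP is decoded, as in listing 700, can be modeled as a table lookup. The mnemonics, table, and instruction strings below are illustrative placeholders of my own, not encodings from the patent; the slot order follows columns 704-707 (ALU1, ALU2, LD/ST, Control):

```python
# Hypothetical layout-NOP table: each entry names which of the four
# VLIW slots hold NOPs; None marks a slot filled by a real instruction.
LAYOUTS = {
    "4N":      ("NOP", "NOP", "NOP", "NOP"),   # line entry 708
    "3N_C":    ("NOP", "NOP", "NOP", None),    # line entry 709
    "2N_LS_N": ("NOP", "NOP", None,  "NOP"),   # line entry 710
}

def expand_packet(layout_nop, payload):
    """Rebuild a full VLIW packet from a layout NOP plus the payload
    instructions that fill the non-NOP slots, in slot order."""
    it = iter(payload)
    return [slot if slot is not None else next(it)
            for slot in LAYOUTS[layout_nop]]

packet = expand_packet("2N_LS_N", ["STORE r1,[r2]"])
# → ['NOP', 'NOP', 'STORE r1,[r2]', 'NOP'], matching packet 604 of FIG. 6
```

Because the layout NOP names every slot, the expansion touches each slot exactly once, which is the O(1)-per-packet restoration property claimed for layout NOPs.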
In another example, the line entry 709 is a layout NOP instruction representing three NOP instructions (3N) and a control instruction (C) and having a NOP instruction in each slot column 704-706 and a control instruction in slot column 707. In a further example, the line entry 710 is a layout NOP instruction representing two NOP instructions (2N) in each slot column 704 and 705, a load or store instruction (LD/ST) in slot column 706, and another NOP instruction (N) in slot column 707. The 2N_LS_N layout NOP instruction of line entry 710 may correspond to the third compressed VLIW packet 604 of FIG. 6. Also, note that the layout NOP instructions of column 702 in FIG. 7 are application independent, and generally depend on the underlying VLIW architecture. {0047} The approach of using a layout NOP instruction may be extended to introduce new custom instructions tailored for a specific application, but decodable to the existing ISA space. For example, if during a program evaluation process, one or more specific VLIW packets are determined to appear in the instruction stream with high frequency, each different VLIW packet may be encoded as a single specialized layout instruction. Such a sequence of frequently used instructions in a VLIW packet is identified by the translation tool 216 of FIG. 2 and compressed to a single X compressed field and a single Y compressed field as shown in FIG. 3. Thus, multiple specialized layout instructions may be included in an ISA using an unused encoding of the ISA for decode purposes and for compressing a sequence of frequently used instructions as described in more detail below. Note that, unlike layout NOP instructions, the multiple specialized layout instructions are application dependent. {0048} FIG.
8 illustrates exemplary VLIW packet formats 800 comprising a third uncompressed VLIW packet 802 including frequently used pairs of instructions and a compressed specialized layout instruction 804 in accordance with an embodiment of the invention. The third uncompressed VLIW packet 802 comprises four 32-bit instructions, such as a compare equal (P1=cmp.eq(r0,0)) instruction 806, a first no-operation (NOP) instruction 807, a second NOP instruction 808, and a branch Return instruction 809. The compare equal instruction 806 and the return instruction 809 comprise a frequently used pair of instructions. Frequency of use may be determined from an analysis of programs running on the processor complex 203 of FIG. 2. Such frequency of use analysis may be determined dynamically in a simulation environment or statically in the compiler and linker 206 of FIG. 2, for example. {0049} The third uncompressed VLIW packet 802 may be represented by a cmp_2nop_return instruction, which is an example of a specialized layout instruction introduced to the current ISA. The cmp_2nop_return instruction is identified by the translation tool 216 of FIG. 2 and compressed to an X[cmp_2nop_return] compressed field 822 and Y[cmp_2nop_return] compressed field 823 as shown in FIG. 3. Thus, the third uncompressed VLIW packet 802 is compressed to form the compressed specialized layout instruction 804. {0050} The compressed specialized layout instruction 804 comprises two X and Y compressed fields representing the four instructions 806-809. The frequently used sequence of the compare equal instruction 806 and the return instruction 809 is represented by an eight bit X[cmp_2nop_return] compressed field 822 and an eight bit Y[cmp_2nop_return] compressed field 823.
The X[cmp_2nop_return] compressed field 822 and the Y[cmp_2nop_return] compressed field 823 represent an entropy encoded sequence of the two frequently used ISA instructions that are expanded when decoded into the compare equal instruction 806, the two NOP instructions 807 and 808, and the branch Return instruction 809. {0051} FIG. 9A illustrates a process 900 for compacting a sequence of program instructions in accordance with an embodiment of the present invention. At block 902, instructions from a program instruction stream are received. At block 904, the program instruction stream is examined for a sequence of two or more instructions according to a parameter. An exemplary parameter for a sequence of two 32-bit instructions may be a 64-bit pattern comprising the sequence of two instructions. In another embodiment, a parameter for a sequence of two instructions may be a sequence of two assembler instruction mnemonics, each representing an instruction in an ISA. Such a sequence of frequently used instructions in a VLIW packet may be identified by a compiler, such as compiler 206 of FIG. 2, or a translation tool, such as the translation tool 216 of FIG. 2. As instructions are received from the program instruction stream, two instructions at a time may then be compared to the parameter to indicate whether the sequence of two instructions has been found. For example, a first parameter may indicate a sequence of two NOP instructions. A second parameter may indicate a sequence of three NOP instructions. The parameter may also indicate an entry in a list of specific instruction sequences, such as the exemplary listing 700 of NOP and function instruction combinations. Further examples of parameters may be determined on an application basis, such as parameters set to indicate frequently used sequences of instructions, as described with regard to FIG.
8. {0052} At decision block 906, a determination is made whether a current sequence of two or more instructions has been found to meet the parameter. If the current sequence of two or more instructions does not meet the parameter, the process 900 returns to block 904. If the current sequence of two or more instructions does meet the parameter, the process 900 proceeds to block 908. At block 908, the sequence of two or more instructions is replaced by a selected type of layout instruction associated with the parameter. At block 910, the selected type of layout instruction is compressed to an X compressed field and a Y compressed field. The process 900 then returns to block 904. {0053} FIG. 9B illustrates a process 950 for decoding a compressed value representing a sequence of program instructions in accordance with an embodiment of the invention. At block 952, compressed instructions from a compressed instruction stream are received. At block 954, a received X compressed field and Y compressed field are decompressed, for example to a selected type of layout instruction. The process 950 is repeated to decompress each received compressed instruction. At block 956, the exemplary selected type of layout instruction is replaced by two or more instructions according to a decoding of the layout instruction. At block 958, the two or more instructions are executed, which completes the process 950 for the two or more instructions. {0054} FIG. 10 illustrates an exemplary decompression state diagram 1000 in accordance with an embodiment of the invention. The decompression state diagram 1000 illustrates states that a compressed specialized layout instruction 1002, such as the compressed specialized layout instruction 804, enters to determine decompressed instructions 1005 for execution on a processor pipeline. FIG.
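Blocks 904-908 of process 900 amount to a longest-match scan over the instruction stream. The sketch below is a minimal software model under my own assumptions: the parameter is a tuple of assembler mnemonics (the second embodiment above), and the mnemonics and layout names are illustrative, not from the patent:

```python
# Hypothetical parameters mapping mnemonic sequences to layout instructions.
PARAMS = {
    ("NOP", "NOP"): "2NOP_LAYOUT",
    ("NOP", "NOP", "NOP"): "3NOP_LAYOUT",
}

def compact(stream):
    """Replace each parameter-matching sequence with its layout
    instruction (block 908), preferring the longest match first."""
    out, i = [], 0
    while i < len(stream):
        for seq, layout in sorted(PARAMS.items(), key=lambda p: -len(p[0])):
            if tuple(stream[i:i + len(seq)]) == seq:
                out.append(layout)      # sequence found: substitute layout
                i += len(seq)
                break
        else:
            out.append(stream[i])       # no parameter matched this position
            i += 1
    return out

compact(["ADD", "NOP", "NOP", "JUMP"])
# → ['ADD', '2NOP_LAYOUT', 'JUMP']
```

Block 910 would then compress each emitted layout instruction to an X/Y field pair exactly as any other instruction, and process 950 inverts the substitution during decode.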
10 shows the compressed specialized layout instruction 1002 in the memory hierarchy 1006 comprising, for example, main memory 222, an L2 cache 224, and an L1 cache 226. The processor core 228 comprises a compressed instruction decoder 1008 and execution pipeline 234. A fetch operation 1007 retrieves the compressed specialized layout instruction 1002 from the memory hierarchy 1006 to the compressed instruction decoder 1008. The compressed instruction decoder 1008 is configured to use the X[cmp_2nop_return] 1003 as an X index to access the X memory 1010 for an X bit pattern and the Y[cmp_2nop_return] 1004 as a Y index to access the Y memory 1011 for a Y bit pattern. An appropriate mix mask (MM) 1012 is applied in a combiner 1014 which is configured to combine the X bit pattern with the Y bit pattern according to the MM 1012 and provide a translated value during a fetch operation 1016 to the processor core 228. For example, the translated value may be a 32-bit cmp_2nop_return instruction 1018. A decoder 230 is operable to decode the 32-bit cmp_2nop_return instruction 1018 in a decode operation 1020 and provide the decoded output as an uncompressed VLIW packet 1005 to the execution pipeline 234. The uncompressed VLIW packet 1005 comprises a compare equal instruction 1022, a first NOP instruction 1023, a second NOP instruction 1024, and a return instruction 1025. The two NOP instructions 1023 and 1024 are inserted as part of the decode operation 1020, thereby not requiring storage area for these two instructions in the memory hierarchy 1006. An ALU1 execution unit 1032, an ALU2 execution unit 1033, a load/store (LD/ST) execution unit 1034, and a control execution unit 1035 are each configurable to execute the corresponding instructions 1022-1025.
{0055} The methods described in connection with the embodiments disclosed herein may be embodied in a combination of hardware and software, the software being a program or sequence of computer-readable instructions stored in a non-transitory computer-readable storage medium and executable by a processor. The program or sequence of computer-readable instructions may reside in random access memory (RAM), flash memory, read only memory (ROM), electrically programmable read only memory (EPROM), hard disk, a removable disk, tape, compact disk read only memory (CD-ROM), or any other form of storage medium known in the art or to be devised in the future. A storage medium may be coupled to the processor such that the processor can read information from, and in some cases write information to, the storage medium. The storage medium coupling to the processor may be a direct coupling integral to a circuit implementation or may utilize one or more interfaces, supporting direct accesses or data streaming using downloading techniques. {0056} While the invention is disclosed in the context of illustrative embodiments for use in processors, it will be recognized that a wide variety of implementations may be employed by persons of ordinary skill in the art consistent with the above discussion and the claims which follow below. For example, in an alternative embodiment, the decoder 230 of FIG. 2 may be placed after a level 2 (L2) cache in a system where the main memory and L2 cache store compressed instructions and a level 1 (L1) cache stores uncompressed instructions. In such a system the main memory and L2 cache would also store compressed sequences of two or more instructions.
Dynamic tag compare circuits employing P-type Field-Effect Transistor (PFET)-dominant evaluation circuits for reduced evaluation time, and thus increased circuit performance, are provided. A dynamic tag compare circuit may be used or provided as part of searchable memory, such as a register file or content-addressable memory (CAM), as non-limiting examples. The dynamic tag compare circuit includes one or more PFET-dominant evaluation circuits comprised of one or more PFETs used as logic to perform a compare logic function. The PFET-dominant evaluation circuits are configured to receive and compare input search data to a tag(s) (e.g., addresses or data) contained in a searchable memory to determine if the input search data is contained in the memory. The PFET-dominant evaluation circuits are configured to control the voltage/value on a dynamic node in the dynamic tag compare circuit based on the evaluation of whether the received input search data is contained in the searchable memory.
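The dynamic-node behavior described above can be sketched as a small behavioral model. This is an abstraction of my own, assuming one of the polarities the claims cover: the node is pre-discharged low by the NFET-dominant circuit, and each PFET-dominant stage charges it to the supply voltage only when its search bit mismatches its stored tag bit, so the node ends low only on a full match:

```python
# Behavioral sketch of a dynamic tag compare (assumed mismatch-charges
# polarity; the claims also cover the opposite convention).
def evaluate(search_bits, stored_bits):
    node = 0                  # pre-discharge phase: NFETs pull the node to ground
    for s, t in zip(search_bits, stored_bits):
        if s != t:            # a mismatch turns a PFET pull-up path on,
            node = 1          # charging the evaluation node to the supply rail
    return node               # 0 = tag hit, 1 = tag miss

evaluate([1, 0, 1, 1], [1, 0, 1, 1])   # matching tag: node stays low
```

In a CAM or register file, one such evaluation would run per stored tag in parallel, with a keeper circuit holding whichever level the node settles at through the rest of the evaluation phase.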
1.A dynamic label comparison system, which includes:N-type field effect transistor FET NFET-dominant pre-discharge circuit, which is coupled to the evaluation node, the NFET-dominant pre-discharge circuit configured to pre-discharge the evaluation node during the pre-discharge phase;A first PFET-dominant evaluation circuit coupled to a first memory bit cell in a memory, the first PFET-dominant evaluation circuit comprising: at least one first PFET coupled to the evaluation node; at least one first Search data input configured to receive at least one first input search data; and at least one first storage data input configured to receive at least one first input storage data in the first memory bit cell; andA second PFET-dominant evaluation circuit coupled to a second memory bit cell in the memory, the second PFET-dominant evaluation circuit comprising: at least one second PFET coupled to the evaluation node; at least one A second search data input configured to receive at least one second input search data; and at least one second storage data input configured to receive at least one second input storage data in the second memory bit unit;The first PFET-dominant evaluation circuit is configured to compare the at least one first input search data with the at least one first input storage data in the first memory bit cell in the evaluation phase. 
Evaluate node charging; andThe second PFET-dominant evaluation circuit is configured to compare in the evaluation phase based on the comparison of the at least one second input search data with the at least one second input storage data in the second memory bit cell Charging the evaluation node;Wherein the first PFET-dominant evaluation circuit and the second PFET-dominant evaluation circuit are coupled to a supply voltage node that receives a supply voltage;The first PFET-dominant evaluation circuit is configured to charge the evaluation node to all points in the evaluation phase based on the comparison of the at least one first input search data and the at least one first input storage data The supply voltage; andThe second PFET-dominant evaluation circuit is configured to charge the evaluation node in the evaluation phase based on the comparison of the at least one second input search data and the at least one second input stored data.述Supply voltage.2.The dynamic label comparison system according to claim 1, whereinThe first PFET-dominant evaluation circuit is configured to not charge the evaluation node during the evaluation phase if the at least one first input search data matches the at least one first input storage data; andThe second PFET-dominant evaluation circuit is configured to not charge the evaluation node in the evaluation phase if the at least one second input search data matches the at least one second input storage data.3.The dynamic label comparison system according to claim 1, whereinThe first PFET-dominant evaluation circuit is configured to charge the evaluation node in the evaluation phase if the at least one first input search data does not match the at least one first input storage data; andThe second PFET-dominant evaluation circuit is configured to charge the evaluation node in the evaluation phase if the at least one second input search data does not match the at least one second input storage data.4.The dynamic label comparison system 
according to claim 1, whereinThe first PFET-dominant evaluation circuit is configured to charge the evaluation node in the evaluation phase if the at least one first input search data matches the at least one first input storage data; andThe second PFET-dominant evaluation circuit is configured to charge the evaluation node in the evaluation phase if the at least one second input search data matches the at least one second input storage data.5.The dynamic label comparison system according to claim 1, whereinThe first PFET-dominant evaluation circuit is configured to not charge the evaluation node during the evaluation phase if the at least one first input search data does not match the at least one first input storage data; andThe second PFET-dominant evaluation circuit is configured to not charge the evaluation node during the evaluation phase if the at least one second input search data does not match the at least one second input storage data.6.The dynamic label comparison system according to claim 1, whereinThe at least one first PFET is composed of a first PFET and a second PFET, the first PFET includes the at least one first search data input configured to receive the at least one first input search data, and the The second PFET includes the at least one first stored data input configured to receive the at least one first input stored data; andThe at least one second PFET is composed of a third PFET and a fourth PFET, the third PFET includes the at least one second search data input configured to receive the at least one second input search data, and the The fourth PFET includes the at least one second stored data input configured to receive the at least one second input stored data.7.The dynamic label comparison system according to claim 6, whereinThe first PFET includes a gate coupled to the at least one first search data input configured to receive the at least one first input search data, and the second PFET includes a gate coupled to the at least one 
first stored data input configured to receive the at least one first input stored data;
the third PFET comprises a gate coupled to the at least one second search data input configured to receive the at least one second input search data; and
the fourth PFET comprises a gate coupled to the at least one second stored data input configured to receive the at least one second input stored data.

8. The dynamic tag comparison system of claim 1, wherein:
the at least one first search data input is configured to receive at least one first input search bit, and the at least one first stored data input is configured to receive at least one first input storage bit; and
the at least one second search data input is configured to receive at least one second input search bit, and the at least one second stored data input is configured to receive at least one second input storage bit.

9. The dynamic tag comparison system of claim 8, wherein:
the at least one first PFET comprises:
a first PFET comprising: a first complementary search data input configured to receive at least one first complementary input search bit; and a first true storage data input configured to receive at least one first true input storage bit; and
a second PFET comprising: a first true search data input configured to receive at least one first true input search bit; and a first complementary storage data input configured to receive at least one first complementary input storage bit;
the first PFET-dominant evaluation circuit is configured to charge the evaluation node in the evaluation phase based on the comparison of the at least one first complementary input search bit with the at least one first true input storage bit and the comparison of the at least one first true input search bit with the at least one first complementary input storage bit; and
the at least one second PFET comprises:
a third PFET comprising: a second
complementary search data input configured to receive at least one second complementary input search bit; and a second true storage data input configured to receive at least one second true input storage bit; and
a fourth PFET comprising: a second true search data input configured to receive at least one second true input search bit; and a second complementary storage data input configured to receive at least one second complementary input storage bit;
the second PFET-dominant evaluation circuit is configured to charge the evaluation node in the evaluation phase based on the comparison of the at least one second complementary input search bit with the at least one second true input storage bit and the comparison of the at least one second true input search bit with the at least one second complementary input storage bit.

10. The dynamic tag comparison system of claim 1, wherein the NFET-dominant pre-discharge circuit is configured to pre-discharge the evaluation node during the pre-discharge phase in response to a clock signal.

11. The dynamic tag comparison system of claim 1, wherein the NFET-dominant pre-discharge circuit is configured to pre-discharge the evaluation node to a ground node during the pre-discharge phase.

12. The dynamic tag comparison system of claim 1, wherein the NFET-dominant pre-discharge circuit is comprised of at least one NFET.

13. The dynamic tag comparison system of claim 1, further comprising a keeper circuit coupled to the evaluation node, the keeper circuit configured to store the charge on the evaluation node during the evaluation phase based on the comparison of the at least one first input search data with the at least one first input storage data and the comparison of the at least one second input search data with the at least one second input storage data.

14. The dynamic tag comparison system of claim 1, further configured to generate, on a matching output coupled to the evaluation node, a matching output signal indicating the result of the comparison of the
at least one first input search data with the at least one first input storage data.

15. The dynamic tag comparison system of claim 1, integrated into a system-on-chip (SoC).

16. The dynamic tag comparison system of claim 1, integrated into a device selected from the group consisting of: a set-top box; an entertainment unit; a navigation device; a communication device; a fixed location data unit; a mobile location data unit; a mobile phone; a cellular phone; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; and a portable digital video player.

17. A dynamic tag comparison system, comprising:
means for pre-discharging an evaluation node via an N-type field-effect transistor (FET) (NFET)-dominant pre-discharge circuit during a pre-discharge phase;
a first means, coupled to a first memory bit cell in a memory, for comparing at least one first input search data with at least one first input storage data, comprising:
means for receiving the at least one first input search data;
means for receiving the at least one first input storage data in the first memory bit cell; and
means for charging the evaluation node in an evaluation phase, via at least one first P-type FET (PFET) coupled to the evaluation node, based on the comparison of the at least one first input search data with the at least one first input storage data in the first memory bit cell; and
a second means, coupled to a second memory bit cell in the memory, for comparing at least one second input search data with at least one second input storage data, comprising:
means for receiving the at least one second input search data;
means for receiving the at
least one second input storage data in the second memory bit cell; and
means for charging the evaluation node in the evaluation phase, via at least one second PFET coupled to the evaluation node, based on the comparison of the at least one second input search data with the at least one second input storage data in the second memory bit cell;
wherein the first means for comparing and the second means for comparing are coupled to a supply voltage node that receives a supply voltage;
the first means for comparing is configured to charge the evaluation node to the supply voltage in the evaluation phase based on the comparison of the at least one first input search data with the at least one first input storage data; and
the second means for comparing is configured to charge the evaluation node to the supply voltage in the evaluation phase based on the comparison of the at least one second input search data with the at least one second input storage data.

18. A method of performing a dynamic logic comparison of search data and stored data in a searchable memory, comprising:
pre-discharging an evaluation node during a pre-discharge phase via an N-type field-effect transistor (FET) (NFET)-dominant pre-discharge circuit; and
in an evaluation phase:
receiving at least one first input search data on at least one first search data input in a first PFET-dominant evaluation circuit coupled to a first memory bit cell in the memory, the first PFET-dominant evaluation circuit comprising at least one first PFET coupled to the evaluation node;
receiving at least one first input storage data in the first memory bit cell on at least one first storage data input in the first PFET-dominant evaluation circuit;
comparing the received at least one first input search data with the received at least one first input storage data in the first PFET-dominant evaluation circuit;
receiving at least one second input search data on at least one
second search data input in a second PFET-dominant evaluation circuit coupled to a second memory bit cell in the memory, the second PFET-dominant evaluation circuit comprising at least one second PFET coupled to the evaluation node;
receiving at least one second input storage data in the second memory bit cell on at least one second storage data input in the second PFET-dominant evaluation circuit;
comparing the received at least one second input search data with the received at least one second input storage data in the second PFET-dominant evaluation circuit; and
charging the evaluation node in the evaluation phase to a supply voltage received on a supply voltage node based on the comparison of the received at least one first input search data with the received at least one first input storage data.

19. The method of claim 18, wherein charging the evaluation node comprises charging the evaluation node in the evaluation phase if the at least one first input search data does not match the at least one first input storage data or the at least one second input search data does not match the at least one second input storage data.

20. The method of claim 18, wherein charging the evaluation node comprises not charging the evaluation node in the evaluation phase if the at least one first input search data matches the at least one first input storage data and the at least one second input search data matches the at least one second input storage data.

21. The method of claim 18, wherein charging the evaluation node comprises charging the evaluation node in the evaluation phase if the at least one first input search data matches the at least one first input storage data or the at least one second input search data matches the at least one second input storage data.

22. The method of claim 18, wherein charging the evaluation node comprises not charging the evaluation node in the evaluation phase if the at least one first input search data does not match the at least one first input storage data and the at least one second input
search data does not match the at least one second input storage data.

23. The method of claim 18, comprising:
receiving the at least one first input search data on the at least one first search data input in the at least one first PFET in the first PFET-dominant evaluation circuit;
receiving the at least one first input stored data on the at least one first stored data input in the first PFET-dominant evaluation circuit;
receiving the at least one second input search data on the at least one second search data input in the at least one second PFET in the second PFET-dominant evaluation circuit; and
receiving the at least one second input stored data on the at least one second stored data input in the second PFET-dominant evaluation circuit.

24. The method of claim 18, comprising:
receiving the at least one first input search data comprising at least one first input search bit on the at least one first search data input in the first PFET-dominant evaluation circuit;
receiving the at least one first input storage data comprising at least one first input storage bit on the at least one first storage data input in the first PFET-dominant evaluation circuit;
comparing the received at least one first input search bit with the received at least one first input storage bit in the first PFET-dominant evaluation circuit;
receiving the at least one second input search data comprising at least one second input search bit on the at least one second search data input in the second PFET-dominant evaluation circuit;
receiving the at least one second input storage data comprising at least one second input storage bit on the at least one second storage data input in the second PFET-dominant evaluation circuit;
comparing the received at least one second input search bit with the received at least one second input storage bit in the second PFET-dominant evaluation circuit; and
based on the comparison of the at least one first
input search bit with the at least one first input storage bit and the comparison of the at least one second input search bit with the at least one second input storage bit, charging the evaluation node in the evaluation phase.

25. The method of claim 24, wherein:
receiving the at least one first input storage bit further comprises receiving the at least one first input storage bit on at least one first bit line from the first memory bit cell; and
receiving the at least one second input storage bit further comprises receiving the at least one second input storage bit on at least one second bit line from the second memory bit cell.

26. The method of claim 18, wherein pre-discharging the evaluation node comprises pre-discharging the evaluation node during the pre-discharge phase in response to a clock signal.

27. The method of claim 18, wherein pre-discharging the evaluation node comprises pre-discharging the evaluation node to a ground node during the pre-discharge phase.

28. The method of claim 18, further comprising storing the charge on the evaluation node in a keeper circuit during the evaluation phase based on the comparison of the at least one first input search data with the at least one first input storage data and the comparison of the at least one second input search data with the at least one second input storage data.

29. The method of claim 18, further comprising generating a matching output signal on a matching output coupled to the evaluation node, the matching output signal indicating the comparison of the at least one first input search data with the at least one first input storage data and the comparison of the at least one second input search data with the at least one second input storage data.

30. A memory system, comprising:
a memory comprising a plurality of binary static bit cells, each binary static bit cell configured to store true data bits and complementary data bits and, in response to a search operation, transfer the true data bits to a true bit line and transfer the complementary data bits to
complementary bit lines, each binary static bit cell comprising:
a first inverter comprising a first input coupled to the complementary bit line and a first output coupled to a second input of a second inverter; and
the second inverter comprising the second input coupled to the true bit line and a second output coupled to the first input of the first inverter; and
a P-type field-effect transistor (FET) (PFET)-dominant tag comparison circuit, comprising:
at least one pre-discharge circuit coupled to an evaluation node, the at least one pre-discharge circuit configured to pre-discharge the evaluation node during a pre-discharge phase; and
a plurality of PFET-dominant evaluation circuits each coupled to a memory bit cell in the memory, each PFET-dominant evaluation circuit of the plurality of PFET-dominant evaluation circuits comprising:
at least one PFET coupled to the evaluation node;
a true search data input configured to receive a true input search bit;
a complementary search data input configured to receive a complementary input search bit;
a true storage data input configured to receive a true input storage bit from the true bit line of a binary static bit cell of the plurality of binary static bit cells; and
a complementary storage data input configured to receive a complementary input storage bit from the complementary bit line of the binary static bit cell;
wherein each PFET-dominant evaluation circuit is configured to compare the true input search bit with the true input storage bit and compare the complementary input search bit with the complementary input storage bit; and
each PFET-dominant evaluation circuit of the plurality of PFET-dominant evaluation circuits is configured to charge the evaluation node in an evaluation phase based on a corresponding comparison of the true input search bit with the true input storage bit and the comparison of the complementary input search bit with the complementary input storage bit;
each
PFET-dominant evaluation circuit of the plurality of PFET-dominant evaluation circuits is coupled to a supply voltage node that receives a supply voltage and is configured to charge the evaluation node to the supply voltage during the evaluation phase.

31. The memory system of claim 30, wherein each of the plurality of PFET-dominant evaluation circuits is further configured to not charge the evaluation node in the evaluation phase if the true input search bit matches the true input storage bit.

32. The memory system of claim 30, wherein each of the plurality of PFET-dominant evaluation circuits is further configured to charge the evaluation node in the evaluation phase if the true input search bit does not match the true input storage bit.

33. The memory system of claim 30, wherein each of the plurality of PFET-dominant evaluation circuits is further configured to not charge the evaluation node in the evaluation phase if the true input search bit does not match the true input storage bit.

34. The memory system of claim 30, wherein each of the plurality of PFET-dominant evaluation circuits is further configured to charge the evaluation node in the evaluation phase if the true input search bit matches the true input storage bit.

35. The memory system of claim 30, wherein the at least one PFET in each of the plurality of PFET-dominant evaluation circuits is comprised of a first PFET and a second PFET, the first PFET comprising the true search data input configured to receive the true input search bit, and the second PFET comprising the true storage data input configured to receive the true input storage bit.

36. The memory system of claim 30, wherein the at least one PFET in each PFET-dominant evaluation circuit of the plurality of PFET-dominant evaluation circuits comprises:
a first PFET comprising: the complementary search data input configured to receive the complementary input search bit; and the true storage data
input configured to receive the true input storage bit; and
a second PFET comprising: the true search data input configured to receive the true input search bit; and the complementary storage data input configured to receive the complementary input storage bit.

37. The memory system of claim 30, wherein the at least one pre-discharge circuit is comprised of a single pre-discharge circuit configured to pre-discharge the evaluation node during the pre-discharge phase.

38. The memory system of claim 30, wherein the at least one pre-discharge circuit comprises at least one N-type FET (NFET)-dominant pre-discharge circuit.

39. The memory system of claim 30, further comprising at least one keeper circuit coupled to the evaluation node, the at least one keeper circuit configured to store the charge on the evaluation node during the evaluation phase based on the comparison of the true input search bit with the true input storage bit.

40. The memory system of claim 30, wherein the memory comprises a content addressable memory (CAM).

41. The memory system of claim 30, wherein the memory comprises a register file.

42. The memory system of claim 30, provided in a central processing unit (CPU)-based system.
DYNAMIC TAG COMPARE CIRCUITS EMPLOYING P-TYPE FIELD-EFFECT TRANSISTOR (PFET)-DOMINANT EVALUATION CIRCUITS FOR REDUCED EVALUATION TIME, AND RELATED SYSTEMS AND METHODS

Priority applications

This application claims priority to U.S. Provisional Patent Application No. 62/119,769, filed on February 23, 2015 and entitled "P-TYPE FIELD-EFFECT TRANSISTOR (PFET)-DOMINANT DYNAMIC LOGIC CIRCUITS, AND RELATED SYSTEMS AND METHODS," which is incorporated herein by reference in its entirety. This application also claims priority to U.S. Patent Application No. 14/860,844, filed on September 22, 2015 and entitled "DYNAMIC TAG COMPARE CIRCUITS EMPLOYING P-TYPE FIELD-EFFECT TRANSISTOR (PFET)-DOMINANT EVALUATION CIRCUITS FOR REDUCED EVALUATION TIME, AND RELATED SYSTEMS AND METHODS," which is incorporated herein by reference in its entirety.

Technical field

The technology of the present disclosure relates generally to dynamic logic circuits clocked by a clock signal, and more specifically to improving the speed performance of dynamic logic circuits.

Background

Dynamic logic circuits provide significant performance advantages over static logic circuits, because dynamic logic circuits reduce the transistor gate capacitance involved in logic evaluation. In this regard, for example, conventional processors contain many instances of dynamic logic circuits throughout their performance-critical logic designs to provide faster logic evaluation.

In this regard, FIG. 1 is a circuit diagram of a NAND dynamic logic circuit 100 as an example of a dynamic logic circuit. The NAND dynamic logic circuit 100 precharges the voltage of a dynamic node (DYN) 102 in a precharge phase. When a clock signal (CLK) 108 is low, a P-type field-effect transistor (PFET) 104 in a precharge circuit 106 precharges the dynamic node (DYN) 102 to the voltage Vdd, thereby providing the voltage Vdd at the dynamic node (DYN) 102.
This is because the PFET 104 passes a strong logic '1' (the voltage Vdd), so that the dynamic node (DYN) 102 is charged fully to the voltage Vdd, as opposed to using, for example, an N-type FET (NFET), which could only charge the dynamic node (DYN) 102 to a threshold voltage Vt below the voltage Vdd. Due to an inverter 112, the voltage Vdd at the dynamic node (DYN) 102 converts the voltage of an output node (OUT) 110 to a ground voltage (GND).

Then, once the clock signal 108 goes high in an evaluation phase, the PFET 104 in the precharge circuit 106 is deactivated. The NAND dynamic logic circuit 100 uses N-type FETs (NFETs) 114(1), 114(2) in a pull-down logic circuit 116 to evaluate the logic based on input A and input B, respectively, so that the evaluation is performed quickly in the evaluation phase. If the states of input A and input B are input A = voltage Vdd and input B = voltage Vdd, the NFETs 114(1), 114(2) in the pull-down logic circuit 116 will be active. This causes the series NFETs 114(1), 114(2), 118 to pull the dynamic node (DYN) 102 to the ground voltage (GND) in the evaluation phase, thereby causing the output node (OUT) 110 to transition to the voltage Vdd. Otherwise, if input A = ground voltage GND or input B = ground voltage GND, the voltage of the dynamic node (DYN) 102 remains at the voltage Vdd during the evaluation phase, because a stacked PFET keeper circuit 120 maintains the dynamic node (DYN) 102 at the voltage Vdd. Accordingly, due to the inverter 112, the output node (OUT) 110 correspondingly remains at the ground voltage (GND).

Summary of the invention

An aspect of the present disclosure relates to dynamic tag comparison circuits employing P-type field-effect transistor (PFET)-dominant evaluation circuits for reduced evaluation time. Related systems and methods are also disclosed. As an example, dynamic logic circuits may be provided in a processor to perform logic operations.
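The precharge/evaluate behavior of the NAND dynamic logic circuit 100 described in the background above can be sketched behaviorally as follows. This is an illustrative software model only, not a transistor-level description; the function name and the Boolean signal abstraction are not part of the disclosed circuits.

```python
def nand_dynamic_logic(clk, a, b, dyn):
    """Behavioral sketch of the NAND dynamic logic circuit 100 of FIG. 1.

    Models the precharge and evaluation phases with Boolean signals and
    returns the new dynamic-node (DYN) state and the output (OUT).
    Illustrative only -- not a transistor-level simulation.
    """
    if not clk:
        # Precharge phase: PFET 104 charges DYN to the voltage Vdd (logic '1').
        dyn = True
    elif a and b:
        # Evaluation phase: series NFETs 114(1), 114(2), 118 pull DYN to GND
        # only when both inputs are high.
        dyn = False
    # Otherwise the keeper circuit 120 holds DYN at Vdd.
    out = not dyn  # Inverter 112 drives OUT from DYN.
    return dyn, out
```

The model reflects the NAND truth table: OUT goes high in the evaluation phase only when both inputs are high.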
Dynamic logic circuits are generally faster than their static logic circuit counterparts, because dynamic logic circuits reduce the transistor gate capacitance involved in logic evaluation. Since circuit delay is proportional to output capacitance, the delay of dynamic logic circuits is usually lower than that of static logic circuits. It has been observed that as the size of the technology node shrinks, the PFET drive current (i.e., drive strength) exceeds the N-type FET (NFET) drive current for similarly sized FETs. This is due to the introduction of strained silicon in FET manufacturing, which reduces the effective mass of the charge carriers.

In this regard, in exemplary aspects disclosed herein, dynamic tag comparison circuits employ PFET-dominant evaluation circuits to reduce evaluation time and thus improve circuit performance. The dynamic tag comparison circuit may be used in or as part of a searchable memory (e.g., as non-limiting examples, a register file or a content addressable memory (CAM)). The dynamic tag comparison circuit includes one or more PFET-dominant evaluation circuits comprised of one or more PFETs used as logic to perform a comparison logic function. The one or more PFET-dominant evaluation circuits are configured to receive input search data and compare the input search data with input storage data (for example, a tag address or tag data) contained in the searchable memory to determine whether the input search data is contained in the searchable memory. The PFET-dominant evaluation circuit is configured to control the voltage/value on a dynamic node in the dynamic tag comparison circuit based on the evaluation of whether the received input search data is contained in the searchable memory.
The dynamic tag comparison circuit can provide or further condition the voltage/value on the dynamic node to provide a matching output indicating whether the received input search data is contained in the searchable memory.

In this regard, in one example, since the PFETs in the PFET-dominant evaluation circuit can pass a strong logic '1' voltage/value (i.e., the supply voltage), an NFET-dominant pre-discharge circuit is provided in the dynamic tag comparison circuit. The NFET-dominant pre-discharge circuit is provided to fully discharge the dynamic node in the dynamic tag comparison circuit to a logic '0' voltage/value (for example, a ground voltage), because NFETs can pass a strong logic '0' voltage/value. Thus, if the received input search data evaluated by the PFET-dominant evaluation circuit is contained in the searchable memory, the PFET-dominant evaluation circuit is configured to charge the dynamic node to a logic '1' voltage/value.

In this regard, in an exemplary aspect, a dynamic tag comparison circuit is provided. The dynamic tag comparison circuit includes a pre-discharge circuit coupled to an evaluation node. The pre-discharge circuit is configured to pre-discharge the evaluation node during a pre-discharge phase. The dynamic tag comparison circuit includes a PFET-dominant evaluation circuit that includes: at least one search data input configured to receive at least one input search data; and at least one stored data input configured to receive at least one input storage data. The PFET-dominant evaluation circuit is configured to charge the evaluation node in an evaluation phase based on a comparison of the at least one input search data with the at least one input storage data.

In another exemplary aspect, a dynamic tag comparison circuit is provided. The dynamic tag comparison circuit includes means for pre-discharging an evaluation node during a pre-discharge phase.
The dynamic tag comparison circuit also includes means for comparing at least one input search data with at least one input storage data. The means for comparing includes means for receiving the at least one input search data, means for receiving the at least one input storage data, and means for charging the evaluation node in an evaluation phase based on a comparison of the at least one input search data with the at least one input storage data.

In another exemplary aspect, a method of performing a dynamic logic comparison of search data and stored data in a searchable memory is provided. The method includes pre-discharging an evaluation node during a pre-discharge phase. The method also includes receiving at least one input search data on at least one search data input in a PFET-dominant evaluation circuit. The method also includes receiving at least one input storage data on at least one stored data input in the PFET-dominant evaluation circuit. The method also includes comparing the received at least one input search data with the received at least one input storage data in the PFET-dominant evaluation circuit. The method further includes charging the evaluation node in an evaluation phase based on the comparison of the received at least one input search data with the received at least one input storage data.

In another exemplary aspect, a memory system is provided. The memory system includes a memory comprising a plurality of bit cells, each bit cell configured to store data bits and transfer the data bits to at least one bit line in response to a search operation. The memory system also includes a PFET-dominant tag comparison circuit. The PFET-dominant tag comparison circuit includes at least one pre-discharge circuit coupled to an evaluation node, the at least one pre-discharge circuit configured to pre-discharge the evaluation node during a pre-discharge phase. The PFET-dominant tag comparison circuit also includes a plurality of PFET-dominant evaluation circuits.
Each PFET-dominant evaluation circuit of the plurality of PFET-dominant evaluation circuits includes: at least one search data input configured to receive at least one input search bit; and at least one stored data input configured to receive at least one input storage bit from the at least one bit line of a bit cell of the plurality of bit cells, each PFET-dominant evaluation circuit comparing the at least one input search bit with the at least one input storage bit. Each of the plurality of PFET-dominant evaluation circuits is configured to charge the evaluation node in an evaluation phase based on the comparison of the at least one input search bit with the at least one input storage bit.

Description of the drawings

FIG. 1 is a circuit diagram of an exemplary NAND dynamic logic circuit;

FIG. 2 is a graph illustrating the relative saturation drain currents (IDSAT) of N-type field-effect transistor (NFET) and P-type FET (PFET) technologies as a function of technology node size;

FIG. 3 is a block diagram of an exemplary memory system including a dynamic tag comparison circuit that includes a plurality of PFET-dominant evaluation circuits configured to compare received input data with data stored in a searchable memory in the memory system;

FIG. 4A illustrates more exemplary details of the memory system including the dynamic tag comparison circuit in FIG. 3, including additional exemplary details of the PFET-dominant evaluation circuits provided therein;

FIG. 4B illustrates a detailed view of the dynamic tag comparison circuit in the memory system in FIG. 4A;

FIG. 5 is a flowchart illustrating an exemplary process of the dynamic tag comparison circuit in the memory system in FIGS.
4A and 4B, the dynamic tag comparison circuit performing the comparison logic function in the PFET-dominant evaluation circuits to compare received input search data with input storage data in the searchable memory, so as to determine whether the received input search data is contained in the searchable memory; and

FIG. 6 is a block diagram of an exemplary processor-based system that may include a dynamic tag comparison circuit employing PFET-dominant evaluation circuits according to any of the aspects disclosed herein.

Detailed description

Referring now to the drawings, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

As shown in the graph 200 in FIG. 2, it has been observed that as the size of the technology node shrinks, the PFET drive current (i.e., drive strength) exceeds the NFET drive current for similarly sized FETs. This is due to the introduction of strained silicon in FET manufacturing, which reduces the effective mass of the charge carriers. As illustrated in FIG. 2, the technology node size on the X axis 202 is in nanometers (nm). On the Y axis 204 is the ratio of the saturation drain current (IDSAT,N) of an NFET to the saturation drain current (IDSAT,P) of a PFET. A ratio line 206 shows the ratio of IDSAT,N to IDSAT,P as a function of the technology node size (in nm). As shown by the ratio line 206 in FIG. 2, PFET drive strength increases relative to similarly sized NFETs as the technology node size decreases. At point 208, the ratio line 206 crosses a ratio of NFET drive current to PFET drive current of 1.0. Beyond this point, in this example, the drive strength of a PFET is greater than the drive strength of a similarly sized NFET.

In this regard, a dynamic logic circuit is a circuit that uses FETs to evaluate logic conditions.
As an example, dynamic logic circuits may be provided in a processor to perform logic operations. A dynamic logic circuit may be faster than its static logic counterpart because the dynamic logic circuit reduces the transistor gate capacitance involved during logic evaluation. Since circuit delay is proportional to output capacitance, the delay of a dynamic logic circuit is usually lower than that of static logic. It has been observed that as technology node size shrinks, the PFET drive current (i.e., drive strength) exceeds the NFET drive current for similarly sized FETs. This is due to the introduction of strained silicon in the manufacture of FETs, which reduces the effective mass of charge carriers and thereby increases their effective mobility. As shown in the saturation drive current (IDSAT) equation below, an increase in the effective mobility of charge carriers results in an increase in the saturation drive current (IDSAT):

IDSAT = (1/2) μ Cox (W/L) (VGS − VTH)²

where:

IDSAT is the saturation drive current,
'μ' is the effective mobility of charge carriers,
'W' is the gate width,
'L' is the gate length,
'Cox' is the capacitance of the oxide layer,
'VGS' is the gate-to-source voltage, and
'VTH' is the threshold voltage.

Strained silicon in FET manufacturing causes the effective mobility of holes to exceed the effective mobility of electrons. For this reason, PFET IDSAT is significantly improved compared to NFET IDSAT. Therefore, based on this observation, a dynamic logic circuit can employ a PFET-dominant evaluation circuit to reduce evaluation time and therefore improve circuit performance. The PFET-dominant evaluation circuit contains one or more PFET circuits. Each PFET circuit is configured to evaluate a logic condition based on one or more data inputs.
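To make the mobility dependence concrete, the following sketch (illustrative Python; all parameter values are placeholders chosen for illustration and are not taken from this disclosure) evaluates the IDSAT equation above and shows that raising the effective carrier mobility μ, as strained silicon does for holes, raises IDSAT proportionally:

```python
def i_dsat(mu, c_ox, w, l, v_gs, v_th):
    """IDSAT = (1/2) * mu * Cox * (W/L) * (VGS - VTH)^2, per the equation above.

    All parameter values passed below are placeholders for illustration,
    not values taken from this disclosure.
    """
    return 0.5 * mu * c_ox * (w / l) * (v_gs - v_th) ** 2

# With every other term held fixed, increasing the effective mobility 'mu'
# (the effect attributed to strained silicon) increases IDSAT linearly.
baseline = i_dsat(mu=1.0, c_ox=1.0, w=2.0, l=1.0, v_gs=1.0, v_th=0.3)
strained = i_dsat(mu=1.5, c_ox=1.0, w=2.0, l=1.0, v_gs=1.0, v_th=0.3)
```

Here the 1.5x mobility factor yields a 1.5x larger IDSAT, which is the mechanism by which the PFET drive strength overtakes that of a similarly sized NFET at small technology nodes.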
Therefore, the PFET-dominant evaluation circuit can reduce the evaluation time in a dynamic logic circuit, and thus improve circuit performance, based on the PFET circuit drive current (i.e., drive strength).

In the examples discussed below, since the PFETs in the PFET-dominant evaluation circuit can pass a strong logic '1' voltage/value, an NFET-dominant pre-discharge circuit can be provided in the dynamic logic circuit that employs the PFET-dominant evaluation circuit. The NFET-dominant pre-discharge circuit is provided to discharge the dynamic node in the dynamic logic circuit to a logic '0' voltage/value, because an NFET can pass a strong logic '0' voltage/value. The PFET-dominant evaluation circuit can then be configured to charge the dynamic node to a logic '1' voltage/value, based on the evaluation result, through its ability to pass a strong logic '1' voltage/value.

In this regard, FIG. 3 is a block diagram of an exemplary dynamic tag comparison system 300 employing a plurality of dynamic tag comparison circuits 302(0) to 302(N) as the dynamic logic circuit type. In this example, the dynamic tag comparison system 300 is provided in a memory system 304 that includes a searchable memory 306. The memory system 304 may be provided in a central processing unit (CPU)-based system 308 or another processor, including, as a non-limiting example, a system-on-chip (SoC) 310. As a non-limiting example, the searchable memory 306 may be a register file or a content-addressable memory (CAM). N+1 dynamic tag comparison circuits 302(0) to 302(N) are provided in the dynamic tag comparison system 300, so that input search data 312 having a bit width of N+1 and comprising N+1 input search bits 314(0) to 314(N) can be received on the corresponding search data inputs 316(0) to 316(N).
The N+1 bits of the input storage data 318, comprising input storage bits 320(0) to 320(N), are received on the corresponding storage data inputs 322(0) to 322(N) of the dynamic tag comparison circuits 302(0) to 302(N). The input storage bits 320(0) to 320(N) are stored in the corresponding tag cells 324(0) to 324(N) in the searchable memory 306. It should be noted that although only one (1) row of the tag cells 324(0) to 324(N) is shown, the searchable memory 306 may contain multiple rows of tag cells 324(0) to 324(N). The dynamic tag comparison circuits 302(0) to 302(N) are configured to compare, in a bitwise manner, the corresponding input search bits 314(0) to 314(N) with the input storage bits 320(0) to 320(N) of a selected row of tag cells 324(0) to 324(N) to determine whether the input search data 312 is contained in the searchable memory 306.

Continuing to refer to FIG. 3, each of the dynamic tag comparison circuits 302(0) to 302(N) has a comparison output 326(0) to 326(N). The comparison outputs 326(0) to 326(N) provide corresponding comparison output signals 328(0) to 328(N) from the dynamic tag comparison circuits 302(0) to 302(N), which indicate whether the corresponding input search bits 314(0) to 314(N) match the corresponding input storage bits 320(0) to 320(N). The comparison output signals 328(0) to 328(N) are provided to additional logic in the form of AND gates 330(1) to 330(3), which in this example are configured to evaluate whether all of the corresponding input search bits 314(0) to 314(N) match the corresponding input storage bits 320(0) to 320(N). If, in this example, all of the corresponding input search bits 314(0) to 314(N) match the corresponding input storage bits 320(0) to 320(N), a match output signal 332 (e.g., logic '1') is generated on the match output 334, indicating that the input search data 312 is contained in the searchable memory 306.
The input search data 312 can be considered a "tag." If, in this example, any of the corresponding input search bits 314(0) to 314(N) does not match the corresponding input storage bit 320(0) to 320(N), a match output signal 332 (e.g., logic '0') is generated on the match output 334, indicating that the input search data 312 is not contained in the searchable memory 306.

As will be discussed in more detail below with respect to FIGS. 4A and 4B, in this example the dynamic tag comparison circuits 302(0) to 302(N) in the dynamic tag comparison system 300 in FIG. 3 each employ a PFET-dominant evaluation circuit. The PFET-dominant evaluation circuit is configured to perform a logic comparison evaluation between the corresponding input search bits 314(0) to 314(N) and the corresponding input storage bits 320(0) to 320(N) stored in the searchable memory 306. In this way, as technology node size shrinks, the PFET drive current (i.e., drive strength) in the PFET-dominant evaluation circuits in the dynamic tag comparison circuits 302(0) to 302(N) allows the PFET-dominant evaluation circuits to perform the comparison logic function faster than comparable NFET-based evaluation circuits with similarly sized FETs.

In this regard, FIGS. 4A and 4B illustrate additional exemplary details of the memory system 304 in FIG. 3 to further illustrate the dynamic tag comparison circuits 302(0) to 302(N) and the PFET-dominant evaluation circuits included therein. FIG. 4A illustrates additional exemplary details of the memory system 304 in FIG. 3. FIG. 4B is a detailed diagram of the dynamic tag comparison circuits 302(0) to 302(N) in the memory system 304. The dynamic tag comparison circuits 302(0) to 302(N) include the PFET-dominant evaluation circuits provided therein to evaluate the comparison of the corresponding input search bits 314(0) to 314(N) with the corresponding input storage bits 320(0) to 320(N).
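The bitwise comparison and AND-gate reduction described above can be sketched in software as follows. This is an abstract model only; the function names are illustrative assumptions, and the sketch deliberately ignores circuit-level details such as pre-discharge and keeper behavior:

```python
def compare_bit(search_bit, stored_bit):
    """Model of one comparison output signal 328(n): logic '1' only on a
    per-bit match between the search bit and the stored tag bit."""
    return 1 if search_bit == stored_bit else 0

def match_output(search_bits, stored_bits):
    """Model of the AND-gate reduction 330: the match output signal 332 is
    logic '1' only if every search bit matches its corresponding stored bit."""
    if len(search_bits) != len(stored_bits):
        raise ValueError("search and stored data must have the same bit width")
    result = 1
    for s, d in zip(search_bits, stored_bits):
        result &= compare_bit(s, d)  # AND together all per-bit comparisons
    return result
```

A logic '1' result corresponds to the input search data being contained in the selected row of the searchable memory; a single differing bit forces the result to logic '0'.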
FIGS. 4A and 4B will be discussed in conjunction with each other.

As shown in FIG. 4A, additional exemplary details of a row of tag cells 324(0) to 324(N) provided in the searchable memory 306 are shown. In this example, the tag cells 324(0) to 324(N) are provided as static random access memory (SRAM) bit cells 400(0) to 400(N) (also referred to as "bit cells 400(0) to 400(N)"). In this example, as a non-limiting example, bit cell 400(0) is used as representative of the other bit cells 400(1) to 400(N), and bit cell 400(0) is provided in a six-transistor (6-T) configuration. Two cross-coupled inverters 402(0)(T) and 402(0)(C) are arranged in a storage circuit 404 in the bit cell 400(0) to store a true storage bit 406(0)(T) and a complementary storage bit 406(0)(C). This allows differential sensing of the data stored in the bit cell 400(0) to make read operations more accurate. Two (2) access transistors 408(0)(T) and 408(0)(C) are also provided in the bit cell 400(0). The two (2) access transistors 408(0)(T), 408(0)(C) are activated through a word line (WL) 410 to select the desired bit cells 400(0) to 400(N) for read operations and write operations. In this example of the dynamic tag comparison system 300, read operations are performed on the bit cells 400(0) to 400(N). The access transistors 408(0)(T), 408(0)(C) are configured to provide the true storage bit 406(0)(T) and the complementary storage bit 406(0)(C) to the corresponding true bit line 412(0)(T) and complementary bit line 412(0)(C), so as to provide the true storage bit 406(0)(T) and the complementary storage bit 406(0)(C) for each bit cell 400(0) to 400(N) to the corresponding dynamic tag comparison circuits 302(0) to 302(N).

It should be noted that in this example, the access transistors 408(0)(T) to 408(N)(T) and the access transistors 408(0)(C) to 408(N)(C) in the bit cells 400(0) to 400(N) in FIG. 4A are provided as PFETs, which can also provide faster read operations in the bit cells 400(0) to 400(N); however, this faster read operation is not required. As another example, the access transistors 408(0)(T) to 408(N)(T) and the access transistors 408(0)(C) to 408(N)(C) may be NFETs.

Continuing to refer to FIG. 4A, the true storage bits 406(0)(T) to 406(N)(T) for each bit cell 400(0) to 400(N) in the searchable memory 306 are provided, as true input storage bits 320(0)(T) to 320(N)(T), to the corresponding true storage data inputs 322(0)(T) to 322(N)(T) of the dynamic tag comparison circuits 302(0) to 302(N). The complementary storage bits 406(0)(C) to 406(N)(C) for each bit cell 400(0) to 400(N) in the searchable memory 306 are provided, as complementary input storage bits 320(0)(C) to 320(N)(C), to the corresponding complementary storage data inputs 322(0)(C) to 322(N)(C). The true input search bits 314(0)(T) to 314(N)(T) and the complementary input search bits 314(0)(C) to 314(N)(C) are respectively provided to the corresponding true search data inputs 316(0)(T) to 316(N)(T) and complementary search data inputs 316(0)(C) to 316(N)(C) of the dynamic tag comparison circuits 302(0) to 302(N). Each dynamic tag comparison circuit 302(0) to 302(N) includes a PFET-dominant evaluation circuit 414(0) to 414(N), each coupled to an evaluation node 416. The PFET-dominant evaluation circuits 414(0) to 414(N) are each configured to evaluate a comparison logic operation between the true input storage bits 320(0)(T) to 320(N)(T) from the searchable memory 306 and the corresponding complementary input search bits 314(0)(C) to 314(N)(C).
The PFET-dominant evaluation circuits 414(0) to 414(N) are also configured to evaluate a comparison logic operation between the complementary input storage bits 320(0)(C) to 320(N)(C) from the searchable memory 306 and the corresponding true input search bits 314(0)(T) to 314(N)(T). As will be discussed in more detail below, in this example, based on the corresponding evaluation, the PFET-dominant evaluation circuits 414(0) to 414(N) are each configured to charge the evaluation node 416 in the evaluation phase if there is a mismatch between the corresponding true storage and search input bits or between the corresponding complementary storage and search input bits. The PFET-dominant evaluation circuits 414(0) to 414(N) can pass a strong logic '1' voltage/value based on the evaluation result.

Continuing to refer to FIG. 4A, before the PFET-dominant evaluation circuits 414(0) to 414(N) each charge the evaluation node 416 during the evaluation phase to perform their evaluation, the dynamic tag comparison system 300 pre-discharges the evaluation node 416 during a pre-discharge phase. In this regard, the dynamic tag comparison system 300 in FIG. 4A includes a pre-discharge circuit 418. The pre-discharge circuit 418 is coupled between the evaluation node 416 and a ground node (GND). In this example, the pre-discharge circuit 418 is composed of an NFET-dominant pre-discharge circuit 420, which is composed of an NFET 422. The NFET 422 is able to pass a strong logic '0' voltage/value to the evaluation node 416 during the pre-discharge phase. The pre-discharge circuit 418 is configured to be activated to pre-discharge the evaluation node 416 to the voltage of the ground node (GND) (e.g., logic '0' in this example) based on a clock signal 424 that activates the NFET 422 during the pre-discharge phase.
Therefore, since the PFET-dominant evaluation circuits 414(0) to 414(N) are configured to charge the evaluation node 416 in response to a mismatch between an input search bit 314 and an input storage bit 320, an evaluation node 416 that remains at the pre-discharged ground node (GND) voltage indicates that the true input search bits 314(0)(T) to 314(N)(T) match the true input storage bits 320(0)(T) to 320(N)(T), and that the complementary input search bits 314(0)(C) to 314(N)(C) match the complementary input storage bits 320(0)(C) to 320(N)(C).

To further explain the evaluation operation of the PFET-dominant evaluation circuits 414(0) to 414(N) in the corresponding dynamic tag comparison circuits 302(0) to 302(N), FIG. 4B is provided. FIG. 4B contains a detailed view of, for example, the dynamic tag comparison circuit 302(0) to further explain the evaluation phase of the PFET-dominant evaluation circuit 414(0). The explanation of the PFET-dominant evaluation circuit 414(0) is also applicable to the other PFET-dominant evaluation circuits 414(1) to 414(N) in the dynamic tag comparison circuits 302(1) to 302(N).

In this regard, referring to FIG. 4B, the PFET-dominant evaluation circuit 414(0) is composed of a first PFET circuit 426(0)(0) and a second PFET circuit 426(0)(1). The first PFET circuit 426(0)(0) includes a first PFET 428(0)(0) and a second PFET 428(0)(1). The gate (G) of the first PFET 428(0)(0) is the true storage data input 322(0)(T) configured to receive the true input storage bit 320(0)(T). The gate (G) of the second PFET 428(0)(1) is the complementary search data input 316(0)(C) configured to receive the complementary input search bit 314(0)(C). Similarly, the second PFET circuit 426(0)(1) in the PFET-dominant evaluation circuit 414(0) includes a first PFET 430(0)(0) and a second PFET 430(0)(1). The gate (G) of the first PFET 430(0)(0) is the complementary storage data input 322(0)(C) configured to receive the complementary input storage bit 320(0)(C).
The gate (G) of the second PFET 430(0)(1) is the true search data input 316(0)(T) configured to receive the true input search bit 314(0)(T). In this way, the PFET-dominant evaluation circuit 414(0) is configured to compare the true input storage bit 320(0)(T) with the complementary input search bit 314(0)(C). The PFET-dominant evaluation circuit 414(0) is also configured to compare the complementary input storage bit 320(0)(C) with the true input search bit 314(0)(T). For the input storage data 318 stored in the searchable memory 306 to match the input search data 312, there should be a mismatch between the true input storage bit 320(T) and the complementary input search bit 314(C), and vice versa. For example, if the true input storage bit 320(0)(T) is a logic '0' and the complementary input search bit 314(0)(C) is also a logic '0', then the first PFET 428(0)(0) and the second PFET 428(0)(1) will be activated so that the first PFET circuit 426(0)(0) charges the evaluation node 416 to the voltage Vdd, which means that the tag bits do not match. However, if the true input storage bit 320(0)(T) is a logic '0' and the complementary input search bit 314(0)(C) is a logic '1', then the second PFET 428(0)(1) will not be activated, so that the first PFET circuit 426(0)(0) will not charge the evaluation node 416 to the voltage Vdd, which means that the tag bits match. Therefore, if the evaluation node 416 is not charged by any of the PFET-dominant evaluation circuits 414(0) to 414(N) in the dynamic tag comparison system 300, a tag match occurs, meaning that the input search data 312 matches the input storage data 318 of the selected row of tag cells 324(0) to 324(N) in the searchable memory 306.

It should be noted, continuing to refer to FIG. 4B, that even if the true input storage bit 320(0)(T) and the complementary input search bit 314(0)(C) are both logic '1', the evaluation node 416 will still be charged to indicate a tag mismatch.
The first PFET 428(0)(0) and the second PFET 428(0)(1) in the first PFET circuit 426(0)(0) will not be activated to charge the evaluation node 416, because the true input storage bit 320(0)(T) and the complementary input search bit 314(0)(C) are both logic '1' values. However, this means that the complementary input storage bit 320(0)(C) and the true input search bit 314(0)(T) will both be logic '0'. Therefore, this mismatch will cause the first PFET 430(0)(0) and the second PFET 430(0)(1) in the second PFET circuit 426(0)(1) to be activated, so that the second PFET circuit 426(0)(1) in the PFET-dominant evaluation circuit 414(0) charges the evaluation node 416 to the voltage Vdd, which indicates that the tags do not match.

To provide a match output signal 332 indicating whether the input search data 312 matches the input storage data 318 of the selected row of tag cells 324(0) to 324(N) in the searchable memory 306, the dynamic tag comparison system 300 further includes the keeper circuit 432 shown in FIG. 4A and explained in more detail in FIG. 4B. The keeper circuit 432 is configured to maintain, or "keep," the previously pre-discharged ground node (GND) voltage on the evaluation node 416 if there is a match between the true input search bits 314(0)(T) to 314(N)(T) and the true input storage bits 320(0)(T) to 320(N)(T), or a match between the complementary input search bits 314(0)(C) to 314(N)(C) and the complementary input storage bits 320(0)(C) to 320(N)(C). As discussed above, however, if there is a mismatch between the true input search bits 314(0)(T) to 314(N)(T) and the true input storage bits 320(0)(T) to 320(N)(T), or a mismatch between the complementary input search bits 314(0)(C) to 314(N)(C) and the complementary input storage bits 320(0)(C) to 320(N)(C), the PFET-dominant evaluation circuits 414(0) to 414(N) are configured to charge the evaluation node 416 to the voltage Vdd, which indicates a mismatch.
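The charge-on-mismatch behavior of the two PFET circuits described above can be modeled abstractly as follows. In this sketch a PFET is treated simply as a switch that conducts when its gate is at logic '0', and all function names are illustrative assumptions rather than terminology from this disclosure:

```python
def pfet_on(gate_bit):
    """A PFET conducts, passing a strong logic '1', when its gate is logic '0'."""
    return gate_bit == 0

def evaluation_node_charged(store_true, search_true):
    """Abstract model of the PFET-dominant evaluation circuit 414(0).

    Returns True when the evaluation node 416 would be charged to Vdd,
    i.e., when the stored tag bit and the searched tag bit mismatch.
    Complementary bits are derived from the true bits, as in the bit cell.
    """
    store_comp = 1 - store_true
    search_comp = 1 - search_true
    # First PFET circuit 426(0)(0): gates driven by the true stored bit and
    # the complementary search bit; both PFETs must conduct to charge the node.
    first_circuit = pfet_on(store_true) and pfet_on(search_comp)
    # Second PFET circuit 426(0)(1): gates driven by the complementary stored
    # bit and the true search bit.
    second_circuit = pfet_on(store_comp) and pfet_on(search_true)
    return first_circuit or second_circuit
```

On a match (stored value equal to searched value), neither PFET circuit conducts and the evaluation node stays at its pre-discharged logic '0'; on a mismatch, exactly one of the two PFET circuits conducts and charges the node to Vdd.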
Therefore, if it is determined that there is no mismatch, the evaluation node 416, pre-discharged to the ground node (GND) voltage (i.e., logic '0') during the pre-discharge phase, remains pre-discharged. This causes the NAND gate 434 to activate the NFET 436 in the keeper circuit 432, as shown in FIG. 4B. The NFET 436 is activated to continue to pull down the evaluation node 416 to ground in response to an enable signal 440 that activates the NFET 438. The output of the NAND gate 434 provides the match output 334 to provide the match output signal 332 indicating whether the input search bits 314 match the input storage bits 320. In this example, a match output signal 332 that is a logic '1' indicates a match.

FIG. 5 is a flowchart illustrating an exemplary process 500 of the dynamic tag comparison circuits 302(0) to 302(N) in FIGS. 4A and 4B. The dynamic tag comparison circuits use the PFET-dominant evaluation circuits 414(0) to 414(N) to perform a comparison logic function that compares the received input search data 312 (for example, the true input search bits 314(0)(T) to 314(N)(T) and the complementary input search bits 314(0)(C) to 314(N)(C)) with the received input storage data 318 (for example, the true input storage bits 320(0)(T) to 320(N)(T) and the complementary input storage bits 320(0)(C) to 320(N)(C)) to determine whether the received input search data 312 is contained in the searchable memory 306. In this regard, the process 500 first involves the pre-discharge circuit 418 pre-discharging the evaluation node 416 during the pre-discharge phase (block 502). The PFET-dominant evaluation circuits 414(0) to 414(N) receive the input search data 312 on the search data inputs 316 (block 504). The PFET-dominant evaluation circuits 414(0) to 414(N) also receive the input storage data 318 on the storage data inputs 322 (block 506).
The PFET-dominant evaluation circuits 414(0) to 414(N) compare the received input search data 312 with the received input storage data 318 (block 508). The PFET-dominant evaluation circuits 414(0) to 414(N) charge the evaluation node 416 in the evaluation phase based on the comparison of the received input search data 312 with the received input storage data 318. As previously discussed above, in the example of the dynamic tag comparison system 300 in FIGS. 4A and 4B, if there is a mismatch between the received input search data 312 and the received input storage data 318, the PFET-dominant evaluation circuits 414(0) to 414(N) charge the evaluation node 416 (block 510). Note, however, that a PFET-dominant evaluation circuit could alternatively be provided in the dynamic tag comparison system 300 that is configured to charge the evaluation node 416 if there is a match between the received input search data 312 and the received input storage data 318.

The dynamic tag comparison circuit using the PFET-dominant evaluation circuit according to the aspects disclosed herein can be provided in or integrated into any processor-based device. Examples include, but are not limited to, a set-top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player.

In this regard, FIG. 6 illustrates an example of a processor-based system 600 that may employ the dynamic logic circuits 601 according to any of the aspects discussed above.
In this example, the processor-based system 600 includes one or more central processing units (CPUs) 602, each including one or more processors 604. As a non-limiting example, the dynamic logic circuits 601 disclosed herein may be included in the CPU 602 in a translation lookaside buffer (TLB) for performing tag comparisons for virtual-to-real address translation. The CPU 602 may have a cache memory 606 coupled to the processor(s) 604 for rapid access to temporarily stored data. As a non-limiting example, the dynamic logic circuits 601 disclosed herein may be included in the cache memory 606 for cache entry tag comparison operations. The CPU 602 is coupled to the system bus 608, which can intercouple the master and slave devices included in the processor-based system 600. As is well known, the CPU 602 communicates with these other devices by exchanging address, control, and data information over the system bus 608. For example, the CPU 602 may transmit a bus transaction request to the memory controller 610 in the memory system 612, as an example of a slave device. Although not illustrated in FIG. 6, multiple system buses 608 may be provided, wherein each system bus 608 constitutes a different fabric. In this example, the memory controller 610 is configured to provide memory access requests to the memory array 614 in the memory system 612. As a non-limiting example, the dynamic logic circuits 601 disclosed herein may be included in the memory system 612 (e.g., in the memory controller 610) for performing lookups on data in the memory array 614.

Other devices may be connected to the system bus 608. As illustrated in FIG. 6, these devices may include, as examples, the memory system 612, one or more input devices 616, one or more output devices 618, one or more network interface devices 620, and one or more display controllers 622.
The input devices 616 may include any type of input device, including, but not limited to, input keys, switches, voice processors, etc. The output devices 618 may include any type of output device, including, but not limited to, audio, video, other visual indicators, etc. The network interface devices 620 may be any devices configured to allow exchange of data with a network 624. The network 624 may be any type of network, including, but not limited to, a wired or wireless network, a private or public network, a local area network (LAN), a wide local area network (WLAN), and the Internet. The network interface devices 620 can be configured to support any type of communications protocol desired.

The CPU 602 may also be configured to access the display controller(s) 622 over the system bus 608 to control information sent to one or more displays 626. The display controller(s) 622 sends information to the display(s) 626 to be displayed via one or more video processors 628, which process the information to be displayed into a format suitable for the display(s) 626. The display(s) 626 may include any type of display, including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, etc.

It should be noted that the PFETs and NFETs employed in the present disclosure may include P-type metal oxide semiconductor (MOS) FETs (PMOSFETs) and N-type MOSFETs (NMOSFETs). The PFETs and NFETs discussed herein may include oxide layers of types other than metal oxide.
It should also be noted that any auxiliary circuit disclosed herein can be provided for either or both of the bit line and the bit line complement of the bit cells disclosed herein.

Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, as instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or as a combination of both. As examples, the master and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip. The memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Those skilled in the art may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
The processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.

It should also be noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowchart may be subject to numerous different modifications, as will be readily apparent to one of skill in the art.
Those skilled in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Techniques and mechanisms to provide a cache of cache tags in determining an access to cached data. In an embodiment, a tag storage stores a first set including tags associated with respective data locations of a cache memory. A cache of cache tags stores a subset of the tags stored by the tag storage. Where a tag of the first set is to be stored to the cache of cache tags, all tags of the first set are stored to a first portion. In another embodiment, any storage of tags of the first set to the cache of cache tags includes storage of the tags of the first set to only the first portion of the cache of cache tags. A replacement table is maintained for use in evicting or replacing cached tags based on an indicated level of activity for a set of the cache of cache tags.
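The whole-set, dedicated-portion storage rule described above can be sketched in software as follows. The class and method names are illustrative assumptions, not terminology from the disclosure, and the odd/even set-to-portion mapping follows the arrangement recited later in the claims:

```python
class CacheOfCacheTags:
    """Sketch of the dedicated-portion storage rule: tags of a given set of
    the tag storage are only ever stored, as a whole set, into the one
    portion dedicated to that set.

    Illustrative assumptions: odd sets are dedicated to portion 0 and even
    sets to portion 1; replacement-table maintenance is omitted.
    """

    def __init__(self):
        # portion index -> {set index: list of tags cached for that set}
        self.portions = [{}, {}]

    def _portion_for(self, set_index):
        return 0 if set_index % 2 == 1 else 1

    def fill(self, set_index, tags):
        """Storing any tag of a set stores *all* tags of that set, and only
        into the portion dedicated to that set. Returns the portion used."""
        portion = self._portion_for(set_index)
        self.portions[portion][set_index] = list(tags)
        return portion

    def lookup(self, set_index, tag):
        """True if 'tag' is currently cached for 'set_index'."""
        cached = self.portions[self._portion_for(set_index)]
        return set_index in cached and tag in cached[set_index]
```

Because each set is dedicated to exactly one portion, a lookup only ever probes that portion, and filling one set can never evict cached tags belonging to a set dedicated to the other portion.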
CLAIMS

What is claimed is:

1. An apparatus comprising:
a cache memory to couple to a processor;
a tag storage, coupled to the cache memory, to store a first set including first tags each associated with a respective data location of the cache memory;
a cache of cache tags to store a subset of tags stored at the tag storage, the cache of cache tags including a first portion and a second portion; and
a controller coupled to the cache of cache tags, the controller to update the subset of tags based on memory access requests from the processor, wherein in response to any determination that a tag of the first set is to be stored to the cache of cache tags, the controller to store all tags of the first set to the first portion, wherein any storage of tags of the first set to the cache of cache tags by the controller includes storage of the tags of the first set to only the first portion.

2. The apparatus of claim 1, the tag storage further to store a second set including second tags each associated with a respective data location stored within the cache memory.

3. The apparatus of claim 2, wherein in response to any determination that a tag of the second set is to be stored to the cache of cache tags, the controller to store all tags of the second set to the second portion, wherein any storage of tags of the second set to the cache of cache tags by the controller includes storage of the tags of the second set to only the second portion.

4. The apparatus of any of claims 1 and 2, wherein, of all sets of tags of the tag storage, the first portion to store only tags of the first set.

5. The apparatus of any of claims 1, 2 and 4, wherein the tag storage comprises:
a first plurality of sets including the first set; and
a second plurality of sets including the second set;
wherein, of the first plurality of sets and the second plurality of sets, the first portion is dedicated to only the first plurality of sets and the second portion is dedicated to only the second plurality of sets.

6.
The apparatus of claim 5, wherein the first plurality of sets correspond to odd sets of the cache memory and wherein the second plurality of sets correspond to even sets of the cache memory.

7. The apparatus of any of claims 1, 2 and 4, wherein the cache of cache tags and the processor are located on a first die.

8. The apparatus of claim 7, wherein the tag storage is located on a second die coupled to the first die.

9. A method comprising:
storing at a tag storage a first set including first tags each associated with a respective data location of a cache memory coupled to a processor;
storing at a cache of cache tags a subset of tags stored at the tag storage, the cache of cache tags including a first portion and a second portion; and
updating the subset of tags based on memory access requests from the processor, wherein in response to any determination that a tag of the first set is to be stored to the cache of cache tags, all tags of the first set are stored to the first portion, wherein any storage of tags of the first set to the cache of cache tags includes storage of the tags of the first set to only the first portion.

10. The method of claim 9, further comprising:
storing at the tag storage a second set including second tags each associated with a respective data location stored within the cache memory; and
in response to any determination that a tag of the second set is to be stored to the cache of cache tags, storing all tags of the second set to the second portion.

11. The method of claim 10, wherein any storage of tags of the second set to the cache of cache tags includes storage of the tags of the second set to only the second portion.

12. The method of any of claims 9 and 10, wherein, of all sets of tags of the tag storage, the first portion is to store only tags of the first set.

13.
The method of any of claims 9, 10 and 12, wherein the tag storage comprises:
a first plurality of sets including the first set; and
a second plurality of sets including the second set;
wherein, of the first plurality of sets and the second plurality of sets, the first portion is dedicated to only the first plurality of sets and the second portion is dedicated to only the second plurality of sets.

14. The method of claim 13, wherein the first plurality of sets correspond to odd sets of the cache memory and wherein the second plurality of sets correspond to even sets of the cache memory.

15. The method of any of claims 9, 10 and 12, wherein the cache of cache tags and the processor are located on a first die.

16. The method of claim 15, wherein the tag storage is located on a second die coupled to the first die.

17. An apparatus comprising:
a cache of cache tags to store a subset of tags stored at a tag storage, the subset of tags each associated with a respective data location of a cache memory; and
a controller including circuitry to associate a first entry of a replacement table with a first set of tags stored to the cache of cache tags, including the controller to set a first variable of the first entry to an initial value of a pre-determined plurality of values,
wherein, if a first memory access request comprises a tag corresponding to one of the first set of tags, then the controller further to change the first variable to another of the pre-determined plurality of values to indicate an increase of a level of activity, otherwise, the controller to change the first variable to another of the pre-determined plurality of values to indicate a decrease of the level of activity; and
wherein, in response to a failure to identify any tag of the cache of cache tags which matches a tag of a second memory access request, the controller further to select a set of tags to evict from the cache of cache tags, including the controller to search the replacement table for a
variable which is equal to the initial value.

18. The apparatus of claim 17, wherein in response to any determination that a tag of the first set is to be stored to the cache of cache tags, the controller to store all tags of the first set to a first portion of the cache of cache tags.

19. The apparatus of any of claims 17 and 18, wherein the cache of cache tags and a processor to provide the first memory access request are located on a first die.

20. The apparatus of claim 19, wherein the tag storage is located on a second die coupled to the first die.

21. The apparatus of any of claims 17, 18 and 19, wherein the cache of cache tags contains one or more of the most recently used tags stored in the tag storage.

22. A method comprising:
in response to storage of a first set of tags to a cache of cache tags, associating a first entry of a replacement table with the first set of tags, including setting a first activity variable of the first entry to an initial value of a pre-determined plurality of values, wherein a tag storage stores tags which are each associated with a respective data location of a cache memory, and wherein the cache of cache tags stores a subset of tags stored at the tag storage;
if a first memory access request comprises a tag corresponding to one of the first set of tags, then changing the first activity variable to another of the pre-determined plurality of values to indicate an increase of a level of activity of the first set, otherwise changing the first activity variable to another of the pre-determined plurality of values to indicate a decrease of the level of activity of the first set; and
in response to a failure to identify any tag of the cache of cache tags which matches a tag of a second memory access request, selecting a set of tags to evict from the cache of cache tags, including searching the replacement table for an activity variable which is equal to the initial value.

23.
The method of claim 22, wherein in response to any determination that a tag of the first set is to be stored to the cache of cache tags, all tags of the first set are stored to a first portion of the cache of cache tags.

24. The method of any of claims 22 and 23, wherein the cache of cache tags and a processor to provide the first memory access request are located on a first die.

25. The method of claim 24, wherein the tag storage is located on a second die coupled to the first die.
METHOD, APPARATUS AND SYSTEM TO CACHE SETS OF TAGS OF AN OFF-DIE CACHE MEMORY

BACKGROUND

1. Technical Field

The invention relates generally to cache tag storage. More specifically, certain embodiments relate to techniques for caching sets of tags of a tag storage.

2. Background Art

Processors of all kinds have become increasingly dependent on caches due to the relatively slow speed of memory in relation to the speed of a processor core. Numerous cache architectures have been utilized for decades. One common cache architecture is a set associative cache. Cache architectures have memory storage that stores data from system memory locations as well as a tag storage structure that stores sets of tags.

In a standard cache hierarchy architecture, the closer to the processor core(s) a cache is located, generally, the smaller and faster the cache becomes. The smallest and fastest cache(s) generally reside on the processor core silicon die. On the other hand, the largest cache (LLC, or last level cache) or caches sometimes reside off-die from the processor core(s). Accessing data that resides in an off-die cache as opposed to an on-die cache generally incurs additional latency, since it takes longer for the data to be transmitted to the processor core(s).

Each cache has a tag storage structure. If the processor needs data from a certain memory location, it can determine whether the data is stored in a given cache by comparing the memory location address against the tag storage structure for the cache. If the tag storage structure is off-die, the latency of a tag lookup will be greater than if the tag storage structure is on-die.
Thus, although on-die tag storage structures increase the cost of the processor die because they take up valuable space, they help speed up execution by reducing the latency of tag lookups relative to off-die caches.

BRIEF DESCRIPTION OF THE DRAWINGS

The various embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:

FIG. 1 is a functional block diagram illustrating elements of a system to cache tag information according to an embodiment.

FIG. 2 illustrates features of a tag storage and a cache of cache tags to provide access to cached data according to an embodiment.

FIG. 3A is a flow diagram illustrating elements of a method to access a cache of cache tags according to an embodiment.

FIG. 3B is a block diagram illustrating elements of tag information to use in providing access to cached data according to an embodiment.

FIG. 3C is a block diagram illustrating elements of tag information to use in providing access to cached data according to an embodiment.

FIG. 4A is a flow diagram illustrating elements of a method for maintaining a cache of cache tags according to an embodiment.

FIG. 4B illustrates elements of a replacement table and a state diagram to use in maintaining a cache of cache tags according to an embodiment.

FIG. 4C is a block diagram illustrating elements of tag information to use in providing access to cached data according to an embodiment.

FIG. 5 is a block diagram illustrating elements of a computer system to provide access to cached data according to an embodiment.

FIG. 6 is a block diagram illustrating elements of a mobile device to provide access to cached data according to an embodiment.

DETAILED DESCRIPTION

Embodiments of an apparatus, system, and method to implement a cache of cache tags are described. In the following description, numerous specific details are set forth.
However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known elements, specifications, and protocols have not been discussed in detail in order to avoid obscuring certain embodiments.

FIG. 1 describes one embodiment of a system 100 to implement a cache of cache tags. One or more processor cores 104 may reside on a microprocessor silicon die, e.g. Die 1 102, in many embodiments. In other multiprocessor embodiments, there may be multiple processor dies coupled together, each including one or more cores per die (the architecture for processor cores on multiple dies is not shown in FIG. 1). Returning to FIG. 1, the processor core(s) may be coupled to an interconnect 105. In different embodiments, the processor core(s) 104 may be any type of central processing unit (CPU) designed for use in any form of personal computer, handheld device, server, workstation, or other computing device available today. A single interconnect 105 is shown for ease of explanation so as to not obscure the invention. In practice, this single interconnect 105 may comprise multiple interconnects coupling different individual devices together. Additionally, in many embodiments, more devices than are shown (e.g. a chipset) may be coupled to interconnect 105.

The processor core(s) 104 may be coupled - e.g. through interconnect 105 - to one or more on-die caches 106 physically located on the same die as the processor core(s) 104. In many embodiments, a cache has a tag storage 114 associated with it that stores tags for all cache memory locations. In many embodiments, tag storage 114 resides on a separate silicon die, e.g. Die 2 112, from the processor core(s) 104. In many embodiments, tag storage 114 is coupled to one or more off-die (non-processor die) cache(s) 116 - e.g.
through interconnect 105 - and is located on the same die as off-die cache(s) 116.

A cache of cache tags (CoCT) 108 may store a subset of the off-die cache tags on processor die 102. Specifically, while tag storage 114 stores all index values and associated tag sets per index value, CoCT 108, on the other hand, may not store all possible index values. Rather, to save on storage space, CoCT 108 may store merely a subset of the tags stored in tag storage 114. In some embodiments, not all index locations are represented at any given time in CoCT 108.

In some embodiments, a controller 110 controlling the access to CoCT 108 determines when a memory request matches a tag that is currently located within CoCT 108 and reports this back to the processor. In different embodiments, the memory request may originate from one of a number of devices in the system, such as one of the processor cores or a bus master I/O device, among other possible memory request originators. Memory access requests may each include a respective address to a specific location within system memory 122. Tag storage 114 may include all tag sets associated with specific locations in the off-die cache(s) 116.

Thus, when a memory request is received by the controller 110, the controller 110 may parse out an index field (e.g. including a pointer to or identifier of a set) and a tag field in the memory request address and may then check to see if the index of the tag associated with the specific memory location is stored within the cache of cache tags 108. If the original index is stored, then the controller 110 may check if the original tag associated with the memory location is stored within CoCT 108 in one of the ways at the original index location. If the original tag is located in an entry of CoCT 108 associated with the original index location, then the result is that the memory request is a cache of cache tags 108 tag hit (i.e. cache hit).
If the original tag is not stored at any such entry of CoCT 108, then the result is that the memory request is a cache of cache tags 108 tag miss. This is also a particular type of cache miss, referred to herein as a set miss, if the tags from all ways of a given set are cached in CoCT 108. On the other hand, if the controller 110 does not find the original index stored in CoCT 108 on initial lookup, the result is that the memory request is a cache of cache tags 108 index miss. In this case, the controller 110 must fetch and then insert the original index value from the memory request into CoCT 108 by replacing an index currently stored in CoCT 108. In some embodiments, where CoCT 108 is itself an associative cache, a replacement policy may be a least recently used policy, where the least recently used index value is replaced. In other embodiments, other standard replacement policy schemes may be utilized to replace the index value in CoCT 108.

Once the new index value has been inserted into CoCT 108, then the controller 110 may determine if the specific tag associated with the memory request is currently stored in tag storage 114 at the index location. If so, then the result is a tag hit in tag storage 114 and the controller 110 may input tag information into CoCT 108 at the new index position for all ways stored in tag storage 114 at the index position.

Otherwise, the result is a tag miss in tag storage 114 and the controller 110 needs to initiate the replacement of the least recently used tag (in one of the ways at the index location in tag storage 114) with the tag associated with the memory request. This replacement inputs the data located at the address of the memory request from system memory into the cache memory and inputs the original tag from the memory request into tag storage 114.
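The hit, set miss, and index miss outcomes described above can be illustrated with a brief software sketch. This is a simplified model for exposition only, not the claimed hardware; the function name and the dictionary-based layout (each cached index mapped to the set of tags cached for all of its ways) are illustrative assumptions.

```python
# Simplified sketch of the cache-of-cache-tags (CoCT) lookup flow described
# above. The dict-based layout and result strings are illustrative assumptions.

def coct_lookup(coct, index, tag):
    """Classify a memory request against the cache of cache tags.

    coct maps a cached original-index value to the collection of tags
    (all ways of that set) currently cached for it. Returns one of
    "hit", "set miss", or "index miss".
    """
    if index not in coct:
        # The original index is not cached at all: index miss. The
        # controller would fetch the set from the full tag storage and
        # insert it, evicting a currently cached index if necessary.
        return "index miss"
    if tag in coct[index]:
        # The tag is found in one of the cached ways: a CoCT tag hit.
        return "hit"
    # All ways of the set are cached but none match the tag: a set miss.
    return "set miss"
```

For example, with `coct = {0x1A: {0x3, 0x7}}`, a request for index `0x1A` and tag `0x3` is a hit, while the same index with tag `0x5` is a set miss.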
Once the tag is input into the tag storage 114 from system memory 122, then, in some embodiments, the controller 110 may initiate the replacement of all ways in CoCT 108 (at the index value) with the tags from each way at the index value that are currently stored in tag storage 114.

In some embodiments, the cache memory is a sectored cache. In sectored cache embodiments, the overall tag storage requirements in tag storage 114 are lessened because each tag is shared by multiple cache entries (e.g. cache sub-blocks). In these sectored cache embodiments, the storage requirements for state information are increased because, for each tag, there must be state information for each potential entry associated with the tag (state information is discussed in the background section as well as in the discussion related to FIG. 2). For example, if a tag is 14 bits, in a non-sectored cache, 2 bits of state information would be included per sector. In a sectored cache having 8 sectors per way, there are 8 cache entries associated with each tag; thus, there would need to be 16 bits (2 bits · 8) of state information included per tag. In this example, the state information takes up more space than the tag information.

In one illustrative scenario according to one embodiment, the storage requirements of a set are 8.5 Bytes, which includes tag information, state information, and eviction/cache replacement policy (RP) information. In some embodiments, the cache of cache tags utilizes a replacement policy such as a least recently used (LRU) or other policy. Specifically, the following information would be stored in a cache of cache tags set:

(14-bit tag + 2-bit state) · 4 ways + 4-bit RP information = 8.5 Bytes

To store 2K (2^11) sets in CoCT 108 in such a scenario, the storage requirement would then be 17K (2K · 8.5 B). The specifics of the entries in the cache of cache tags are discussed in reference to FIG. 2 below.
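The per-set storage arithmetic in the illustrative scenario above can be checked directly. The constant names below are illustrative; the field widths are those given in the text.

```python
# Verify the per-set storage arithmetic from the illustrative scenario above.
TAG_BITS = 14      # tag width per way
STATE_BITS = 2     # state bits per way
WAYS = 4           # 4-way set associative
RP_BITS = 4        # replacement-policy (RP) bits per set

# (14-bit tag + 2-bit state) * 4 ways + 4-bit RP = 68 bits = 8.5 Bytes
bits_per_set = (TAG_BITS + STATE_BITS) * WAYS + RP_BITS
bytes_per_set = bits_per_set / 8

# Storing 2K (2^11) such sets requires 17K Bytes.
NUM_SETS = 2 ** 11
total_bytes = NUM_SETS * bytes_per_set
```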
Thus, an embodiment of a cache of cache tags can reside on the processor die to perform lookups of the most recently used tags, and the burden to the die is 17K. A 17K storage size cost on-die is a much smaller storage burden than the 8.5M size of a full tag storage structure.

Although certain embodiments are not limited in this regard, different portions of CoCT 108 may be dedicated - at least with respect to the caching of tags - to different respective sets of tags stored in tag storage 114. By way of illustration and not limitation, CoCT 108 may include respective portions 118, 120, and tag storage 114 may include one or more sets of tags 130 and one or more sets of tags 132. In such an embodiment, configuration state of controller 110 may define or otherwise indicate that, of portions 118, 120, any tag caching for the one or more sets 130 by CoCT 108 is to be performed only with portion 118 (for example). Similarly, controller 110 may implement caching wherein, of portions 118, 120, any tag caching for the one or more sets 132 by CoCT 108 may be performed only with portion 120.

Alternatively or in addition, CoCT 108 may itself be an N-way set associative cache. In such embodiments, tags of tag storage 114 may be cached to CoCT 108 on a per-set basis - e.g. wherein, for some or all sets of tag storage 114, any caching of a tag of a given set to CoCT 108 is part of a caching of all tags of that set to CoCT 108. Similarly, for some or all sets of tag storage 114, any eviction of a tag of a given set from CoCT 108 may be part of an eviction of all tags of that set from CoCT 108.

FIG.
2 illustrates features of a tag address structure, a cache of cache tags set structure, and an individual tag address entry of the cache of cache tags in an N-way set associative configuration according to one embodiment.

In an illustrative embodiment, a memory access request to a 40-bit (for example) address space may include the following pieces of information in a 40-bit address field: an original tag field, an original index field, and an offset field. Typically, only the original tag field is stored within a tag entry 200 stored in the tag storage structure. Using the 40-bit addressing example with a 64 Byte cache line size in a direct-mapped (1-way associative) cache of 256M, an example of the size of each field in the address might include a 12-bit original tag, a 22-bit index, and a 6-bit offset. The 22-bit index field may be a pointer to a specific indexed location in the tag storage structure. The 12-bit original tag may be the highest 12 bits of the actual memory address. The size of the tag may also be determined by the cache's associativity and cache line size. For example, a 256 MB 4-way set associative cache with 64 Byte cache lines may have a 20-bit index field and 4M tags (2^20 · 4), where each tag is 14 bits in size.

FIG. 2 also illustrates an embodiment of a tag set 202. The tag set 202 for a 4-way set associative cache stores four tags. Each way (Way 0 - Way 3) may store a specific tag as well as a specific amount of state information related to the cache entry associated with each tag. State information may be specific per tag; thus, there may need to be state information bits associated with each tag.
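The field-width arithmetic in the addressing examples above can be checked with a short sketch. The helper name is an illustrative assumption; the formula (offset from line size, index from number of sets, tag as the remainder of the address) follows the examples in the text.

```python
import math

def address_fields(addr_bits, cache_bytes, line_bytes, ways):
    """Return (tag_bits, index_bits, offset_bits) for a set associative
    cache, per the addressing examples above.

    Illustrative sketch: assumes power-of-two geometry throughout.
    """
    # The offset selects a byte within one cache line.
    offset_bits = int(math.log2(line_bytes))
    # The index selects one set; each set holds `ways` lines.
    sets = cache_bytes // (line_bytes * ways)
    index_bits = int(math.log2(sets))
    # The tag is whatever remains of the address.
    tag_bits = addr_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits
```

A 40-bit address with a 256M direct-mapped cache and 64 Byte lines yields a 12-bit tag, 22-bit index, and 6-bit offset, matching the first example; the 4-way configuration yields a 14-bit tag and 20-bit index, matching the second.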
Additionally, the tag set also may need to include the cache replacement policy information, such as LRU bits or other LRU-type information, to inform the controller which of the four tags is due for eviction when a new tag may need to be stored. Although certain embodiments are not limited in this regard, error correction code (ECC) bits may also be utilized per set to minimize the storage errors of the tag set.

FIG. 2 also describes an embodiment of a tag set entry stored within a cache of cache tags (CoCT Tag Set Entry 204). Set associative caches are generally popular for many types of cache configurations. Thus, in many embodiments, the cache is a multi-way set associative cache. Therefore, an entry in the cache of cache tags may need to store tag information for all ways of the cache at the particular index location (Contents/Data of Tag Set 206). In these embodiments, the index field (Addressing of Tag Set 208) from the original address (e.g. the 40-bit address configuration as discussed above) may point to the location of a set of tags stored within the cache of cache tags. In some embodiments, the cache of cache tags structure itself is also stored in a set associative manner. Thus, the original index field may be divided up into a cache of cache tags tag field as well as a cache of cache tags index field to allow for fetching a set within the cache of cache tags. For example, using a 20-bit original index field from the 40-bit address, the upper 12 bits of the original index field may be utilized as the tag field in a set associative cache of cache tags. In this example, the lower 8 bits of the original index field may be utilized as the index field in a cache of cache tags.

FIG. 3A illustrates elements of a method 300 for maintaining a cache of cache tags according to an embodiment. Method 300 may be performed to keep CoCT 108 up-to-date, for example, based on memory access requests from a processor such as one or more processor cores 104.
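The splitting of the original index field into a CoCT tag field and a CoCT index field, as in the FIG. 2 example above (upper 12 bits and lower 8 bits of a 20-bit index), can be sketched with simple bit operations. The function name and default are illustrative assumptions.

```python
def split_original_index(index, coct_index_bits=8):
    """Split an original index into (coct_tag, coct_index), per the
    FIG. 2 example above: for a 20-bit original index, the upper 12
    bits serve as the CoCT tag and the lower 8 bits as the CoCT index.

    Illustrative sketch; the 8-bit default mirrors the text's example.
    """
    coct_index = index & ((1 << coct_index_bits) - 1)  # lower bits
    coct_tag = index >> coct_index_bits                # upper bits
    return coct_tag, coct_index
```

For instance, the 20-bit index `0xABCDE` splits into a CoCT tag of `0xABC` and a CoCT index of `0xDE`.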
In an embodiment, method 300 is performed with controller 110.

Method 300 may include, at 310, storing at a tag storage a set including first tags. The first tags may each be associated with a respective data location of a cache memory - e.g. off-die cache(s) 116. By way of illustration and not limitation, FIG. 3B shows an illustrative system 350 according to one embodiment. In system 350, a tag storage 352 includes eight sets each comprising eight respective ways. Each such way may store a respective tag, wherein a tag T of an i'th way of a j'th set of tag storage 352 is indicated herein by the label Tij. Each such tag Tij of tag storage 352 may correspond to a respective location of data stored in a cache memory (not shown). As further illustrated in FIG. 3C, a system 360 according to another embodiment may include a tag storage 365 having, for example, some or all of the features of tag storage 352. Certain embodiments are not limited with respect to the number of sets stored to a tag storage and/or with respect to a total number of ways in a given set. Also, it is understood that a way of a set in a tag storage may include other information in addition to a tag - e.g. as discussed herein with respect to FIG. 2.

Method 300 may further include, at 320, storing at a cache of cache tags a subset of tags stored at the tag storage. For example, a cache of cache tags CoCT 354 of system 350 may include one or more sets - as represented by the illustrative sets S0, S1, S2, S3 - each to store the tags of a respective one of the eight sets of tag storage 352. At a given time, CoCT 354 may store the tags of only some of the eight sets of tag storage 352. In the embodiment of system 360, a CoCT 370 includes one or more sets - as represented by the illustrative sets 772, 774, 776, 778 - each to store the tags of a respective one of the eight sets of tag storage 365.
Similar to CoCT 354, CoCT 370 may, of the eight sets of tag storage 365, store the tags of only some of these sets.

The cache of cache tags may include a first portion and a second portion including different respective cache storage locations - e.g. arranged as respective one or more sets. In such an embodiment, the first portion and the second portion may, at least with respect to storing tags of the tag storage, be dedicated to different respective sets of the tag storage. Dedicating different portions of the CoCT to different respective sets of the tag storage may reduce the total cache storage locations of the CoCT to be searched to identify the occurrence (or absence) of a hit event.

For example, method 300 may further comprise, at 330, updating the subset of tags based on memory access requests from the processor, wherein any storage of tags of the first set to the cache of cache tags includes storage of the tags of the first set to only the first portion. By way of illustration and not limitation, system 350 may be an embodiment wherein controller logic (not shown), such as that to perform method 400, dedicates one or more sets of CoCT 354 - e.g. some or all of the illustrative sets S0, S1, S2, S3 - each to store tags of only a respective one of the eight sets of tag storage 352. Mapping information or other configuration state of such a controller may define or otherwise indicate that tag storage by S0 is only for tags T00, T10, ... T70 of a set 0, that tag storage by S1 is only for tags T02, T12, ... T72 of a set 2, that tag storage by S2 is only for tags T04, T14, ... T74 of a set 4 and/or that tag storage by S3 is only for tags T06, T16, ... T76 of a set 6. Such dedication of S0 (or other set of CoCT 354) to a particular set of tag storage 352 may be permanent for "static" (e.g.
constant) tracking with S0 of that particular set of tag storage 352.

In the other illustrative embodiment of system 360, controller logic (not shown), such as that to perform method 300, provides a set of CoCT 370 - e.g. one of the illustrative sets 772, 774, 776, 778 - to store at different times tags of different sets of tag storage 365. By way of illustration and not limitation, associative mapping or other such functionality of the controller logic may provide that sets 772, 774 are each available to store tags of only a first plurality of sets of tag storage 365 - as represented by the illustrative "even" sets 0, 2, 4, 6 of tag storage 365. Alternatively or in addition, sets 776, 778 may each be available to store tags of only a second plurality of sets of tag storage 365 - e.g. the illustrative "odd" sets 1, 3, 5, 7 of tag storage 365. Different embodiments may provide any of various other associative mappings of CoCT portions (e.g. sets) to different respective sets of a tag storage.

In an illustrative scenario for the embodiment of system 360, set 772 may, at a given point in time illustrated with FIG. 3C, store tags T00, T10, ... T70 and a tag value "00" indicating that set 772 is currently storing tags of the 0th set of the even sets for which set 772 is available. Concurrently, set 774 may store tags T06, T16, ... T76 of a set 6 and a tag value "11" indicating that set 774 is currently storing tags of the 3rd set (which is the 6th set of all eight sets of tag storage 365) of the even sets for which set 774 is available. Furthermore, set 776 may store tags T03, T13, ... T73 and a tag value "01" indicating that tags of the 1st set of the odd sets (the 3rd of all eight sets of tag storage 365) are currently stored by set 776. Concurrently, set 778 may store tags T07, T17,
... T77 and a tag value "11" indicating that tags of the 3rd set of the odd sets (the 7th of all eight sets of tag storage 365) are currently stored by set 778.

In an embodiment, sets 772, 774 each further store respective replacement policy (RP) information which, for example, indicates whether the corresponding set is the most recently used (MRU) or the least recently used (LRU) of sets 772, 774. Such RP information may be used by the controller to determine - e.g. in response to a memory request targeting a tag of an uncached even set of tag storage 365 - which of sets 772, 774 is to be selected for eviction of tags and storage of tags of the uncached even set. Based on such selection, the tag information of the selected set may be updated with an identifier of the newly-cached even set, and the RP information of sets 772, 774 may be updated to reflect that the selected one of sets 772, 774 is now the MRU set.

Similarly, sets 776, 778 may each further store respective RP information which indicates whether the corresponding set is the MRU or the LRU of sets 776, 778. Such RP information may be used to determine - e.g. in response to a memory request targeting a tag of an uncached odd set of tag storage 365 - which of sets 776, 778 is to be selected for eviction of tags and storage of tags of the uncached odd set. Based on such selection, the tag information of the selected set may be updated with an identifier of the newly-cached odd set, and the RP information of sets 776, 778 may be updated to reflect that the selected one of sets 776, 778 is now the MRU set.

In some embodiments, method 300 stores all tags of a set to the cache of cache tags on a per-set basis. For example, controller logic performing method 300 may provide that, in response to any determination that a tag of the first set is to be stored to the cache of cache tags, all tags of the first set are stored to the first portion of the CoCT.
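The even/odd partitioning and the LRU-based selection described above can be illustrated with a brief software sketch. This mirrors the FIG. 3C scenario only loosely: the class name, the two-slot-per-partition layout, and the use of list order as the RP information are all illustrative assumptions, not the claimed hardware.

```python
# Illustrative sketch of the even/odd CoCT partitioning described above:
# one pair of CoCT sets (e.g. 772, 774) holds only even tag-storage sets,
# another pair (e.g. 776, 778) holds only odd tag-storage sets.

class CoctPartition:
    """Two CoCT sets dedicated to one group of tag-storage sets."""

    def __init__(self):
        # Each slot holds the tag-storage set number it currently caches
        # (or None). Slot order doubles as RP info: index 0 is the LRU.
        self.slots = [None, None]

    def cache_set(self, ts_set):
        """Cache the tags of tag-storage set ts_set, evicting the LRU
        slot if needed; the newly cached set becomes the MRU."""
        if ts_set in self.slots:
            self.slots.remove(ts_set)      # already cached: refresh only
        else:
            self.slots.pop(0)              # evict the LRU slot
        self.slots.append(ts_set)          # append as the MRU

even_part = CoctPartition()   # e.g. CoCT sets 772, 774
odd_part = CoctPartition()    # e.g. CoCT sets 776, 778

def cache_tags_of(ts_set):
    # Route the set to the partition dedicated to its parity.
    (even_part if ts_set % 2 == 0 else odd_part).cache_set(ts_set)
```

After caching sets 0, 6, 3, and 7, the even partition holds sets 0 and 6 and the odd partition holds sets 3 and 7; caching set 4 next evicts set 0, the LRU of the even partition.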
In an embodiment, the tag storage may further store a second set including second tags each associated with a respective data location stored within the cache memory. Controller logic performing method 300 may further provide that any storage of tags of the second set to the cache of cache tags by the controller includes storage of the tags of the second set to only the second portion of the CoCT. Alternatively or in addition, such controller logic may provide that, in response to any determination that a tag of a second set is to be stored to the cache of cache tags, all tags of the second set are stored to the second portion of the cache of cache tags.

As illustrated in FIGs. 4A-4C, controller logic according to certain embodiments may maintain a data structure - referred to herein as a replacement table - for use as a reference to determine, for a plurality of sets of cached tags, which set of tags is to be selected for eviction from a cache of cache tags (e.g. to allow for subsequent storing of a replacement set of tags to the cache of cache tags). Such eviction may be performed, for example, in response to a memory access request comprising a tag which cannot be matched with any tag currently stored in the cache of cache tags and where the tag's set is not currently tracked in the CoCT.

FIG. 4A illustrates elements of a method 400 for accessing a cache of cache tags according to an embodiment. Method 400 may be performed by controller logic such as that of controller 110, for example. Although certain embodiments are not limited in this regard, method 400 may be performed as part of, or in addition to, method 300 for maintaining a cache of cache tags. Method 400 may include, at 405, associating a first entry of a replacement table with a first set of tags stored to a cache of cache tags. In an embodiment, the associating at 405 may be performed in response to storage of the first set of tags to the cache of cache tags.
The associating at 405 may include setting an activity variable of the first entry to an initial value of a pre-determined plurality of values.

By way of illustration and not limitation, FIG. 4B illustrates a replacement table RT 450, according to one embodiment, for a cache of cache tags (not shown) which includes N tag cache (TC) sets 0, 1, ..., (N-2), (N-1). Over time, the TC sets may be variously allocated each to store the tags of a respective tag set of a tag storage (not shown). FIG. 4B also shows an illustrative state diagram 454 which may be used to determine, for a given entry of RT 450, which of a predetermined plurality of values to assign to an activity variable of that entry. Although certain embodiments are not limited in this regard, state diagram 454 may include a predetermined plurality of four values 0 through 3. An activity variable of an RT entry may be updated to reflect higher activity in response to successive memory requests which each hit (match) a respective tag of the corresponding cached tag set. Alternatively or in addition, such an activity variable may be updated to reflect lower activity if a memory request does not hit any tag of the cached tag set.

For example, method 400 may determine, at 410, whether (or not) a tag of a first memory access request corresponds to (e.g. matches) a tag of the first set of tags. In response to the evaluation at 410 determining that the tag of the first memory access request corresponds to such a cached tag, method 400 may, at 415, change a variable of the first entry to indicate increased activity of the first set. Method 400 may further service the first request, as indicated at 420 by operations to access a location in cache memory which corresponds to the tag of the first memory access request.
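The four-value activity variable behavior of state diagram 454 may be sketched as a saturating counter. In this sketch, the value 3 is taken as the initial and least-active value and 0 as the most active, which is consistent with the FIG. 4C example (a newly-cached set remains at level 3 and is immediately eligible for eviction); the exact encoding is an assumption.

```python
INITIAL = 3  # assumed initial / least-active value; newly cached sets start here

def on_request(activity, hit):
    """Update one RT entry's activity variable for a memory request.
    Hits move the variable toward higher activity (0); a miss moves it
    toward lower activity (3). Values saturate at both ends."""
    if hit:
        return max(0, activity - 1)   # successive hits -> higher activity
    return min(3, activity + 1)       # miss on this cached set -> lower activity
```

Under this model, a freshly cached set (activity 3) that takes one hit moves to level 2, while repeated misses leave it pinned at level 3, keeping it a candidate for replacement.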
Otherwise, method 400 may, at 425, change the variable of the first entry to indicate decreased activity of the first set.

Subsequent to the first memory access request, a second memory access request may be issued from a host processor or other such requestor logic. The second memory access request may target a memory location which is not currently represented in the cache of cache tags. For example, controller logic may evaluate a tag which is included in the second memory access request to determine whether the tag matches any tag stored or otherwise tracked in the cache of cache tags - e.g. to detect a set miss event. In response to a failure to identify any such matching cached tag, method 400 may, at 435, select a set of tags based on an activity value of a corresponding entry of the replacement table. For example, a pointer 452 may be moved - e.g. as a background process prior to, during or in response to the determining at 410 - to successively check entries of RT 450 to identify an activity variable which indicates a sufficiently low level of activity. By way of illustration and not limitation, the selecting at 435 may include searching the replacement table for an activity variable which is equal to the initial value (e.g. a value to which the activity variable of the first entry was set at 405). As illustrated by state diagram 454, a cached set of tags may be selected for replacement where it is identified as indicating a lowest - or otherwise sufficiently low - activity level (e.g. the value 3) of the predetermined plurality of such levels. Method 400 may then evict the selected set of tags from the cache of cache tags, at 440.

FIG. 4C illustrates a system 460 which, according to an embodiment, is operated according to techniques including some or all of the features of method 400. System 460 includes a tag storage 465 to store sets of tags, as represented by the illustrative eight sets Set0, ..., Set7.
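The pointer-based victim search at 435 may be sketched as follows. This is a hypothetical model: the fallback when no entry sits at the initial value is an assumption (the source only describes searching for a "sufficiently low" level), and the background updating of entries during the scan is not modeled.

```python
def select_victim(rt, pointer, initial=3):
    """Search RT entries starting at `pointer` (modeling pointer 452) for an
    activity variable equal to the initial (least-active) value.
    `rt` is a list of activity values, one per TC set.
    Returns (victim_index, new_pointer_position)."""
    n = len(rt)
    for step in range(n):
        idx = (pointer + step) % n
        if rt[idx] == initial:
            return idx, (idx + 1) % n
    # Fallback (assumption): nothing is at the initial value, so pick
    # the least-active entry found (largest value = lowest activity here).
    idx = max(range(n), key=lambda i: rt[i])
    return idx, (idx + 1) % n
```

Applied to the FIG. 4C state at time T1 (activity levels 0, 3, 1 for CSet0, CSet1, CSet2), the scan would pass over CSet0 and select CSet1 for eviction.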
Although certain embodiments are not limited in this regard, the eight sets of tag storage 465 may each comprise four respective ways which, in turn, each include a respective tag for a corresponding location in a cache memory (not shown).

System 460 may further comprise a cache of cache tags CoCT 480 which, in an illustrative embodiment, includes three sets CSet0, CSet1, CSet2. At a given time, CSet0, CSet1, CSet2 may be variously allocated each to store the tags of a respective one of Set0, ..., Set7. For example, a controller 470 of system 460 may maintain a tag cache (TC) index 474 including entries each corresponding to a different respective one of Set0, ..., Set7. Based on the processing of memory access requests, controller 470 may variously store to entries of TC index 474 values each defining or otherwise indicating which, if any, of CSet0, CSet1, CSet2 currently stores the tags of the corresponding one of Set0, ..., Set7. For example, at a time T1, values 0, 1, 2 are variously stored in corresponding entries of TC index 474 to indicate that the tags of Set0, Set2 and Set5 are stored, respectively, at CSet0, CSet1, CSet2. In such an embodiment, the value 3 in an entry of TC index 474 may indicate that tags of a corresponding one of Set0, ..., Set7 are not currently stored in CoCT 480.

Controller 470 may further maintain a replacement table RT 472 to serve as a reference to select one of CSet0, CSet1, CSet2 for evicting tags of one set and for storing tags of a next set of tag storage 465. RT 472 may include entries each to store a respective variable indicating a level of activity of a corresponding one of CSet0, CSet1, CSet2. Entries of RT 472 may be maintained according to state diagram 454, although certain embodiments are not limited in this regard. For example, at time T1, RT 472 indicates for CSet0, CSet1, CSet2 respective activity levels 0, 3, 1. Accordingly, CSet1 may - at time T1 - qualify to be selected for eviction of the tags of Set2 from CoCT 480.
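The TC index 474 lookup described above may be modeled directly. The following sketch encodes the time-T1 state of the illustrative embodiment (eight tag-storage sets, three CoCT sets, value 3 meaning "not cached"); the function name and list representation are assumptions for illustration.

```python
NOT_CACHED = 3  # sentinel value described for an entry of TC index 474

# TC index at time T1: Set0 -> CSet0, Set2 -> CSet1, Set5 -> CSet2,
# all other tag-storage sets not currently cached in CoCT 480.
tc_index = [0, NOT_CACHED, 1, NOT_CACHED, NOT_CACHED, 2, NOT_CACHED, NOT_CACHED]

def lookup(set_id):
    """Return which CoCT set (0..2) holds the tags of tag-storage set
    `set_id`, or None if that set's tags are not currently cached."""
    cset = tc_index[set_id]
    return None if cset == NOT_CACHED else cset
```

A request tagged for Set2 would thus be directed to CSet1, while a request tagged for Set3 would produce a set miss and trigger the replacement-table search.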
Such tag eviction may be subsequently implemented in response to a memory access request which does not hit any of the tags of Set0, Set2 and Set5 currently cached to CoCT 480. For example, a next memory access request may include one of the tags T03, T13, T23, T33 of Set3. Results of such a memory access request are illustrated in FIG. 4C for a time T2 subsequent to time T1.

For example, at time T2, RT 472 may be updated to indicate a lower level of activity of CSet0 - e.g. where an activity variable of an entry corresponding to CSet0 is changed from 0 to 1. However, an entry of RT 472 corresponding to CSet1 may remain at some lowest level (such as the illustrative level 3) since the newly-cached tags of Set3 are initially qualified for a subsequent eviction. Selection of CSet1 for tag eviction may preclude a checking (and updating) of the entry of RT 472 which corresponds to CSet2. In an embodiment, TC index 474 is updated to reflect that CSet0, CSet1 and CSet2 store tags of Set0, Set3, Set5, respectively.

FIG. 5 is a block diagram of an embodiment of a computing system in which memory accesses may be implemented. System 500 represents a computing device in accordance with any embodiment described herein, and may be a laptop computer, a desktop computer, a server, a gaming or entertainment control system, a scanner, copier, printer, or other electronic device. System 500 may include processor 520, which provides processing, operation management, and execution of instructions for system 500. Processor 520 may include any type of microprocessor, central processing unit (CPU), processing core, or other processing hardware to provide processing for system 500.
Processor 520 controls the overall operation of system 500, and may be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.

Memory subsystem 530 represents the main memory of system 500, and provides temporary storage for code to be executed by processor 520, or data values to be used in executing a routine. Memory subsystem 530 may include one or more memory devices such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM), or other memory devices, or a combination of such devices. Memory subsystem 530 stores and hosts, among other things, operating system (OS) 536 to provide a software platform for execution of instructions in system 500. Additionally, other instructions 538 are stored and executed from memory subsystem 530 to provide the logic and the processing of system 500. OS 536 and instructions 538 are executed by processor 520.

Memory subsystem 530 may include memory device 532 where it stores data, instructions, programs, or other items. In one embodiment, memory subsystem 530 includes memory controller 534, which is a memory controller in accordance with any embodiment described herein, and which provides mechanisms for accessing memory device 532. In one embodiment, memory controller 534 provides commands to access memory device 532.

In some embodiments, system 500 comprises two levels of memory (alternatively referred to herein as '2LM') that include cached subsets of system disk level storage (in addition to run-time data, for example).
This main memory may include a first level (alternatively referred to herein as "near memory") comprising relatively small, fast memory made of, for example, DRAM; and a second level (alternatively referred to herein as "far memory") which comprises relatively larger and slower (with respect to the near memory) volatile memory (e.g., DRAM) or nonvolatile memory storage - e.g., including phase change memory (PCM), a three dimensional cross point memory, a resistive memory, nanowire memory, ferro-electric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) that incorporates memristor technology, spin transfer torque (STT)-MRAM and/or the like. Far memory may be presented as "main memory" to a host operating system (OS) executing with processor 520, where the near memory is a cache for the far memory that, for example, is transparent to the OS. Management of the two-level memory may be done by a combination of controller logic and modules executed via the host central processing unit (CPU).

For example, memory controller 534 may control access by processor 520 to far memory - e.g. some or all of memory 532 may serve as a far memory for processor 520, where memory controller 534 operates as far memory control logic. In such an embodiment, processor 520 may include or couple to near memory controller logic to access a near memory (not shown) - e.g. other than memory 532 - and 2LM controller logic coupled thereto. Such 2LM controller logic may include a CoCT and manager logic to maintain the CoCT according to techniques discussed herein. Near memory may be coupled to processor 520 via high bandwidth, low latency means for efficient processing. Far memory may be coupled to processor 520 via low bandwidth, high latency means (as compared to that of the near memory).

Processor 520 and memory subsystem 530 are coupled to bus/bus system 510.
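The 2LM access path described above - near memory acting as an OS-transparent cache in front of far memory - may be sketched at a very high level as follows. This is a simplified, non-authoritative model: real 2LM control logic operates on cache lines with tag lookups and eviction, none of which is modeled here, and the function name is illustrative.

```python
def read_2lm(addr, near_cache, far_memory):
    """Serve a read: hit in near memory if cached, else fetch from far
    memory and fill near memory (no capacity/eviction modeled)."""
    if addr in near_cache:            # near-memory hit: fast path
        return near_cache[addr]
    value = far_memory[addr]          # far-memory access: slower path
    near_cache[addr] = value          # fill near memory for future hits
    return value
```

A second read of the same address would then be served from the near-memory dict without touching far memory, mirroring the latency asymmetry described for the two levels.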
Bus 510 is an abstraction that represents any one or more separate physical buses, communication lines/interfaces, and/or point-to-point connections, connected by appropriate bridges, adapters, and/or controllers. Therefore, bus 510 may include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (commonly referred to as "Firewire"). The buses of bus 510 may also correspond to interfaces in network interface 550.

System 500 may also include one or more input/output (I/O) interface(s) 540, network interface 550, one or more internal mass storage device(s) 560, and peripheral interface 570 coupled to bus 510. I/O interface 540 may include one or more interface components through which a user interacts with system 500 (e.g., video, audio, and/or alphanumeric interfacing). Network interface 550 provides system 500 the ability to communicate with remote devices (e.g., servers, other computing devices) over one or more networks. Network interface 550 may include an Ethernet adapter, wireless interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces.

Storage 560 may be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage 560 holds code or instructions and data 562 in a persistent state (i.e., the value is retained despite interruption of power to system 500). Storage 560 may be generically considered to be a "memory," although memory 530 is the executing or operating memory to provide instructions to processor 520.
Whereas storage 560 is nonvolatile, memory 530 may include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 500).

Peripheral interface 570 may include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 500. A dependent connection is one where system 500 provides the software and/or hardware platform on which an operation executes, and with which a user interacts.

FIG. 6 is a block diagram of an embodiment of a mobile device in which memory accesses may be implemented. Device 600 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wireless-enabled e-reader, or other mobile device. It will be understood that certain of the components are shown generally, and not all components of such a device are shown in device 600.

Device 600 may include processor 610, which performs the primary processing operations of device 600. Processor 610 may include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing means. The processing operations performed by processor 610 include the execution of an operating platform or operating system on which applications and/or device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, and/or operations related to connecting device 600 to another device. The processing operations may also include operations related to audio I/O and/or display I/O.

In one embodiment, device 600 includes audio subsystem 620, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device.
Audio functions may include speaker and/or headphone output, as well as microphone input. Devices for such functions may be integrated into device 600, or connected to device 600. In one embodiment, a user interacts with device 600 by providing audio commands that are received and processed by processor 610.

Display subsystem 630 represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with the computing device. Display subsystem 630 may include display interface 632, which may include the particular screen or hardware device used to provide a display to a user. In one embodiment, display interface 632 includes logic separate from processor 610 to perform at least some processing related to the display. In one embodiment, display subsystem 630 includes a touchscreen device that provides both output and input to a user.

I/O controller 640 represents hardware devices and software components related to interaction with a user. I/O controller 640 may operate to manage hardware that is part of audio subsystem 620 and/or display subsystem 630. Additionally, I/O controller 640 illustrates a connection point for additional devices that connect to device 600 through which a user might interact with the system. For example, devices that may be attached to device 600 might include microphone devices, speaker or stereo systems, video systems or other display device, keyboard or keypad devices, or other I/O devices for use with specific applications such as card readers or other devices.

As mentioned above, I/O controller 640 may interact with audio subsystem 620 and/or display subsystem 630. For example, input through a microphone or other audio device may provide input or commands for one or more applications or functions of device 600. Additionally, audio output may be provided instead of or in addition to display output.
In another example, if display subsystem 630 includes a touchscreen, the display device also acts as an input device, which may be at least partially managed by I/O controller 640. There may also be additional buttons or switches on device 600 to provide I/O functions managed by I/O controller 640.

In one embodiment, I/O controller 640 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, global positioning system (GPS), or other hardware that may be included in device 600. The input may be part of direct user interaction, as well as providing environmental input to the system to influence its operations (such as filtering for noise, adjusting displays for brightness detection, applying a flash for a camera, or other features).

In one embodiment, device 600 includes power management 650 that manages battery power usage, charging of the battery, and features related to power saving operation. Memory subsystem 660 may include memory device(s) 662 for storing information in device 600. Memory subsystem 660 may include nonvolatile (state does not change if power to the memory device is interrupted) and/or volatile (state is indeterminate if power to the memory device is interrupted) memory devices. Memory 660 may store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of device 600. In one embodiment, memory subsystem 660 includes memory controller 664 (which could also be considered part of the control of device 600, and could potentially be considered part of processor 610) to control memory 662.

Connectivity 670 may include hardware devices (e.g., wireless and/or wired connectors and communication hardware) and software components (e.g., drivers, protocol stacks) to enable device 600 to communicate with external devices.
The external devices could be separate devices, such as other computing devices, wireless access points or base stations, as well as peripherals such as headsets, printers, or other devices.

Connectivity 670 may include multiple different types of connectivity. To generalize, device 600 is illustrated with cellular connectivity 672 and wireless connectivity 674. Cellular connectivity 672 refers generally to cellular network connectivity provided by wireless carriers, such as provided via GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, LTE (long term evolution - also referred to as "4G"), or other cellular service standards. Wireless connectivity 674 refers to wireless connectivity that is not cellular, and may include personal area networks (such as Bluetooth), local area networks (such as WiFi), and/or wide area networks (such as WiMax), or other wireless communication. Wireless communication refers to transfer of data through the use of modulated electromagnetic radiation through a non-solid medium. Wired communication occurs through a solid communication medium.

Peripheral connections 680 include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks) to make peripheral connections. It will be understood that device 600 could both be a peripheral device ("to" 682) to other computing devices, as well as have peripheral devices ("from" 684) connected to it.
Device 600 commonly has a "docking" connector to connect to other computing devices for purposes such as managing (e.g., downloading and/or uploading, changing, synchronizing) content on device 600. Additionally, a docking connector may allow device 600 to connect to certain peripherals that allow device 600 to control content output, for example, to audiovisual or other systems.

In addition to a proprietary docking connector or other proprietary connection hardware, device 600 may make peripheral connections 680 via common or standards-based connectors. Common types may include a Universal Serial Bus (USB) connector (which may include any of a number of different hardware interfaces), DisplayPort including MiniDisplayPort (MDP), High Definition Multimedia Interface (HDMI), Firewire, or other type.

In one implementation, an apparatus comprises a cache memory to couple to a processor, a tag storage, coupled to the cache memory, to store a first set including first tags each associated with a respective data location of the cache memory, and a cache of cache tags to store a subset of tags stored at the tag storage, the cache of cache tags including a first portion and a second portion. The apparatus further comprises a controller coupled to the cache of cache tags, the controller to update the subset of tags based on memory access requests from the processor, wherein in response to any determination that a tag of the first set is to be stored to the cache of cache tags, the controller to store all tags of the first set to the first portion, wherein any storage of tags of the first set to the cache of cache tags by the controller includes storage of the tags of the first set to only the first portion.

In an embodiment, the tag storage is further to store a second set including second tags each associated with a respective data location stored within the cache memory.
In another embodiment, in response to any determination that a tag of the second set is to be stored to the cache of cache tags, the controller is to store all tags of the second set to the second portion, wherein any storage of tags of the second set to the cache of cache tags by the controller includes storage of the tags of the second set to only the second portion. In another embodiment, of all sets of tags of the tag storage, the first portion is to store only tags of the first set.

In another embodiment, the tag storage comprises a first plurality of sets including the first set, and a second plurality of sets including the second set, wherein, of the first plurality of sets and the second plurality of sets, the first portion is dedicated to only the first plurality of sets and the second portion is dedicated to only the second plurality of sets. In another embodiment, the first plurality of sets correspond to odd sets of the cache memory and the second plurality of sets correspond to even sets of the cache memory. In another embodiment, the cache of cache tags and the processor are located on a first die.
In another embodiment, the tag storage structure is located on a second die coupled to the first die.

In another implementation, a method comprises storing at a tag storage a first set including first tags each associated with a respective data location of a cache memory coupled to a processor, storing at a cache of cache tags a subset of tags stored at the tag storage, the cache of cache tags including a first portion and a second portion, and updating the subset of tags based on memory access requests from the processor, wherein in response to any determination that a tag of the first set is to be stored to the cache of cache tags, all tags of the first set are stored to the first portion, wherein any storage of tags of the first set to the cache of cache tags includes storage of the tags of the first set to only the first portion.

In an embodiment, the method further comprises storing at the tag storage a second set including second tags each associated with a respective data location stored within the cache memory, and in response to any determination that a tag of the second set is to be stored to the cache of cache tags, storing all tags of the second set to the second portion. In another embodiment, any storage of tags of the second set to the cache of cache tags by the controller includes storage of the tags of the second set to only the second portion. In another embodiment, of all sets of tags of the tag storage, the first portion is to store only tags of the first set.

In another embodiment, the tag storage comprises a first plurality of sets including the first set, and a second plurality of sets including the second set, wherein, of the first plurality of sets and the second plurality of sets, the first portion is dedicated to only the first plurality of sets and the second portion is dedicated to only the second plurality of sets.
In another embodiment, the first plurality of sets correspond to odd sets of the cache memory and the second plurality of sets correspond to even sets of the cache memory. In another embodiment, the cache of cache tags and the processor are located on a first die. In another embodiment, the tag storage structure is located on a second die coupled to the first die.

In another implementation, an apparatus comprises a cache of cache tags to store a subset of tags stored at a tag storage, the subset of tags each associated with a respective data location of a cache memory, and a controller including circuitry to associate a first entry of a replacement table with a first set of tags stored to the cache of cache tags, including the controller to set a first variable of the first entry to an initial value of a pre-determined plurality of values. If a first memory access request comprises a tag corresponding to one of the first set of tags, then the controller is further to change the first variable to another of the pre-determined plurality of values to indicate an increase of a level of activity, otherwise, the controller to change the first variable to another of the pre-determined plurality of values to indicate a decrease of the level of activity. In response to a failure to identify any tag of the cache of cache tags which matches a tag of a second memory access request, the controller is further to select a set of tags to evict from the cache of cache tags, including the controller to search the replacement table for a variable which is equal to the initial value.

In an embodiment, in response to any determination that a tag of the first set is to be stored to the cache of cache tags, the controller is to store all tags of the first set to the first portion. In another embodiment, the cache of cache tags and a processor to provide the first memory access request are located on a first die.
In another embodiment, the tag storage structure is located on a second die coupled to the first die. In another embodiment, the cache of cache tags contains one or more of the most recently used tags stored in the tag storage structure.

In another implementation, a method comprises, in response to storage of a first set of tags to a cache of cache tags, associating a first entry of a replacement table with the first set of tags, including setting a first activity variable of the first entry to an initial value of a pre-determined plurality of values, wherein a tag storage stores tags which are each associated with a respective data location of a cache memory, and wherein the cache of cache tags stores a subset of tags stored at the tag storage. The method further comprises, if a first memory access request comprises a tag corresponding to one of the first set of tags, then changing the first activity variable to another of the pre-determined plurality of values to indicate an increase of a level of activity of the first set, otherwise changing the first activity variable to another of the pre-determined plurality of values to indicate a decrease of the level of activity of the first set. The method further comprises, in response to a failure to identify any tag of the cache of cache tags which matches a tag of a second memory access request, selecting a set of tags to evict from the cache of cache tags, including searching the replacement table for an activity variable which is equal to the initial value.

In an embodiment, in response to any determination that a tag of the first set is to be stored to the cache of cache tags, the controller is to store all tags of the first set to the first portion. In another embodiment, the cache of cache tags and a processor to provide the first memory access request are located on a first die. In another embodiment, the tag storage structure is located on a second die coupled to the first die.
In another embodiment, the cache of cache tags contains one or more of the most recently used tags stored in the tag storage structure.

Techniques and architectures for providing access to cached data are described herein. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of certain embodiments. It will be apparent, however, to one skilled in the art that certain embodiments can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the computing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the discussion herein, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

Certain embodiments also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs) such as dynamic RAM (DRAM), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus.
Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description herein. In addition, certain embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of such embodiments as described herein.

Besides what is described herein, various modifications may be made to the disclosed embodiments and implementations thereof without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the invention should be measured solely by reference to the claims that follow.
PROBLEM TO BE SOLVED: To enable a processor to execute instructions for logical comparisons, and branches based on them, on packed and unpacked data.

SOLUTION: In a computer system 100, a processor is connected to a memory. The memory stores first data and second data. The processor performs logical comparisons on the first and second data. The logical comparisons may be performed on every bit of the first and second data, or only on certain bits. At least the first data includes packed data elements, and the logical comparisons are performed on the most significant bits of the packed data elements. The logical comparisons include comparisons of corresponding bits of the first and second data, as well as comparisons of bits of the first data with the complements of the corresponding bits of the second data. Based on these comparisons, branch support actions are taken.

SELECTED DRAWING: Figure 1a
1. A processor comprising: a decoder for decoding instructions of a first sequence, wherein the instructions of the first sequence include a fused compare-jump instruction for performing both a comparison operation and a jump operation conditioned on a result of the comparison operation; a first source register configured to store a first source value; a second source register configured to store a second source value; and an execution circuit configured to execute the comparison operation and the jump operation, wherein the execution circuit is to compare the first source value with the second source value and, in response to a first result of the comparison, to jump to a target address of a second sequence of instructions.
2. The processor of claim 1, wherein the execution circuit is configured to continue executing instructions of the first sequence in response to a second result of the comparison.
3. The processor of claim 1, wherein the fused compare-jump instruction comprises an indication of the target address.
4. The processor of claim 2, wherein the first result includes an indication that the first source value and the second source value are equal, and the second result includes an indication that the first source value and the second source value are not equal.
5. The processor of claim 1, wherein the first source value and the second source value are 32-bit values.
6. The processor of claim 1, wherein one or more instructions in the first sequence or the second sequence comprise SIMD instructions, and the execution circuit comprises a vector execution circuit arranged to execute the SIMD instructions and a vector register file comprising a set of 512-bit vector registers used to store operands of the SIMD instructions.
7. The processor of claim 6, further comprising a set of vector mask registers configured to store mask values generated by instructions in the first sequence or the second sequence.
8. The processor of claim 6, wherein the execution circuit further includes a scalar execution circuit configured to execute one or more scalar instructions of the first sequence or the second sequence, the scalar execution circuit including a set of scalar registers that store scalar operands.
9. The processor of claim 1, further comprising a plurality of status registers configured to hold data associated with an execution state of the processor.
10. A system comprising: a system memory configured to store instructions and data; and a processor coupled to the system memory, the processor comprising: a decoder for decoding instructions of a first sequence, the instructions of the first sequence including a fused compare-jump instruction for performing both a comparison operation and a jump operation conditioned on a result of the comparison operation; a first source register configured to store a first source value; a second source register configured to store a second source value; and an execution circuit configured to execute the comparison operation and the jump operation, wherein the execution circuit is to compare the first source value with the second source value and, in response to a first result of the comparison, to jump to a target address of a second sequence of instructions.
11. The system of claim 10, further comprising a storage device coupled to the processor and configured to store instructions and data.
12. The system of claim 10, further comprising an input/output (I/O) interconnect configured to connect the processor to one or more input/output (I/O) devices.
13. The system of claim 10, wherein the system memory comprises a dynamic random access memory (DRAM).
14. The system of claim 10, further comprising a graphics processor coupled to the processor and configured to perform graphics processing operations.
15. The system of claim 10, further comprising a network processor coupled to the processor.
16. The system of claim 10, further comprising an audio input/output device connected to the processor.
17. The system of claim 10, wherein the execution circuit is configured to continue executing instructions of the first sequence in response to a second result of the comparison.
18. The system of claim 10, wherein the fused compare-jump instruction includes an indication of the target address.
Apparatus for performing logical comparison operations

This disclosure relates generally to the field of processors. In particular, the disclosure relates to using a single control signal to perform multiple logical comparison operations on multiple bits of data.

In a typical computer system, a processor manipulates values represented by multiple bits (e.g., 64) using instructions that output a single result. For example, execution of an add instruction sums a first 64-bit value and a second 64-bit value and stores the result as a third 64-bit value. Multimedia applications, however, require manipulation of large amounts of data. Examples include 2D/3D graphics processing, image processing, video compression/decompression, recognition algorithms, audio manipulation, and computer-supported cooperation (CSC: integrated electronic conferencing with multimedia data manipulation). The data may be represented by a single large value (e.g., 64 bits or 128 bits), or alternatively may be represented by fewer bits (e.g., 8, 16, or 32 bits). For example, graphics data may be represented by 8 or 16 bits, audio data by 8 or 16 bits, integer data by 8, 16, or 32 bits, and floating point data by 32 or 64 bits.

To improve the efficiency of multimedia applications (as well as other applications having the same characteristics), the processor may provide a packed data format. The packed data format is a format in which the bits used to represent a single value are divided into several fixed-size data elements, each of which represents a different value. For example, a 128-bit register may be divided into four 32-bit elements, each representing a separate 32-bit value.
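As a concrete illustration of the packed format just described, the sketch below splits a 128-bit value into four 32-bit elements and reassembles it. This is a software model only; a real processor treats the element boundaries as register wiring, not arithmetic, and the function names here are invented for the example.

```python
def unpack(value, elem_bits, total_bits=128):
    # Split a total_bits-wide value into fixed-size data elements,
    # least-significant element first.
    mask = (1 << elem_bits) - 1
    return [(value >> (i * elem_bits)) & mask
            for i in range(total_bits // elem_bits)]

def pack(elems, elem_bits):
    # Reassemble individual elements into a single wide value.
    value = 0
    for i, e in enumerate(elems):
        value |= (e & ((1 << elem_bits) - 1)) << (i * elem_bits)
    return value
```

With `elem_bits=32`, a 128-bit register holds four independent 32-bit values; with `elem_bits=16` it holds eight packed words, and so on.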
In this way, these processors can process multimedia applications more efficiently.

The present invention will be described with reference to the accompanying drawings, but it is not limited to the contents of the drawings.

DETAILED DESCRIPTION OF THE INVENTION

The present disclosure describes embodiments of methods, systems, and circuits that include processor instructions which execute a logical comparison operation on multiple bits of data in response to a single control signal. The data for the logical comparison operation may be packed or unpacked data. In at least one embodiment, the processor is connected to a memory. The memory stores first data and second data. The processor executes a logical comparison operation on data elements of the first data and the second data in response to receipt of the instruction. The logical comparison operation may include a bitwise AND of the data elements of the first and second data, and a bitwise AND of the data elements of the second data with the complement of the data elements of the first data. At least two status flags of the processor are modified based on the result of the logical comparison operation. These two status flags may include a zero flag and a carry flag. These flags may be architecturally visible to application programs, and may be part of a larger set of flag values (e.g., an architecturally visible extended flags (EFLAGS) register).

These and other embodiments of the present invention will be understood from the following description. In the following description, various modifications and changes may of course be made without departing from the broader spirit and scope of the present invention. The specification and drawings are accordingly to be regarded in an illustrative sense rather than a restrictive sense, and the invention is defined only by the claims.
[Definitions]

As a basis for understanding the description of embodiments of the invention, the following terms are defined.

Bits X through Y: defines a subfield of a binary number. For example, bit 6 through bit 0 of the byte 00111010 (base 2) represents the subfield 111010 (base 2). The "2" following a binary number indicates that the number is in base 2. Thus, 1000 in base 2 equals 8 in base 10, and F in base 16 equals 15 in base 10.

Rx: represents a register. A register may be any device capable of storing and providing data. Further functionality of registers is described below. A register is not necessarily included on the same die or in the same package as the processor.

SRC and DEST: represent storage areas (e.g., memory addresses, registers, etc.).

Source1-i, Result1-i, and Dest: represent data.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1a, 1b, and 1c are diagrams showing computer systems according to embodiments of the present invention. FIGS. 2a and 2b are diagrams showing a register file of a processor according to embodiments of the present invention. FIG. 3 shows a flowchart of at least one embodiment of a process executed by a processor to manipulate data. FIG. 4 is a diagram showing packed data types according to another embodiment of the present invention. FIG. 5 illustrates packed byte and packed word data representations of registers in at least one embodiment of the present invention. FIG. 6 shows packed doubleword and packed quadword data representations of registers according to at least one embodiment of the present invention. FIGS. 7a through 7d are flowcharts according to embodiments of a method for performing a logical comparison (a set-zero-and-carry-flags operation). FIGS. 8a through 8c show embodiments of a circuit for performing a logical comparison (a set-zero-and-carry-flags operation). FIG. 9 illustrates various embodiments of block diagrams of opcode formats for processor instructions.

SUMMARY

This application describes embodiments of methods, apparatus, and systems that include processor instructions for logical comparison operations on packed or unpacked data. More specifically, the instructions may logically compare the data and set the zero and carry flags based on the comparison. In at least one embodiment, two logical comparison operations are performed using a single instruction, as shown in Tables 1a and 1b below. The comparison operations include a bitwise AND of the destination and the source operand, and a bitwise AND of the destination's complement and the source operand. Table 1a shows a simplified representation of one embodiment of the disclosed logical comparison operation.
Table 1b, on the other hand, shows a bit-level example of an embodiment of the disclosed logical compare instruction; example values are provided for illustration. Although the examples in Tables 1a and 1b illustrate packed data, the source and destination operand data may be in any data representation; it need not be packed data. The source and/or destination operand data may be a single 128-bit entry, in which case it cannot be regarded as "packed" data and is therefore referred to herein as "unpacked" data, meaning that the data need not be subdivided into component representations and may be regarded as a single data value. For simplicity, the data in Table 1a is represented as 32-bit values. Those skilled in the art will appreciate that the concepts illustrated in Tables 1a and 1b apply to data of any length, including shorter data lengths (e.g., 4, 8, and 16 bits) and longer data lengths (e.g., 64 and 128 bits).

In at least one embodiment, the data values of the source and destination operands may represent packed data. In such an embodiment, each of the packed components of the source and destination operands may be of any data type. In Tables 2a and 2b, components A1 through A4 and B1 through B4 each represent the binary representation of a 32-bit single precision floating point number. However, such specific examples are not meant to be limiting; those skilled in the art will recognize that each of the components may represent any data, including integer and floating point data formats, character string formats, or data of any other type.

As in the packed examples shown in Tables 2a and 2b, alternative embodiments may be used in which only certain bits of each packed element are manipulated during the comparison operation. Such alternative embodiments are described below in conjunction with FIGS. 7c, 7d, 8b, and 8c.

Those skilled in the art will recognize that intermediate values are shown as "Int. Result 1" and "Int. Result 2" in Tables 1a and 2a and in rows 3 and 4 of Tables 1b and 2b. These are shown for ease of understanding; the descriptions in Tables 1a through 2b are not meant to imply that intermediate values are stored in the processor, although in at least one embodiment they may be. In at least one other embodiment, such intermediate values are not stored in a storage area but are carried through the circuit.

Tables 1a, 1b, 2a, and 2b show examples of the logical compare, set zero and carry flags (LCSZC) instruction. The LCSZC instruction executes a bitwise AND operation on the 128-bit source and destination operands, executes a bitwise AND operation on the 128-bit source operand and the complement of the value of the destination operand, and sets the zero and carry flags according to the results of the AND operations.

Setting the zero and carry flags supports branch operations based on a logical comparison. In at least one embodiment, the LCSZC instruction may be followed by a branch instruction indicating the desired branching operation to be performed by the processor based on the value of one or both of the flags (see, for example, the pseudocode of Table 4). Those skilled in the art will recognize that setting status flags is not the only hardware mechanism for performing a branch operation using the comparison result; other mechanisms may be implemented to support branching based on the result of the comparison. Although the specific embodiments described below set the zero and carry flags as a result of a logical comparison, setting such flags to support branching is not required in all embodiments. Therefore, the term LCSZC as used herein should not be understood as limiting.
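The two AND operations and the resulting flag settings can be summarized in a short behavioral model. This is a sketch of the semantics described above; representing the 128-bit operands as Python integers and the flags as booleans is purely illustrative.

```python
MASK128 = (1 << 128) - 1  # width of the source and destination operands

def lcszc(dest, src):
    # Logical compare, set zero and carry flags:
    # ZF <- 1 if DEST AND SRC is all zeros,
    # CF <- 1 if (NOT DEST) AND SRC is all zeros.
    zf = (dest & src) == 0
    cf = ((~dest & MASK128) & src) == 0
    return zf, cf
```

A subsequent branch instruction can then act on either flag, for example a jump taken when `lcszc` reports the zero flag set, which corresponds to the comparison-then-branch sequence discussed above in connection with the pseudocode of Table 4.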
Setting the zero and carry flags is not required in all embodiments. In one alternative embodiment, for example, a branch operation may be performed as a direct result of a single modified LCSZC instruction; that is, an instruction that combines the comparison and the branch, like a fused test-and-branch instruction. In at least one embodiment of the fused test-and-branch instruction, no status flag is set as a result of the logical comparison performed.

In other embodiments, the number of bits of the data elements and intermediate results may be varied. In other embodiments, only some bits of each source and destination value may be compared. Other embodiments may also vary the number of data elements used and the number of intermediate results generated. Alternative embodiments include, but are not limited to: an LCSZC instruction for an unsigned source and a signed destination; an LCSZC instruction for a signed source and an unsigned destination; an LCSZC instruction for an unsigned source and an unsigned destination; and an LCSZC instruction for a signed source and a signed destination. In each of these embodiments, the source and destination may each include packed data of 8-bit, 16-bit, 32-bit, or 64-bit components. Alternatively, the source and destination data may be unpacked 128-bit data elements. The source and destination packed formats need not be symmetric: if both are packed, the sizes of the source and destination data elements need not be the same.

[Computer system]

FIG. 1a shows a computer system 100 according to one embodiment of the present invention. Computer system 100 includes an interconnect 101 for communicating information. The interconnect 101 may include a branch bus, one or more point-to-point interconnects, any combination of the two, or other communication hardware and/or software.

FIG. 1a shows a processor 109 for processing information connected to the interconnect 101.
Processor 109 represents a CPU of any type of architecture, including, for example, CISC or RISC type architectures.

Computer system 100 further includes a random access memory (RAM) or other dynamic storage device (main memory 104), connected to the interconnect 101, for storing instructions and information to be executed by the processor 109. Main memory 104 may also be used to store temporary variables or other intermediate information during execution of instructions by the processor 109.

Computer system 100 further includes a read only memory (ROM) 106 and/or other static storage device, connected to the interconnect 101, for storing static information and instructions for the processor 109. A data storage device 107 is connected to the interconnect 101 for storing information and instructions.

In FIG. 1a, the processor 109 further includes an execution unit 130, a register file 150, a cache 160, a decoder 165, and an internal bus 170. Of course, the processor 109 also includes additional circuitry that is not necessary to an understanding of the present invention.

The decoder 165 is for decoding instructions received by the processor 109, and the execution unit 130 is for executing instructions received by the processor 109. In addition to recognizing instructions ordinarily implemented in general-purpose processors, the decoder 165 and the execution unit 130 recognize instructions for performing logical-compare-and-set-zero-and-carry-flags (LCSZC) operations, on both packed and unpacked data.

The execution unit 130 is connected to the register file 150 by an internal bus 170. The internal bus 170 need not be a branch bus; in another embodiment, it may be a point-to-point interconnect or another type of communication path.

The register file 150 is a storage area of the processor 109 that stores information, including data.
One aspect of the present invention for performing LCSZC operations on packed or unpacked data is described herein. According to this aspect of the present invention, the storage area used for storing data is not limited. An example of the register file 150 is described below with reference to FIGS. 2a-2b.

The execution unit 130 is connected to the cache 160 and the decoder 165. The cache 160 is used, for example, to cache data and/or control signals from the main memory 104. The decoder 165 is used to decode instructions received by the processor 109 into control signals and/or microcode entry points. These control signals or microcode entry points may be transferred from the decoder 165 to the execution unit 130.

In response to these control signals and/or microcode entry points, the execution unit 130 performs the appropriate operations. For example, if an LCSZC instruction is received, the decoder 165 causes the execution unit 130 to perform the necessary comparison logic. For at least some embodiments (e.g., embodiments that do not implement a fused "test and branch" operation), the execution unit 130 may set the zero and carry flags (e.g., in a logic comparison circuit 145). In such an embodiment, a branching unit (not shown) of the processor 109 may utilize the flags during execution of a subsequent branch instruction that indicates a target code position.

Alternatively, the execution unit 130 itself may include a branch circuit (not shown) that performs a branch based on the logical comparison. In such an embodiment, the "branch support" provided by the LCSZC instruction is a jump of control to the specified target code position (rather than the setting of status flags). In at least one embodiment, the branch circuit performing the jump or "branch" may be part of the logic comparison circuit 145.

The decoder 165 may be implemented using various different mechanisms, for example a look-up table, a hardware implementation, a PLA, and so on.
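The decoder's translation of an instruction into an executed operation, for instance via a look-up table as mentioned above, might be modeled as follows. The opcode values and the handler structure here are invented for illustration and do not reflect any real encoding.

```python
# Hypothetical opcode numbers; real encodings are defined by the ISA.
OP_ADD, OP_LCSZC = 0x01, 0x17
MASK128 = (1 << 128) - 1

def execute(opcode, dest, src, flags):
    # A series of if/then tests standing in for the decoder's control
    # signals driving the execution unit.
    if opcode == OP_ADD:
        return (dest + src) & MASK128
    if opcode == OP_LCSZC:
        flags["ZF"] = (dest & src) == 0
        flags["CF"] = ((~dest & MASK128) & src) == 0
        return dest  # the destination is unmodified; only the flags change
    raise ValueError("undefined opcode")
```

As the surrounding text notes, nothing requires the tests to be evaluated sequentially; any logic that produces the same mapping from opcode to operation is equivalent.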
The execution of various instructions by the decoder 165 and the execution unit 130 may be represented by a series of if/then statements. It is understood that the execution of instructions does not require sequential processing of these if/then statements; rather, any process that logically performs the if/then processing is considered to be within the scope of the present invention.

FIG. 1a also shows that a data storage device 107 (e.g., a magnetic disk, an optical disk, and/or another machine-readable medium) may be connected to the computer system 100. In addition, the data storage device 107 is shown to include code 195 for execution by the processor 109. The code 195 includes one or more implementations of the LCSZC instruction 142, and may cause the processor 109 to perform bit tests with the LCSZC instruction 142 for various purposes (e.g., motion video compression/decompression, image filtering, audio signal compression, filtering or synthesis, modulation/demodulation, and the like).

The computer system 100 may be connected via the interconnect 101 to a display device 121 for displaying information to a computer user. The display device 121 may comprise a frame buffer, a dedicated graphics rendering device, a liquid crystal display (LCD), and/or a flat panel display.

An input device 122, which may include alphanumeric and other keys, may be connected to the interconnect 101 to communicate information and command selections to the processor 109. Another type of user input device is a cursor control 123 (e.g., a mouse, a track ball, a pen, a touch screen, or cursor direction keys) for communicating direction information and command selections to the processor 109 and for controlling cursor movement on the display device 121. This input device typically has two degrees of freedom along two axes, a first axis (e.g., x) and a second axis (e.g., y), which allows the device to specify positions in a plane.
However, the present invention should not be limited to input devices having only two degrees of freedom.

Another device that may be connected to the interconnect 101 is a hard copy device 124, which is used to print instructions, data, or other information on a medium such as paper, film, or similar types of media. In addition, a recording/playback device 125 (for example, an audio digitizer connected to a microphone for recording information) may be connected to the computer system 100. The device 125 may further include a digital-to-analog (D/A) converter connected to a speaker for playing back the digitized sound.

The computer system 100 may be a terminal in a computer network (e.g., a LAN), in which case the computer system 100 is a computer subsystem of the computer network. The computer system 100 optionally includes a video digitizing device 126 and/or a communication device 190 (e.g., a serial communication chip, a wireless interface, an Ethernet chip, or a modem that provides communication with an external device or network). The video digitizing device 126 may be used to capture video images that can be transmitted over the computer network.

In at least one embodiment, the processor 109 supports an instruction set compatible with existing processors. Examples of such existing processors include the Intel(R) Pentium(R) Processor, the Intel(R) Pentium(R) Pro Processor, the Intel(R) Pentium(R) III Processor, the Intel(R) Itanium(R) Processor, and the Intel(R) Core(TM) Duo Processor, manufactured by Intel Corporation of Santa Clara, California. As a result, the processor 109 may support existing processor operations in addition to the operations of the present invention. The processor 109 may be suitable for manufacture in one or more process technologies and, by being represented in sufficient detail on a machine readable medium, may readily be manufactured.
While the present invention is described below as being incorporated in an x86-based instruction set, other embodiments may incorporate the present invention into other instruction sets. For example, the present invention can be incorporated into a 64-bit processor using an instruction set other than one based on the x86 instruction set.

FIG. 1b illustrates another embodiment of a data processing system 102 that implements the principles of the present invention. One embodiment of the data processing system 102 is an application processor with Intel XScale(TM) technology. It will be understood by those skilled in the art that the embodiments described herein can be used with other processing systems without departing from the scope of the present invention.

The computer system 102 includes a processing core 110 capable of performing LCSZC operations. In one embodiment, the processing core 110 represents a processing unit of any type of architecture, including, but not limited to, CISC, RISC, or VLIW type architectures. The processing core 110 may be suitable for manufacture in one or more process technologies and, by being represented in sufficient detail on a machine readable medium, may readily be manufactured.

The processing core 110 includes an execution unit 130, a set of register files 150, and a decoder 165. The processing core 110 also includes additional circuitry (not shown) that is not necessary to an understanding of the present invention.

The execution unit 130 is used to execute instructions received by the processing core 110. In addition to recognizing typical processor instructions, the execution unit 130 recognizes instructions for performing LCSZC operations on packed and unpacked data formats. The instruction set recognized by the decoder 165 and the execution unit 130 may include one or more instructions for LCSZC operations.
In addition, other packed instructions may be included.

The execution unit 130 is connected to the register file 150 by an internal bus (which, as described above, may be any communication path, including a branch bus, a point-to-point interconnect, and so on). The register file 150 represents a storage area of the processing core 110 for storing information, including data. As described above, the storage area used for storing data is not limited. The execution unit 130 is connected to the decoder 165, which is used to decode instructions received by the processing core 110 into control signals and/or microcode entry points. These control signals and/or microcode entry points may be transferred to the execution unit 130, and in response to them the execution unit 130 performs the appropriate operations. In at least one embodiment, for example, the execution unit 130 may perform a logical comparison as described herein, and may set the status flags described herein and/or branch to a designated code location.

The processing core 110 is connected to a bus 214 for communicating with various other system devices, which may include, but are not limited to, a synchronous dynamic random access memory (SDRAM) control 271, a static random access memory (SRAM) control 272, a burst flash memory interface 273, a PCMCIA/compact flash (CF) card control 274, a liquid crystal display (LCD) control 275, a direct memory access (DMA) controller 276, and an alternative bus master interface 277.

In at least one embodiment, the data processing system 102 also has an I/O bridge 290 for communicating with various input/output devices via an I/O bus 295.
Such input/output devices may include, for example but not limited to, a universal asynchronous receiver/transmitter (UART) 291, a universal serial bus (USB) 292, a Bluetooth(registered trademark) wireless UART 293, and an I/O expansion interface 294. As with the other buses described above, I/O bus 295 may be any communication path, such as a multi-drop bus, a point-to-point interconnect, or the like.

At least one embodiment of data processing system 102 provides for mobile, network and/or wireless communications and a processing core 110 capable of performing LCSZC operations on packed and unpacked data. Processing core 110 may be programmed with various audio, video and communications algorithms, including discrete transforms, filters or convolutions; compression/decompression techniques such as color space transformation, motion-compensation video decoding or motion-estimation video encoding; and modulation/demodulation (MODEM) functions such as pulse-coded modulation (PCM).

FIG. 1c illustrates yet another embodiment of a data processing system 103 capable of performing LCSZC operations on packed and unpacked data. In one alternative embodiment, data processing system 103 may include a chip package 310 comprising a main processor 224 and one or more coprocessors 226. The additional coprocessors 226 are optional and are shown in dashed lines in FIG. 1c. One or more of coprocessors 226 may be, for example, a graphics coprocessor capable of executing SIMD instructions.

In FIG. 1c, data processing system 103 may also include a cache memory 278 and an input/output system 295, both coupled to chip package 310. Input/output system 295 may optionally be coupled to a wireless interface 296. Coprocessor 226 is capable of performing general computational operations and is also capable of performing SIMD operations.
In at least one embodiment, coprocessor 226 is capable of performing LCSZC operations on packed and unpacked data. In at least one embodiment, coprocessor 226 comprises an execution unit 130 and a register file 209. In at least one embodiment, main processor 224 comprises a decoder 165 that recognizes and decodes instructions of an instruction set that includes LCSZC instructions for execution by execution unit 130. In alternative embodiments, coprocessor 226 also comprises at least part of a decoder 166 to decode instructions of an instruction set that includes LCSZC instructions. Data processing system 103 also includes additional circuitry (not shown) that is not necessary for an understanding of the present invention.

In operation, main processor 224 executes a stream of data processing instructions that control data processing operations of a general type, including interactions with cache memory 278 and input/output system 295. Embedded within that stream of data processing instructions are coprocessor instructions. Decoder 165 of main processor 224 recognizes these coprocessor instructions as being of a type that should be executed by an attached coprocessor 226. Accordingly, main processor 224 issues these coprocessor instructions (or control signals representing the coprocessor instructions) on interconnect 236, from which they are received by any attached coprocessor. In the coprocessor embodiment illustrated in FIG. 1c, coprocessor 226 accepts and executes any received coprocessor instructions intended for it. The coprocessor interconnect may be any communication path, such as a multi-drop bus, a point-to-point interconnect, or the like.

Data may be received via wireless interface 296 for processing by the coprocessor instructions. For one example, voice communication may be received in the form of a digital signal.
This digital signal may be processed by the coprocessor instructions to regenerate digital audio samples representative of the voice communication. For another example, compressed audio or video may be received in the form of a digital bit stream, which may be processed by the coprocessor instructions to regenerate digital audio samples and/or motion video frames.

In at least one alternative embodiment, main processor 224 and coprocessor 226 may be integrated into a single processing core comprising an execution unit 130, a register file 209, and a decoder 165 that recognizes and executes instructions of an instruction set that includes LCSZC instructions.

FIG. 2a illustrates the register file of a processor according to one embodiment of the present invention. Register file 150 may be used to store information, including control/status information, integer data, floating point data, and packed data. It will be apparent to those skilled in the art that this list of information and data is illustrative rather than exhaustive.

In the embodiment illustrated in FIG. 2a, register file 150 includes integer registers 201, registers 209, status registers 208, and an instruction pointer register 211. Status registers 208 indicate the status of processor 109 and may include various status registers, such as a zero flag and a carry flag. Instruction pointer register 211 stores the address of the next instruction to be executed. Integer registers 201, registers 209, status registers 208, and instruction pointer register 211 are all coupled to internal bus 170. Additional registers may also be coupled to internal bus 170. Internal bus 170 may be, but need not be, a multi-drop bus; it may be any communication path, including a point-to-point interconnect.

In one embodiment, registers 209 may be used for both packed data and floating point data.
In this embodiment, processor 109 may, at any given time, treat registers 209 either as stack-referenced floating point registers or as non-stack-referenced packed data registers. In this embodiment, a mechanism is included to allow processor 109 to switch between treating registers 209 as stack-referenced floating point registers and treating them as non-stack-referenced packed data registers. In another embodiment, processor 109 may simultaneously use registers 209 as non-stack-referenced floating point and packed data registers. In another embodiment, these same registers may be used for storing integer data.

Of course, alternative embodiments may implement more or fewer registers. For example, an alternative embodiment may include a separate set of floating point registers for storing floating point data. Another embodiment may include a first set of registers, each for storing control/status information, and a second set of registers, each capable of storing integer, floating point, and packed data. The registers of an embodiment should not be limited in meaning to a particular type of circuit; rather, a register of an embodiment need only be capable of storing and providing data and performing the functions described herein.

The various sets of registers (e.g., integer registers 201, registers 209) may be implemented to include different numbers of registers and/or registers of different sizes. For example, in one embodiment, integer registers 201 are implemented to store 32 bits, while registers 209 are implemented to store 80 bits (all 80 bits are used for storing floating point data, while 64 bits are used for packed data). In addition, registers 209 may contain eight registers, R0 212a through R7 212h; R1 212b, R2 212c and R3 212d are examples of individual registers in registers 209. Thirty-two bits of a register in registers 209 may be moved into an integer register in integer registers 201.
Similarly, a value in an integer register may be moved into thirty-two bits of a register in registers 209. In another embodiment, integer registers 201 each contain 64 bits, and 64 bits of data may be moved between integer registers 201 and registers 209. In another alternative embodiment, registers 209 each contain 64 bits, and registers 209 contains sixteen registers. In yet another alternative embodiment, registers 209 contains thirty-two registers.

FIG. 2b illustrates the register file of a processor according to another embodiment of the present invention. Register file 150 may be used to store information, including control/status information, integer data, floating point data, and packed data. In the embodiment illustrated in FIG. 2b, register file 150 includes integer registers 201, registers 209, status registers 208, extension registers 210, and an instruction pointer register 211. Status registers 208, instruction pointer register 211, integer registers 201, and registers 209 are all coupled to internal bus 170; extension registers 210 are also coupled to internal bus 170. Internal bus 170 may be, but need not be, a multi-drop bus; it may be any communication path, including a point-to-point interconnect.

In at least one embodiment, extension registers 210 are used for both packed integer data and packed floating point data. In alternative embodiments, extension registers 210 may be used for scalar data, packed Boolean data, packed integer data, and/or packed floating point data.
Of course, alternative embodiments may, without departing from the broader scope of the invention, implement more or fewer registers, more or fewer registers per set, and more or fewer storage bits per register.

In at least one embodiment, integer registers 201 are implemented to store 32 bits, registers 209 are implemented to store 80 bits (all 80 bits are used for storing floating point data, while 64 bits are used for packed data), and extension registers 210 are implemented to store 128 bits. In addition, extension registers 210 may contain eight registers, XR0 213a through XR7 213h; XR0 213a, XR1 213b and XR2 213c are examples of individual registers in registers 210. In another embodiment, integer registers 201 each contain 64 bits, extension registers 210 each contain 64 bits, and extension registers 210 contains sixteen registers. In one embodiment, two registers of extension registers 210 may be operated as a pair. In yet another alternative embodiment, extension registers 210 contains thirty-two registers.

FIG. 3 illustrates a flow diagram for one embodiment of a process 300 to manipulate data according to one embodiment of the present invention. That is, FIG. 3 illustrates a process that may be performed, for example, by processor 109 (see, e.g., FIG. 1a) to perform an LCSZC operation on packed data, to perform an LCSZC operation on unpacked data, or to perform some other operation. Process 300 and the other processes disclosed herein are performed by processing blocks that may comprise dedicated hardware or software or firmware operation codes executable by general purpose machines or by special purpose machines, or by a combination of both.

In FIG. 3, processing begins at "start" and proceeds to processing block 301. At processing block 301, decoder 165 (see, e.g., FIG. 1a) receives a control signal from either cache 160 (see, e.g., FIG. 1a) or interconnect 101 (see, e.g., FIG. 1a).
The control signal received at block 301 may be, in at least one embodiment, the type of control signal commonly referred to as a software "instruction". Decoder 165 decodes the control signal to determine the operations to be performed. Processing proceeds from processing block 301 to processing block 302.

At processing block 302, decoder 165 accesses locations in register file 150 (FIG. 1a) or in memory (see, e.g., main memory 104 or cache memory 160 of FIG. 1a). Registers in register file 150, or memory locations in the memory, are accessed according to the register addresses specified in the control signal. For example, the control signal for an operation may include SRC1, SRC2 and DEST register addresses. SRC1 is the address of the first source register, and SRC2 is the address of the second source register. In some cases, the SRC2 address is optional, since not all operations require two source addresses; if the SRC2 address is not required for an operation, then only the SRC1 address is used. DEST is the address of the destination register where the result data is stored. In at least one embodiment, SRC1 or SRC2 may also be used as DEST in at least one of the control signals recognized by decoder 165.

The data stored in the corresponding registers is referred to herein as Source1, Source2, and Result, respectively. In one embodiment, each of these data may be 64 bits in length; in alternative embodiments, one or more of them may be of another length (e.g., 128 bits in length).

In another embodiment, any one or all of SRC1, SRC2 and DEST may define a memory location in the addressable memory space of processor 109 (FIG. 1a) or processing core 110 (FIG. 1b). For example, SRC1 may identify a memory location in main memory 104, while SRC2 identifies a first register in integer registers 201 and DEST identifies a second register in registers 209. For simplicity of description, the invention is described herein in relation to accessing register file 150.
However, those skilled in the art will recognize that these described accesses may be made to memory instead.

Processing proceeds from block 302 to processing block 303. At processing block 303, execution unit 130 (see, e.g., FIG. 1a) performs the operation on the accessed data. Processing proceeds from processing block 303 to processing block 304, where the result is stored back into register file 150 or memory according to requirements of the control signal. Processing then ends at "stop".

[Data storage formats]

FIG. 4 illustrates packed data types according to one embodiment of the present invention. Four packed data formats and one unpacked data format are illustrated: packed byte 421, packed half 422, packed single 423, packed double 424, and unpacked double quadword 412.

The packed byte format 421, in at least one embodiment, is 128 bits long and contains sixteen data elements (B0 through B15), each one byte (e.g., 8 bits) long.

The packed half format 422, in at least one embodiment, is 128 bits long and contains eight data elements (Half0 through Half7), each of which may hold 16 bits of information. Each of these 16-bit data elements may be referred to as a "half word", a "short word", or simply a "word".

The packed single format 423, in at least one embodiment, may be 128 bits long and may hold four data elements (Single0 through Single3), each of which may hold 32 bits of information. Each of these 32-bit data elements may be referred to as a "dword" or "doubleword". Each of the data elements (Single0 through Single3) may hold, for example, a 32-bit single precision floating point value, which is why this format is also referred to as the "packed single" format.

The packed double format 424, in at least one embodiment, may be 128 bits long and may hold two data elements (Double0, Double1), each of which may hold 64 bits of information.
Each of these 64-bit data elements may be referred to as a "qword" or "quadword". Each of the data elements (Double0, Double1) may hold, for example, a 64-bit double precision floating point value, which is why this format is also referred to as the "packed double" format.

The unpacked double quadword format 412 may hold up to 128 bits of data, which need not necessarily be packed data. In at least one embodiment, for example, the 128 bits of information in unpacked double quadword format 412 may represent a single scalar datum, such as a character, an integer, a floating point value, or a binary bit mask value. Alternatively, the 128 bits of unpacked double quadword format 412 may represent an aggregation of unrelated bits (e.g., a status register value in which each bit or set of bits represents a different flag), or the like.

In at least one embodiment of the present invention, the data elements of the packed single 423 and packed double 424 formats may be packed floating point data elements as described above. In an alternative embodiment of the present invention, the data elements of the packed single 423 and packed double 424 formats may be packed integer, packed Boolean, or packed floating point data elements. In another alternative embodiment of the present invention, the data elements of the packed byte 421, packed half 422, packed single 423 and packed double 424 formats may be packed integer or packed Boolean data elements. In alternative embodiments of the present invention, not all of the packed byte 421, packed half 422, packed single 423 and packed double 424 data formats need be permitted or supported.

FIGS. 5 and 6 illustrate in-register packed data storage representations according to at least one embodiment of the present invention.

FIG. 5 illustrates unsigned and signed packed byte in-register formats 510 and 511, respectively. The unsigned packed byte in-register representation 510 illustrates the storage of unsigned packed byte data in one of the 128-bit extension registers, for example XR0 213a through XR7 213h (see, e.g., FIG. 2b).
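As an informal illustration (not part of the patent text), the packed formats of FIG. 4 can be modeled by splitting a 128-bit register value into equal-width elements; element widths of 8, 16, 32 and 64 bits correspond to the packed byte 421, packed half 422, packed single 423 and packed double 424 formats, respectively. The function name below is hypothetical.

```python
def unpack(value, element_bits):
    """Split a 128-bit integer into its packed data elements,
    lowest-numbered element (bits [element_bits-1:0]) first."""
    assert 0 <= value < (1 << 128) and 128 % element_bits == 0
    mask = (1 << element_bits) - 1
    return [(value >> (i * element_bits)) & mask
            for i in range(128 // element_bits)]

# The same 128-bit value viewed as the packed half format 422 (eight words):
x = 0x000F000E000D000C000B000A00090008
words = unpack(x, 16)
# words[0] holds bits [15:0] (0x0008); words[7] holds bits [127:112] (0x000F).
```

Viewing the same value with element_bits=8 yields the sixteen elements of the packed byte format 421, while element_bits=32 and 64 yield the element boundaries of the packed single 423 and packed double 424 formats.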
Information for each of the sixteen byte data elements is stored as follows: byte 0 in bits 7 through 0, byte 1 in bits 15 through 8, byte 2 in bits 23 through 16, byte 3 in bits 31 through 24, byte 4 in bits 39 through 32, byte 5 in bits 47 through 40, byte 6 in bits 55 through 48, byte 7 in bits 63 through 56, byte 8 in bits 71 through 64, byte 9 in bits 79 through 72, byte 10 in bits 87 through 80, byte 11 in bits 95 through 88, byte 12 in bits 103 through 96, byte 13 in bits 111 through 104, byte 14 in bits 119 through 112, and byte 15 in bits 127 through 120. Thus, all available bits in the register are used. This storage arrangement increases the storage efficiency of the processor, and, with sixteen data elements accessed, one operation may now be performed on sixteen data elements simultaneously.

The signed packed byte in-register representation 511 illustrates the storage of signed packed bytes. Note that the eighth bit (the MSB) of every byte data element is the sign indicator (s).

FIG. 5 also illustrates unsigned and signed packed word in-register representations 512 and 513, respectively. The unsigned packed word in-register representation 512 illustrates how extension register 210 stores eight word (16-bit) data elements: word 0 in bits 15 through 0 of the register, word 1 in bits 31 through 16, word 2 in bits 47 through 32, word 3 in bits 63 through 48, word 4 in bits 79 through 64, word 5 in bits 95 through 80, word 6 in bits 111 through 96, and word 7 in bits 127 through 112.

The signed packed word in-register representation 513 is similar to the unsigned packed word in-register representation 512; note that the sign bit (s) is stored in the sixteenth bit (the MSB) of each word data element.

FIG. 6 illustrates unsigned and signed packed doubleword in-register formats 514 and 515, respectively.
The unsigned packed doubleword in-register representation 514 illustrates how extension register 210 stores four doubleword (32-bit) data elements: doubleword 0 in bits 31 through 0 of the register, doubleword 1 in bits 63 through 32, doubleword 2 in bits 95 through 64, and doubleword 3 in bits 127 through 96.

The signed packed doubleword in-register representation 515 is similar to the unsigned packed doubleword in-register representation 514; note that the sign bit (s) is the thirty-second bit (the MSB) of each doubleword data element.

FIG. 6 also illustrates unsigned and signed packed quadword in-register formats 516 and 517, respectively. The unsigned packed quadword in-register representation 516 illustrates how extension register 210 stores two quadword (64-bit) data elements: quadword 0 in bits 63 through 0 of the register, and quadword 1 in bits 127 through 64.

The signed packed quadword in-register representation 517 is similar to the unsigned packed quadword in-register representation 516; note that the sign bit (s) is the sixty-fourth bit (the MSB) of each quadword data element.

[Logical comparison with zero- and carry-flag setting operations]

In at least one embodiment, the SRC1 register holds packed data or unpacked double quadword data (Source1), and the DEST register likewise holds packed data or unpacked double quadword data (Dest). In at least one embodiment, either the value Dest in the DEST register or the value Source1 in the SRC1 register may hold a bit mask value in the unpacked double quadword format.

Typically, in the first step of the LCSZC instruction, two comparison operations are performed. A first intermediate result is generated by performing an independent logical comparison (a bitwise AND operation) of each bit of Source1 with the corresponding bit of Dest.
A second intermediate result is generated by performing an independent logical comparison (a bitwise AND operation) of each bit of Source1 with the complement of the corresponding bit of Dest. These intermediate results may be stored in temporary storage locations (e.g., registers), or may not be stored at all by the processor.

FIG. 7a is a flow diagram of a general method 700 for performing an LCSZC operation according to at least one embodiment of the present invention. Process 700 and the other processes disclosed herein are performed by processing blocks that may comprise dedicated hardware or software or firmware operation codes executable by general purpose machines or by special purpose machines, or by a combination of both. The methods of FIGS. 7b through 7d are discussed below with reference to FIG. 7a.

FIG. 7a illustrates that process 700 begins at "start" and proceeds to processing block 701. At processing block 701, decoder 165 decodes a control signal received by processor 109; in particular, decoder 165 decodes the operation code of an LCSZC instruction. Processing then proceeds from processing block 701 to processing block 702.

At processing block 702, via internal bus 170, decoder 165 accesses registers 209 in register file 150, given the SRC1 and DEST addresses encoded in the instruction. In at least one embodiment, the addresses encoded in the instruction indicate extension registers (see, e.g., extension registers 210 of FIG. 2b). In such an embodiment, at block 702 the indicated extension registers 210 are accessed so as to provide execution unit 130 with the data stored in the SRC1 register (Source1) and the data stored in the DEST register (Dest). In at least one embodiment, extension registers 210 communicate the data to execution unit 130 via internal bus 170.

Processing proceeds from processing block 702 to processing block 703. At processing block 703, decoder 165 enables execution unit 130 to perform the instruction.
In at least one embodiment, such enabling at block 703 is accomplished by sending the execution unit one or more control signals indicating the desired operation (LCSZC). Processing proceeds from block 703 to processing blocks 714 and 715. Although blocks 714 and 715 are illustrated as being performed in parallel, such operations need not be performed exactly concurrently; it is sufficient that they be performed during the same cycle or set of cycles. Alternatively, those skilled in the art will recognize that in at least one alternative embodiment the processing of blocks 714 and 715 may be performed serially. Thus, in various embodiments the parallel blocks 714 and 715 may be processed in parallel, in series, or in some partial combination of parallel and serial operations.

At processing block 714, the following is performed: all or some of the bits of Source1 are logically ANDed with the corresponding bits of the Dest value. Similarly, at processing block 715, all or some of the bits of Source1 are logically ANDed with the complements of the corresponding bits of the Dest value.

Processing proceeds from block 714 to block 720, and from block 715 to block 721. At processing block 720, the state of the processor is modified based on the result of the comparison performed at processing block 714. Similarly, at processing block 721, the state of the processor is modified based on the result of the comparison performed at processing block 715. Those skilled in the art will note that the process 700 illustrated in FIG. 7a is non-destructive: the Source1 and Dest operand values are not changed as a result of the LCSZC operation. Instead, the zero flag is modified at block 720 and the carry flag is modified at block 721.

At processing block 720, if all bits of Intermediate Result 1 are equal to zero (e.g., a logic low value), the value of the zero flag is set to a true value (e.g., logic high).
However, at block 720, if at least one bit of Intermediate Result 1 is a logic high value, the value of the zero flag is set to a false value (e.g., logic low).

At processing block 721, if all bits of Intermediate Result 2 are equal to zero (e.g., logic low), the value of the carry flag is set to a true value (e.g., logic high). However, at block 721, if at least one bit of Intermediate Result 2 is a logic high value, the carry flag is set to a false value (e.g., logic low).

In alternative embodiments of process 700, only processing blocks 714 and 720 may be implemented, with processing blocks 715 and 721 not implemented; or only processing blocks 715 and 721 may be implemented, with processing blocks 714 and 720 not implemented. Other embodiments of process 700 may implement additional processing blocks to support additional variations of the LCSZC instruction.

From blocks 720 and 721, processing may optionally proceed to block 722. At block 722, other status bits within the processor may be modified. In at least one embodiment, these status bits may include, for example, one or more other architecturally-visible status flag values. These flags may be one- or two-bit values; examples include the parity (PF), auxiliary carry (AF), sign (SF), trap (TF), interrupt enable/disable (IF), direction (DF), overflow (OF), I/O privilege level (IOPL), nested task (NT), resume (RF), virtual 8086 mode (VM), alignment check (AC), virtual interrupt (VIF), virtual interrupt pending (VIP), and CPU identifier (ID) flags. Of course, this list of specific flags is for illustration only; other embodiments may include fewer, more, or different flags.

Processing ends at "stop" after block 722. In an embodiment that does not include optional block 722, processing ends after the processing of blocks 720 and 721.

FIG. 7b illustrates a flow diagram for at least one specific embodiment 700b of the general process 700 illustrated in FIG. 7a. In the specific embodiment 700b illustrated in FIG. 7b, the LCSZC operation is performed on 128-bit Source1 and Dest data values, which may be either packed or unpacked data. (Of course, those skilled in the art will recognize that the operations illustrated in FIG. 7b may be performed on data values of other lengths, including lengths shorter or longer than 128 bits.)

The processing of blocks 701b through 703b of method 700b performs essentially the same operations as processing blocks 701 through 703, respectively, described above in connection with the general method 700 illustrated in FIG. 7a. When decoder 165 enables execution unit 130 to perform the instruction at block 703b, the LCSZC instruction performs a logical AND comparison of each of the bits of the Source1 and Dest values. Such an instruction may be referred to by the instruction mnemonic used by an application programmer, e.g., "PTEST".

Processing proceeds from processing block 703b to processing blocks 714b and 715b. As with processing blocks 714 and 715 of FIG. 7a, processing blocks 714b and 715b are illustrated in FIG. 7b as being performed in parallel, but the invention is not limited in this regard; in various embodiments, the parallel blocks 714b and 715b may instead be processed in series, or in some partial combination of parallel and serial operations.

At processing block 714b, the following is performed: all of the bits of Source1 are logically ANDed with the corresponding bits of the Dest value.
That is, bits [127:0] of Intermediate Result 1 are assigned the result of the bitwise AND of each bit of Dest[127:0] with the corresponding bit of Source1[127:0].

Similarly, at processing block 715b, all of the bits of Source1 are logically ANDed with the complements of the corresponding bits of the Dest value. That is, bits [127:0] of Intermediate Result 2 hold the result of the bitwise AND of the complement of each bit of Dest[127:0] with the corresponding bit of Source1[127:0].

Processing proceeds from block 714b to block 720b, and from block 715b to block 721b. At processing block 720b, the state of the processor is modified based on the result of the comparison performed at processing block 714b. Similarly, at processing block 721b, the state of the processor is modified based on the result of the comparison performed at processing block 715b. Those skilled in the art will note that the process 700b illustrated in FIG. 7b is non-destructive: the Source1 and Dest operand values are not changed as a result of the LCSZC operation. Instead, the zero flag is modified at block 720b and the carry flag is modified at block 721b.

At processing block 720b, if all bits of Intermediate Result 1 (e.g., bits [127:0] of Intermediate Result 1) are equal to zero (e.g., logic low), the value of the zero flag is set to a true value (e.g., logic high). However, at block 720b, if even one bit of Intermediate Result 1 is a logic high value, the zero flag is set to a false value (e.g., logic low).

At processing block 721b, if all bits of Intermediate Result 2 (e.g., bits [127:0] of Intermediate Result 2) are equal to zero (e.g., logic low), the value of the carry flag is set to a true value (e.g., logic high).
However, at block 721b, if even one bit of Intermediate Result 2 is a logic high value, the carry flag is set to a false value (e.g., logic low).

In alternative embodiments of method 700b, only processing blocks 714b and 720b may be implemented, with processing blocks 715b and 721b not implemented; or only processing blocks 715b and 721b may be implemented, with processing blocks 714b and 720b not implemented. Alternative embodiments of method 700b may also implement additional processing blocks to support additional variations of the LCSZC instruction.

From blocks 720b and 721b, processing may optionally proceed to block 722b. At block 722b, other status bits within the processor may be modified. In the embodiment illustrated in FIG. 7b, the AF (auxiliary carry), OF (overflow), PF (parity) and SF (sign) flags are assigned a logic low value at block 722b.

Processing ends at "stop" after optional block 722b. In an embodiment that does not include optional block 722b, processing ends after the processing of blocks 720b and 721b.

It goes without saying that processing blocks 714, 714b, 715 and 715b of either embodiment may perform their logical comparison operations on signed data elements, unsigned data elements, or a combination of both.

FIG. 7c illustrates a flow diagram for at least one other specific embodiment 700c of the general method 700 illustrated in FIG. 7a. In the specific embodiment 700c illustrated in FIG. 7c, the LCSZC operation is performed on Source1 and Dest data values that are 128 bits in length. The source operand, the destination operand, or both may be packed data; that is, the 128-bit data value may represent four packed 32-bit ("doubleword") data elements. For example, the data elements may each be a 32-bit signed single precision floating point value.

Of course, those skilled in the art will recognize that the operations illustrated in FIG. 7c may be performed on data values of other lengths.
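As an informal sketch (not from the patent text), the full-128-bit method 700b just described — the two bitwise AND comparisons of blocks 714b and 715b and the flag assignments of blocks 720b and 721b — behaves like the following model, in which the returned pair stands in for the zero and carry flags:

```python
MASK128 = (1 << 128) - 1  # all 128 bits set

def lcszc_128(source1, dest):
    """Model of method 700b (e.g., a PTEST-style instruction).
    Non-destructive: the operands are unchanged; only (ZF, CF) result."""
    ir1 = dest & source1                 # block 714b: Intermediate Result 1
    ir2 = (~dest & MASK128) & source1    # block 715b: Intermediate Result 2
    zero_flag = 1 if ir1 == 0 else 0     # block 720b
    carry_flag = 1 if ir2 == 0 else 0    # block 721b
    return zero_flag, carry_flag
```

For example, comparing a nonzero value with itself yields ZF=0 and CF=1, while a Source1 of zero yields ZF=1 and CF=1 regardless of Dest.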
For example, the data values may be longer or shorter than 128 bits, and the data elements may even be bytes (8 bits) and/or words (16 bits).

Processing blocks 701c through 703c of method 700c perform essentially the same operations as processing blocks 701 through 703 described for method 700 illustrated in FIG. 7a. The exception is that in processing block 703c, when the decoder 165 instructs the execution unit 130 to execute the instruction, the instruction is an LCSZC instruction that performs a logical AND comparison of the MSB of each 32-bit doubleword of the Source1 and Dest values. (See the signed packed doubleword register representation 515 illustrated in FIG. 6.) Such an instruction is referred to by the instruction mnemonic used by the programmer, e.g., "TESTPS", where "PS" indicates packed single precision data elements.

Processing proceeds from block 703c to blocks 714c and 715c. Blocks 714c and 715c may be executed in parallel, but need not be.

At processing block 714c, all of the Source1 bits are logically ANDed with the corresponding bits of the Dest value; that is, bits [127:0] of Intermediate Result 1 are assigned the result of a bitwise AND of the corresponding bits of Dest[127:0] and Source1[127:0]. Similarly, at processing block 715c, all of the Source1 bits are logically ANDed with the complements of the corresponding bits of the Dest value; that is, bits [127:0] of Intermediate Result 2 are assigned the result of a bitwise AND of the complement of each bit of Dest[127:0] with the corresponding bit of Source1[127:0].

Processing proceeds from block 714c to block 720c, and from block 715c to block 721c. At block 720c, the MSB of each 32-bit doubleword of the first intermediate value (Intermediate Value 1) is examined.
In block 720c, if bits 127, 95, 63, and 31 of Intermediate Value 1 are all equal to zero, the zero flag is set to a logical High value; otherwise, the zero flag is set to logical Low.

Similarly, at block 721c, the MSB of each 32-bit doubleword of the second intermediate value (Intermediate Value 2) is examined. In block 721c, if bits 127, 95, 63, and 31 of Intermediate Value 2 are all equal to zero, the carry flag is set to a logical High value; otherwise, the carry flag is set to logical Low. The original value (Source1) of the source register (SRC1) and the original value (Dest) of the destination register (DEST) are not modified by the processing of method 700c.

Processing "ends" at blocks 720c and 721c, or proceeds to optional processing block 722c, where other status bits within the processor may be modified. In the embodiment illustrated in FIG. 7c, at block 722c the AF (auxiliary carry), OF (overflow), PF (parity), and SF (sign) flags are set to logical Low. In an embodiment that does not include optional block 722c, processing "ends" after blocks 720c and 721c; in an embodiment including optional block 722c, processing ends after completion of block 722c.

FIG. 7d shows a flowchart for at least one other specific embodiment 700d of the general method 700 illustrated in FIG. 7a. In the specific embodiment 700d illustrated in FIG. 7d, the LCSZC operation is performed on 128-bit Source1 and Dest data values. The source operand, the destination operand, or both may be packed; that is, the 128-bit data value represents two packed 64-bit data elements. The data elements may each represent, for example, a 64-bit signed double precision floating point value.

Of course, those skilled in the art will recognize that the operations illustrated in FIG. 7d may be performed for data values of other lengths.
This includes data values longer or shorter than 128 bits, as well as data elements of other sizes such as bytes (8 bits) and/or words (16 bits).

Processing blocks 701d through 703d of method 700d perform essentially the same operations as processing blocks 701 through 703 described in connection with method 700 illustrated in FIG. 7a. The exception is that in processing block 703d, when the decoder 165 causes the execution unit 130 to execute the instruction, the instruction is an LCSZC instruction that performs a logical AND comparison of the MSB of each 64-bit quadword of the Source1 and Dest values. (See the signed packed quadword register representation 517 shown in FIG. 6.) Such an instruction is referred to by the instruction mnemonic used by the programmer, e.g., "TESTPD", where "PD" indicates packed double precision data elements.

Processing proceeds from block 703d to blocks 714d and 715d. Blocks 714d and 715d may be executed in parallel, but need not be.

At processing block 714d, all of the Source1 bits are logically ANDed with the corresponding bits of the Dest value; that is, bits [127:0] of Intermediate Result 1 are assigned the result of a bitwise AND of the corresponding bits of Dest[127:0] and Source1[127:0]. Similarly, in processing block 715d, all of the Source1 bits are logically ANDed with the complements of the corresponding bits of the Dest value; that is, bits [127:0] of Intermediate Result 2 are assigned the result of a bitwise AND of the complement of each bit of Dest[127:0] with the corresponding bit of Source1[127:0].

Processing proceeds from block 714d to block 720d, and from block 715d to block 721d. At block 720d, the MSB of each 64-bit quadword of the first intermediate value (Intermediate Value 1) is examined.
In block 720d, if bits 127 and 63 of Intermediate Value 1 are both equal to zero, the zero flag is set to a logical High value; otherwise, the zero flag is set to logical Low.

Similarly, at block 721d, the MSB of each 64-bit quadword of the second intermediate value (Intermediate Value 2) is examined. If bits 127 and 63 of Intermediate Value 2 are both equal to zero at block 721d, the carry flag is set to a logical High value; otherwise, the carry flag is set to logical Low. The original value (Source1) of the source register (SRC1) and the original value (Dest) of the destination register (DEST) are not modified by the processing of method 700d.

Processing "ends" at blocks 720d and 721d, or proceeds to optional processing block 722d, where other status bits within the processor may be modified. In the embodiment illustrated in FIG. 7d, the AF (auxiliary carry), OF (overflow), PF (parity), and SF (sign) flags are set to logical Low in block 722d. In an embodiment that does not include optional block 722d, processing "ends" after blocks 720d and 721d; in an embodiment including optional block 722d, processing ends after completion of block 722d.

[Logical comparison, zero and carry flag setting circuit]

In at least some embodiments, the various LCSZC instructions for packed data (TESTPS and TESTPD, above) can be executed in the same number of clock cycles as a comparison operation on unpacked data, even though they operate on multiple data elements. Parallelism may be used to achieve this: the elements of the processor (e.g., registers and execution units) may be instructed to perform the LCSZC operation on the data elements simultaneously. This parallel operation is described in more detail below with reference to FIGS. 8a and 8b, and with continued reference to FIG. 1a.

FIG. 8a shows a circuit 801 for performing the LCSZC operation on packed data in at least one embodiment of the present invention. Circuit 801 may be all or part of the logical comparison circuit 145 illustrated in FIG. 1a in at least one embodiment.

FIG. 8a represents the source operand Source1[127:0] 831 and the destination operand Dest[127:0] 833. In at least one embodiment, the source and destination are N-bit SIMD registers, for example 128-bit Intel(R) SSE2 XMM registers (see, e.g., the extension registers 210 in FIG. 2b).

The specific embodiment illustrated in FIG. 8a shows a double quadword (128-bit) embodiment of the LCSZC instruction, in which each bit of the 128-bit source is compared with the corresponding bit of the destination operand. Because every bit is compared, the operation is functionally indifferent to the nature of the 128 bits of the source and destination operands: either or both may be packed data, unpacked scalar data, signed data, or unsigned data. In this particular embodiment the packed data source 831 and destination 833 are 128 bits long, but it goes without saying that the principles disclosed herein may be extended to other lengths, such as 80 bits or 256 bits.

The operation control means 800 outputs a signal on enable 880 to control the operation performed by circuit 801. One embodiment of the operation control means 800 may comprise, for example, the decoder 165 and the instruction pointer register 211. Of course, the operation control means 800 may include additional circuitry that is not necessary for understanding the present invention. The LCSZC circuit 801 includes two sets of AND gates (825, 827), each set containing one AND gate for each bit of the source operand. Thus, in embodiments where the source and destination are 128 bits long, the first set 825 includes 128 AND gates 819 and the second set 827 includes 128 AND gates 820.
Each of the 128 bit values of the source and destination operands (see, e.g., bit value 854 of FIG. 8a) is an input to one of the AND gates 819 of the first set 825 and also an input to one of the AND gates 820 of the second set 827. It should be noted that the AND gates 820 of the second set 827 receive the destination operand 833 after it has been inverted to form its complement (see inverter logic 844).

The output of each of the AND gates 819 of the first set 825 is input to the NAND gate 854. At least one purpose of the NAND gate 854 is to determine whether all bits resulting from the AND of the source and the destination are zero (logical Low). If so, the zero flag 858 is set to logical High.

The output of each of the AND gates 820 of the second set 827 is input to a NAND gate 856. At least one purpose of the NAND gate 856 is to determine whether all bits resulting from the AND of the source 831 bits and the complements of the destination 833 bits are zero (logical Low). If so, the carry flag 860 is set to logical High.

Other examples of a double quadword LCSZC instruction may include operation on unsigned double quadword values of the source and destination, and operation on signed double quadword values of the source and destination. It should be noted that the present invention is not limited to the above. Alternative embodiments of the LCSZC instruction may include operation on signed or unsigned data elements of other sizes. (See, for example, FIG. 8b for the signed doubleword embodiment and FIG. 8c for the signed quadword embodiment.)

FIG. 8b shows at least one example of a circuit 801b for performing the LCSZC operation on packed data in one embodiment of the invention. The operation control means 800 processes the control signal for the packed LCSZC instruction. Such a packed LCSZC instruction may be a "TESTPS" instruction which, in an embodiment, indicates that the LCSZC operation is to be performed on four packed 32-bit values.
Each packed 32-bit value may represent, for example, a single precision floating point value. In such an embodiment, it should be understood that only one of the operands (e.g., source 831 or destination 833) need include packed single precision floating point values; the other operand may include, for example, a bit mask.

As in FIG. 8a, the operation control means 800 outputs a signal on enable 880 to control the LCSZC circuit 801b. (One skilled in the art will recognize that the LCSZC circuit 801b illustrated in FIG. 8b may be implemented by activating a subset of the logic elements of the LCSZC circuit 801 illustrated in FIG. 8a.)

The LCSZC circuit 801b includes two sets of AND gates, where each set includes one AND gate for comparing a bit of the source operand with the corresponding bit of the destination operand. In the embodiment illustrated in FIG. 8b, the most significant bits of each of the four 32-bit ("doubleword") data elements are compared. Thus, the first set of AND gates includes gates 8191 through 8194, and the second set of AND gates includes gates 8201 through 8204.

FIG. 8b shows that the MSB value of each of the four 32-bit data elements of the source operand 831 and of each of the four 32-bit data elements of the destination operand 833 is input to one of the first set of AND gates 819. More specifically, in FIG. 8b, bit 127 of the source operand 831 and bit 127 of the destination operand 833 are both inputs to gate 8191; bit 95 of the source operand 831 and bit 95 of the destination operand 833 are both inputs to gate 8192; bit 63 of the source operand 831 and bit 63 of the destination operand 833 are both inputs to gate 8193; and bit 31 of the source operand 831 and bit 31 of the destination operand 833 are both inputs to gate 8194.

FIG. 8b also shows that the MSB value of each of the four 32-bit data elements of the source operand 831 and of each of the four 32-bit data elements of the destination operand 833 is input to one of the second set of AND gates 820. It should be noted that the second set of AND gates 8201 through 8204 receive their inputs after the MSB of each doubleword of the destination operand 833 has been inverted to form its complement (see inverters 844a through 844d).

More specifically, FIG. 8b shows that bit 127 of source operand 831 and the complement of bit 127 of destination operand 833 are both inputs to gate 8201; bit 95 of source operand 831 and the complement of bit 95 of destination operand 833 are both inputs to gate 8202; bit 63 of source operand 831 and the complement of bit 63 of destination operand 833 are both inputs to gate 8203; and bit 31 of source operand 831 and the complement of bit 31 of destination operand 833 are both inputs to gate 8204.

Each of the outputs of AND gates 8191 through 8194 is an input to NAND gate 855. At least one purpose of the NAND gate 855 is to determine whether the result of the AND of the most significant bits of the source and destination is zero (logical Low) for all four doublewords. If so, it inputs a logical High to the zero flag 858.

The outputs of each of the AND gates 8201 through 8204 are input to the NAND gate 859. At least one purpose of the NAND gate 859 is to determine whether the result of the AND of the source bits and the complements of the destination bits is zero (logical Low) for each of the four doublewords.
If so, a logical High is input to the carry flag 860.

Other embodiments of the packed LCSZC instruction that compare the MSB of each of the four doublewords may include operation on a packed signed doubleword value in one operand and a bit mask in the other, operation on unsigned doubleword values of the source and destination, operation on signed doubleword values of the source and destination, or a combination thereof. It should be noted that the present invention is not limited to the above. Alternative embodiments of the LCSZC instruction may include operations that apply to signed or unsigned data elements of other sizes.

FIG. 8c shows at least one example of a circuit 801c for performing the LCSZC operation on packed data in another embodiment. The operation control means 800 processes the control signal for the packed LCSZC instruction. Such a packed LCSZC instruction may be a "TESTPD" instruction which, in an embodiment, indicates that the LCSZC operation is to be performed on two packed double precision (64-bit) floating point values. The operation control means 800 outputs a signal on enable 880 to control the LCSZC circuit 801c. (Those skilled in the art will recognize that the LCSZC circuit 801c illustrated in FIG. 8c may be implemented by activating a subset of the logic elements of the LCSZC circuit 801 illustrated in FIG. 8a.)

Like the circuit 801b described in FIG. 8b, the LCSZC circuit 801c includes two sets of AND gates, where each set includes one AND gate for comparing a bit of the source operand with the corresponding bit of the destination operand. In the embodiment illustrated in FIG. 8c, the most significant bits of each of the two 64-bit ("quadword") data elements are compared. Thus, the first set of AND gates includes gates 8191 and 8193, and the second set of AND gates includes gates 8201 and 8203.

FIG. 8c shows that the MSB value of each of the two 64-bit data elements of the source operand 831 and of each of the two 64-bit data elements of the destination operand 833 is input to the first set of AND gates (8191 and 8193). More particularly, in FIG. 8c, bit 127 of the source operand 831 and bit 127 of the destination operand 833 are inputs to gate 8191, and bit 63 of the source operand 831 and bit 63 of the destination operand 833 are inputs to gate 8193.

FIG. 8c also shows that the MSB value of each of the two 64-bit data elements of the source operand 831 and of each of the two 64-bit data elements of the destination operand 833 is input to the second set of AND gates (8201 and 8203). Note that the second set of AND gates (8201 and 8203) receives the destination values after the MSB of each quadword of the destination operand 833 has been inverted to form its complement.

More specifically, FIG. 8c shows that bit 127 of the source operand 831 and the complement of bit 127 of the destination operand 833 are both inputs to gate 8201, and that bit 63 of the source operand 831 and the complement of bit 63 of the destination operand 833 are both inputs to gate 8203.

The outputs of AND gates 8191 and 8193 are input to NAND gate 853. At least one purpose of NAND gate 853 is to determine whether the result of the AND of the most significant bits of each of the two quadwords of the source and destination is zero (logical Low) for both. If so, a logical High is input to the zero flag 858.

The outputs of AND gates 8201 and 8203 are input to NAND gate 857. At least one purpose of the NAND gate 857 is to determine whether the result of the AND of the most significant bits of each of the two quadwords of the source and the complement of the destination is zero (logical Low) for both.
If so, a logical High is input to the carry flag 860.

Other embodiments of the packed LCSZC instruction that compare the MSB of each of the two quadwords may include operation on unsigned quadword values of the source and destination, operation on signed quadword values of the source and destination, or operation on a combination of both. However, the invention is not limited to these; other alternative embodiments of the LCSZC instruction may include operations that apply to signed or unsigned data elements of other sizes.

As mentioned above, the decoder 165 may recognize and decode control signals received by the processor 109. The control signal may be an operation code for the LCSZC instruction; in this way, the decoder 165 decodes the operation code for the LCSZC instruction.

Referring to FIG. 9, various embodiments of the operation code used to encode the control signal (opcode) for the LCSZC instruction are shown. FIG. 9 shows the format of an instruction 900 in one embodiment of the present invention. The instruction format 900 includes various fields. The operand specifier fields are optional: ModR/M 930, scale-index-base 940, displacement 950, and immediate 960.

Those skilled in the art will appreciate that the format 900 described in FIG. 9 is exemplary, and that other arrangements of data in the instruction code may be used in accordance with the disclosed embodiments. For example, the fields 910, 920, 930, 940, 950, and 960 need not be in the order shown; each may be rearranged to another location, and they need not be adjacent. Also, the field lengths mentioned in this specification should not be regarded as limiting: a field described as being a specific number of bytes may be implemented as a longer or shorter field in other embodiments.
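The field layout just described can be sketched in code. The following Python fragment, a minimal illustrative sketch rather than a definitive decoder, splits a byte sequence into prefix, escape, and instruction-specific opcode fields in the spirit of format 900; the example byte values, including the 0x17 instruction-specific byte and the trailing ModR/M byte, are assumptions for illustration only.

```python
def split_opcode_fields(code: bytes) -> dict:
    """Split the leading bytes of a hypothetical encoding into the
    fields of format 900: optional prefix (910), two-byte escape
    (118c), instruction-specific opcode byte (925), and the rest
    (ModR/M, SIB 940, displacement 950, immediate 960)."""
    fields = {}
    i = 0
    if code[i] == 0x66:                   # prefix field 910, decoded as part of the opcode
        fields["prefix"] = code[i]
        i += 1
    if code[i:i + 2] == b"\x0f\x38":      # three-byte escape field 118c
        fields["escape"] = code[i:i + 2]
        i += 2
        fields["opcode"] = code[i]        # instruction-specific opcode byte 925
        i += 1
    fields["rest"] = code[i:]             # remaining operand-specifier fields
    return fields

# Illustrative instruction bytes: prefix, escape, opcode, ModR/M.
example = split_opcode_fields(bytes([0x66, 0x0F, 0x38, 0x17, 0xC1]))
```

Note how the prefix byte is treated as part of the opcode itself here, matching the role described for the prefix field 910 below, rather than as a mere announcement that an opcode follows.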
The term "byte" is used herein to refer to an 8-bit grouping, but other embodiments may use groupings of other sizes, including 4 bits, 16 bits, and 32 bits.

As used herein for a particular example, the opcode of an instruction indicating a desired operation (e.g., the LCSZC instruction) may include particular values in the fields of instruction format 900. Such an instruction is sometimes said to be instantiated as an "effective instruction", and the bit values for an effective instruction are often referred to herein as an "instruction code".

For each instruction code, the corresponding decoded instruction code uniquely directs the execution unit (e.g., 130 of FIG. 1a) to perform the operation that responds to that instruction code. The decoded instruction code may comprise one or more micro-operations.

The contents of the opcode field 920 identify the operation. In at least one embodiment, the opcode field 920 for the embodiments of the LCSZC instruction described herein is three bytes in length, and may include one, two, or three bytes of information. In at least one embodiment, a three-byte escape opcode value in the two-byte escape field 118c of the opcode field 920 is combined with the instruction-specific opcode in the third byte 925 of the opcode field 920 to identify the LCSZC operation. The third byte 925 is referred to herein as the instruction-specific opcode.

FIG. 9 shows a second embodiment 928 of the instruction format for the LCSZC instruction, in which the three-byte escape opcode value in the two-byte field 118c of the opcode field 920 is combined with the contents of the prefix field 910 and the contents of the instruction-specific opcode field 925 of the opcode field 920 to identify the LCSZC operation.

In at least one embodiment, the prefix value 0x66 is placed in the prefix field 910 and is used as part of the instruction opcode to define the desired operation.
That is, the value of the prefix field 910 is decoded as part of the opcode, rather than merely indicating that an opcode follows. In at least one embodiment, for example, the prefix value 0x66 is used to indicate that the destination and source operands of the LCSZC instruction reside in 128-bit Intel(R) SSE2 XMM registers. Other prefixes may be used in the same way. However, in at least some embodiments of the LCSZC instruction, prefixes may instead serve their conventional role of augmenting the opcode or qualifying the opcode under certain operating conditions.

The first embodiment 926 and the second embodiment 928 of the instruction format both contain a three-byte escape opcode field 118c and an instruction-specific opcode field 925. The three-byte escape opcode field 118c is, in at least one embodiment, two bytes in length. Instruction format 926 uses one of four special escape opcodes, called three-byte escape opcodes. A three-byte escape opcode is two bytes in length and informs the decoder hardware that the instruction utilizes a third byte of the opcode field 920 to define the instruction. The three-byte escape opcode field 118c may be located anywhere in the instruction opcode and need not be the highest-order or lowest-order field of the instruction.

In at least one embodiment, at least four three-byte escape opcode values are defined as 0x0F3y, where y is 0x8, 0x9, 0xA, or 0xB. A specific embodiment of an LCSZC instruction opcode containing the value 0x0F38 as the three-byte escape opcode value is disclosed herein. Such disclosure should not be construed as limiting; other embodiments may use other escape opcode values.

Table 3 below shows examples of LCSZC instruction codes using a prefix and a three-byte escape opcode.

In at least one embodiment, the value of the source or the destination operand may be used as a mask.
The programmer's choice of whether to use the source or the destination operand as the mask value may be determined, at least in part, by the desired operation. Using the second operand (the source) as the mask value, for example, the resulting operation is: "set ZF if everything under the mask is '0'; set CF if everything under the mask is '1'". Using the first operand (the destination) as the mask value, on the other hand, the resulting operation is: "set ZF if everything under the mask is '1'; set CF if everything under the mask is '0'".

Additional instructions would be needed to emulate at least some embodiments of the packed LCSZC instructions described above in connection with FIGS. 7c, 7d, 8b, and 8c, which adds machine cycles of latency to the operation. For example, the pseudocode shown in Table 4 illustrates that an instruction sequence using the PTEST instruction can use fewer instructions than an equivalent sequence that does not include the PTEST instruction.

The pseudocode in Table 4 shows that the described LCSZC instruction can be used to improve the performance of software code. As a result, the LCSZC instruction can be used in general-purpose processors to improve the performance of many algorithms relative to prior art instructions.

[Alternative Embodiments]

The embodiments described above compare the MSBs of 32-bit data elements and of 64-bit data elements in the packed embodiments of the LCSZC instruction. Alternative embodiments may use different size inputs, different sized data elements, and/or compare different bits (e.g., the LSB of each data element). In addition, although Source1 and Dest each contain 128 bits of data in the embodiments described above, alternative embodiments can operate on packed data with more or fewer bits; for example, one alternative embodiment operates on packed data having 64 bits of data.
Moreover, the bits compared by the LCSZC instruction need not be in the same bit position of each packed data element.

While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described. The method and apparatus of the present invention may be modified in various ways within the scope of the appended claims; the specification is thus to be regarded as illustrative rather than limiting.

The foregoing description is intended to illustrate preferred embodiments of the present invention. As is clear from the above discussion, particularly in this rapidly developing technical field where further advancements cannot easily be foreseen, the invention may be modified in arrangement and detail by those skilled in the art without departing from the principles of the present invention, within the scope of the appended claims.
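The flag-setting behavior of the LCSZC embodiments described above can be summarized in a small model. The following Python sketch is an illustrative model of the described semantics only, not processor code: with no MSB positions given it models the full 128-bit comparison (FIG. 7b), while passing the per-element MSB positions models the TESTPS-style (bits 31, 63, 95, 127) and TESTPD-style (bits 63, 127) packed variants.

```python
MASK128 = (1 << 128) - 1  # 128-bit operand width used in the embodiments above

def lcszc_flags(src, dest, msb_positions=None):
    """Return (ZF, CF) for the LCSZC operation on 128-bit operands.

    Intermediate Result 1 is src AND dest; Intermediate Result 2 is
    src AND (NOT dest). ZF is set when all examined bits of the first
    result are zero, and CF when all examined bits of the second
    result are zero. The operands themselves are never modified.
    """
    and1 = src & dest                    # Intermediate Result 1
    and2 = src & (~dest & MASK128)       # Intermediate Result 2
    if msb_positions is not None:        # packed variants examine only element MSBs
        pick = sum(1 << b for b in msb_positions)
        and1 &= pick
        and2 &= pick
    zf = int(and1 == 0)
    cf = int(and2 == 0)
    return zf, cf
```

As a usage sketch, `lcszc_flags(src, dest)` models the full-width comparison, while `lcszc_flags(src, dest, (31, 63, 95, 127))` models the four-doubleword MSB comparison of FIG. 8b and `lcszc_flags(src, dest, (63, 127))` the two-quadword MSB comparison of FIG. 8c.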
Techniques and mechanisms for interconnecting circuitry disposed on a transparent substrate. In an embodiment, a multilayer circuit is bonded to the transparent substrate, the multilayer circuit including conductive traces that are variously offset at different respective levels from a side of the transparent substrate. Circuit components, such as packaged or unpackaged integrated circuit devices, are coupled each to respective input and/or output (IO) contacts of the multilayer circuit, where the conductive traces and the IO contacts interconnect the circuit components with each other. In another embodiment, the multilayer circuit is a flexible circuit that is bent to interconnect circuit components which are disposed on opposite respective sides of the transparent substrate.
CLAIMS

What is claimed is:

1. A device comprising:
a transparent substrate;
a first multilayer circuit bonded at a surface of the transparent substrate;
a first circuit component coupled, via first traces at a first surface portion of the transparent substrate, to first input/output (IO) contacts of the first multilayer circuit; and
a second circuit component coupled, via second traces at a second surface portion of the transparent substrate, to second IO contacts of the first multilayer circuit.

2. The device of claim 1, wherein the first surface portion and the second surface portion are on opposite respective sides of the transparent substrate.

3. The device of claim 1, wherein the first multilayer circuit includes traces each extending in a body of a flexible dielectric material adhered directly to a surface of the transparent substrate.

4. The device of claim 3, wherein the dielectric material comprises a polyamide-imide compound.

5. The device of claim 1, wherein the first IO contacts or the second IO contacts are at a first side of the first multilayer circuit, the first multilayer circuit further comprising one or more other IO contacts at a second side of the first multilayer circuit opposite the first side.

6. The device of claim 5, further comprising a third circuit component coupled to the one or more IO contacts, wherein respective ones of the first IO contacts and the second IO contacts are coupled to each other via the third circuit component and the one or more IO contacts.

7. The device of claim 1, further comprising:
a second multilayer circuit bonded at the surface of the transparent substrate, the second multilayer circuit coupled to the first multilayer circuit via third traces at a third surface portion of the transparent substrate.

8.
The device of claim 1, further comprising:
a third circuit component coupled, via third traces at a third surface portion of the transparent substrate, to third IO contacts of the first multilayer circuit, wherein the third circuit component is interconnected with one of the first circuit component and the second circuit component via the first multilayer circuit.

9. A system comprising:
a transparent substrate;
a first multilayer circuit bonded at a surface of the transparent substrate;
a first circuit component coupled, via first traces at a first surface portion of the transparent substrate, to first input/output (IO) contacts of the first multilayer circuit;
a second circuit component coupled, via second traces at a second surface portion of the transparent substrate, to second IO contacts of the first multilayer circuit; and
one or more display elements disposed on the transparent substrate, the one or more display elements coupled to the first circuit component and the second circuit component to generate a display based on a signal communicated via the first multilayer circuit.

10. The system of claim 9, wherein the first surface portion and the second surface portion are on opposite respective sides of the transparent substrate.

11. The system of claim 9, wherein the first multilayer circuit includes traces each extending in a body of a flexible dielectric material adhered directly to a surface of the transparent substrate.

12. The system of claim 11, wherein the dielectric material comprises a polyamide-imide compound.

13. The system of claim 9, wherein the first IO contacts or the second IO contacts are at a first side of the first multilayer circuit, the first multilayer circuit further comprising one or more other IO contacts at a second side of the first multilayer circuit opposite the first side.

14.
The system of claim 13, further comprising a third circuit component coupled to the one or more IO contacts, wherein respective ones of the first IO contacts and the second IO contacts are coupled to each other via the third circuit component and the one or more IO contacts.

15. The system of claim 9, further comprising:
a second multilayer circuit bonded at the surface of the transparent substrate, the second multilayer circuit coupled to the first multilayer circuit via third traces at a third surface portion of the transparent substrate.

16. The system of claim 9, further comprising:
a third circuit component coupled, via third traces at a third surface portion of the transparent substrate, to third IO contacts of the first multilayer circuit, wherein the third circuit component is interconnected with one of the first circuit component and the second circuit component via the first multilayer circuit.

17. A method comprising:
bonding a first multilayer circuit at a transparent substrate;
coupling a first circuit component, via first traces at a first surface portion of the transparent substrate, to first input/output (IO) contacts of the first multilayer circuit; and
coupling a second circuit component, via second traces at a second surface portion of the transparent substrate, to second IO contacts of the first multilayer circuit.

18. The method of claim 17, wherein the first surface portion and the second surface portion are coupled to the first multilayer circuit at opposite respective sides of the transparent substrate.

19. The method of claim 17, wherein the first multilayer circuit includes traces each extending in a body of a flexible dielectric material adhered directly to a surface of the transparent substrate.

20. The method of claim 19, wherein the dielectric material comprises a polyamide-imide compound.

21.
The method of claim 17, further comprising coupling a third circuit component to one or more IO contacts of the first multilayer circuit, wherein respective ones of the first IO contacts and the second IO contacts are coupled to each other via the third circuit component and the one or more IO contacts.

22. The method of claim 17, further comprising:
bonding a second multilayer circuit at the surface of the transparent substrate; and
coupling the second multilayer circuit to the first multilayer circuit via third traces at a third surface portion of the transparent substrate.

23. The method of claim 17, further comprising:
coupling a third circuit component, via third traces at a third surface portion of the transparent substrate, to third IO contacts of the first multilayer circuit, wherein the third circuit component is interconnected with one of the first circuit component and the second circuit component via the first multilayer circuit.
DEVICE, SYSTEM AND METHOD TO INTERCONNECT CIRCUIT COMPONENTS ON A TRANSPARENT SUBSTRATE

BACKGROUND

1. Technical Field

Embodiments of the present invention relate to the field of integrated circuit devices, and more particularly, to the interconnection of circuit components on a transparent substrate.

2. Background Art

Successive generations of processors, microcontrollers, drivers and other microelectronic devices continue to scale in size, while supporting increasing levels of computation and input/output (IO) capability. These advancements pose new challenges at least with respect to effective interconnection and communication of components with each other. One area where these challenges are faced is Chip-on-Glass (COG) technology, which is often used for smartphone and small tablet display solutions.

COG technologies typically bond unpackaged (bare die) integrated circuit (IC) components directly onto a display glass, where the IC components operate to control display functionality. A common interconnect technique for COG is to couple components at opposite ends of traces that are printed on a glass surface. However, these printed traces, which usually comprise Indium Tin Oxide (ITO), are prone to signal degradation problems. As a result, the distances of printed interconnects in COG systems tend to be somewhat limited (typically not more than a few millimeters). The number and variety of COG systems continues to grow with increasing demand for wearables, smartphones, tablets and the like. Due to this growth, there is expected to be an increasing premium placed on improvements to the interconnection of components in COG systems.

BRIEF DESCRIPTION OF THE DRAWINGS

The various embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:

FIG. 1 shows cross-sectional views of a system to interconnect circuitry on a transparent substrate according to an embodiment.

FIG.
2 is a flow diagram illustrating elements of a method for interconnecting circuitry according to an embodiment.

FIG. 3 shows a cross-sectional side view of a system to interconnect circuitry according to an embodiment.

FIG. 4 shows cross-sectional side views of processing to interconnect circuitry with a multilayer circuit according to an embodiment.

FIG. 5 is a cross-sectional side view of a system to interconnect circuitry on a transparent substrate according to an embodiment.

FIG. 6 is a cross-sectional side view of a system to interconnect circuitry with a multilayer circuit according to an embodiment.

FIG. 7 shows plan views of respective devices including interconnect structures according to an embodiment.

FIG. 8 is a functional block diagram illustrating elements of a computer device according to an embodiment.

FIG. 9 is a functional block diagram illustrating elements of a computer system according to an embodiment.

DETAILED DESCRIPTION

Embodiments discussed herein variously provide mechanisms and/or techniques to facilitate communication between circuit components that are coupled to or otherwise disposed, directly or indirectly, on a transparent substrate. In an embodiment, a device (referred to herein as a "multilayer circuit") is bonded to a transparent substrate - e.g., bonded at a side of the transparent substrate - where the device includes conductive traces that are variously offset at different respective levels from a side of the transparent substrate. For example, such traces may extend through an insulating dielectric of the multilayer circuit, wherein the dielectric is adhered directly to at least one side of a glass, plastic or other transparent substrate.
In one embodiment, the multilayer circuit is flexible (e.g., at least prior to being bonded at a glass surface) and may bend around an edge of a substrate to interconnect components on opposite sides of that substrate.

Although some embodiments are not limited in this regard, a multilayer circuit may function as a bridge between ITO traces (or other such interconnect structures) that are variously patterned on one or more substrate surfaces. As compared to ITO traces, traces of the multilayer circuit in an embodiment may exhibit better signal communication characteristics - e.g., due to trace materials and/or dimensions, electromagnetic shielding provided by the multilayer circuit and/or the like. Due in part to such characteristics, some embodiments allow for signaling via multilayer circuit traces that are relatively long (e.g., 1 cm or more) - e.g., as compared to the length of printed ITO traces used in existing COG interconnect solutions.

FIG. 1 illustrates elements of a system 100 to provide coupling between circuit components according to an embodiment. System 100 is one example of an embodiment wherein interconnect structures are bonded to at least one surface of a transparent substrate. Certain features of various embodiments are described herein with reference to interconnect structures of a chip-on-glass system including microelectronic devices variously coupled to a glass substrate. However, such description may be extended to additionally or alternatively apply to interconnect structures bonded to any of a variety of other transparent substrates.

In the illustrative embodiment shown, system 100 includes transparent substrate 110 and a source 120 and a sink 130 each coupled to respective surface portions 112, 114 of transparent substrate 110. Surface portions 112, 114 may be on the same side of transparent substrate 110 or, for example, on opposite respective sides of transparent substrate 110.
Source 120 represents any of a variety of microelectronic devices that may create, relay or otherwise provide one or more signals or voltages to be communicated along a surface of transparent substrate 110. By way of illustration and not limitation, source 120 may include an unpackaged (or packaged) microelectronic device such as a processor, controller, memory device, system-on-chip and/or the like. Correspondingly, sink 130 may include any of a variety of microelectronic devices configured to receive such voltages and/or signals from source 120. In one illustrative embodiment, sink 130 is a device to drive elements of a light emitting diode (LED) - e.g., organic LED (OLED) - display or other type of display device.

System 100 may include a multilayer circuit 140 to facilitate communication between source 120 and sink 130 - e.g., wherein interconnect structures 122 provide connectivity between source 120 and multilayer circuit 140 and wherein interconnect structures 132 provide connectivity between sink 130 and multilayer circuit 140. In one embodiment, interconnect structures 122 include one or more conductive traces directly deposited on a transparent medium of transparent substrate 110 - e.g., wherein the one or more traces include patterned ITO structures variously extending each from a respective input and/or output (IO) contact of source 120 to a corresponding IO contact of multilayer circuit 140. Alternatively or in addition, interconnect structures 132 may include one or more such traces - e.g., including one or more ITO traces variously coupling IO contacts of sink 130 each to a corresponding IO contact of multilayer circuit 140.

In an embodiment, multilayer circuit 140 includes an insulating dielectric that is soldered, adhered and/or otherwise bonded to one or more surfaces of transparent substrate 110 (e.g., via the illustrative adhesive 150 shown).
For example, the cross-sectional perspective view shown at inset 145 illustrates one example implementation of multilayer circuit 140 in an embodiment. As shown in inset 145, multilayer circuit 140 may include dielectric body 160 comprising a polyamide-imide (PAI) compound and/or any of a variety of other insulator materials.

Interconnect structures, such as the illustrative traces 172, 182 shown, may be variously disposed between opposing sides 162, 164 of dielectric body 160. Some or all such interconnect structures may variously include or extend to (or otherwise be coupled to) respective IO structures variously disposed each at a respective one of sides 162, 164. In the illustrative embodiment shown, trace 172 includes or couples to a via structure that extends to a contact 170 at side 164 - e.g., wherein trace 182 includes or couples to another via structure extending to a contact 180 at side 164. In such an embodiment, interconnect structures may be variously disposed at different respective levels of dielectric body 160. By way of illustration and not limitation, trace 172 may extend to communicate a signal or voltage along a first plane in parallel with the x-y plane shown, wherein trace 182 is to communicate another signal or voltage substantially along a second plane in parallel with the first plane. In such an embodiment, trace 172 may be at a height (along the z-axis shown) which is different than that of trace 182 - e.g., wherein trace 172 is closer to side 162 than is trace 182. In one example embodiment, trace 172 is to communicate a reference potential (for example, a ground signal) that contributes to shielding that protects signal integrity of a communication sent via trace 182. Dielectric body 160 may include more, fewer and/or differently arranged interconnect structures formed therein, in different embodiments.

FIG. 2 illustrates operations of a method 200 to facilitate connectivity via circuit structures on a transparent substrate according to an embodiment.
Method 200 may include operations to manufacture and/or operate a system having features such as those of system 100. For example, operations 205 of method 200 may provide for coupling of a circuit component to another such circuit component via a multilayer circuit.

In an embodiment, operations 205 include, at 210, bonding the multilayer circuit at the transparent substrate. The bonding at 210 may include adhering a first end of the multilayer circuit - e.g., with an anisotropic conductive film (ACF) - to a first portion of a surface of the transparent substrate (for brevity, "first surface portion"). Such bonding may further comprise adhering a second end of the multilayer circuit to a second surface portion of the transparent substrate - e.g., wherein the first end and the second end are opposite respective ends of a flexible circuit. The bonding at 210 may include any of a variety of adhesive materials adapted from conventional COG techniques for securing components to a transparent substrate.

Although some embodiments are not limited in this regard, the first surface portion and the second surface portion may be on different respective sides - e.g., opposite sides - of the transparent substrate. In one embodiment, the first IO contacts and the second IO contacts are electrically coupled to one another - e.g., wherein a signal trace of the multilayer circuit is directly coupled to each of a respective one of the first IO contacts and a respective one of the second IO contacts. In another embodiment, one of the first IO contacts and one of the second IO contacts are coupled each to a different respective one of other IO contacts of the multilayer circuit.
For example, the multilayer circuit may accommodate coupling to another device via such other IO contacts, wherein the other device - e.g., including a passive circuit element and/or active circuitry - is thereby coupled between the first IO contacts and the second IO contacts.

In some embodiments, operations 205 further include, at 220, coupling a first circuit component (e.g., including a first IC die or other microelectronic device), via first traces at a first surface portion of the transparent substrate, to first IO contacts of the multilayer circuit. Similarly, operations 205 may include, at 230, coupling a second circuit component (e.g., including a second IC die or other microelectronic device), via second traces at a second surface portion of the transparent substrate, to second IO contacts of the multilayer circuit. The coupling at 220, 230 may interconnect the first circuit component and the second circuit component with one another, and may include bonding a circuit component directly to a transparent substrate. For example, the bonding at 220, 230 may include bonding IO contacts of the circuit component to IO contacts of the multilayer circuit with ACF or other such conductive adhesive material.

Although some embodiments are not limited in this regard, method 200 may additionally or alternatively include operations of a device that is manufactured at least in part by operations 205. For example, method 200 may further comprise, at 240, communicating a first signal or voltage via one of the first IO contacts and the second IO contacts. The communicating at 240 may include communicating a signal or voltage between the first circuit component and the second circuit component.

FIG. 3 shows a cross-sectional side view of a system 300 to provide interconnect structures on a transparent substrate according to an embodiment. System 300 may include features of system 100, for example.
Manufacture and/or operation of system 300 may be according to method 200.

In the illustrative embodiment of system 300, a multilayer circuit 330 is bonded to a surface portion of a transparent substrate 310 via the illustrative adhesive 340 shown (e.g., the adhesive 340 including an ACF). Multilayer circuit 330 may include dielectric 332 (e.g., comprising a flexible dielectric material such as PAI) and layers of metallization extending therein. By way of illustration and not limitation, IO contacts 334, 336 of multilayer circuit 330 may each be coupled to a different respective conductive trace within dielectric 332. Such traces may each extend along different respective ones of parallel planes at various offsets from a surface of transparent substrate 310.

In an embodiment, system 300 further comprises, or is to couple to, a microelectronic device 320 that is to operate with multilayer circuit 330. For example, conductive traces 324 (e.g., ITO traces) may be disposed on the surface of transparent substrate 310, the traces 324 to variously couple contacts 334, 336 each to a respective contact of hardware interface 322 of microelectronic device 320. Hardware interface 322 may include conductive bumps, balls, pads or other such contacts that, for example, are coupled to respective ones of traces 324 via an ACF or other such adhesive (not shown). Microelectronic device 320 may comprise one or more IC dies which, for example, are to provide processor, controller, memory and/or other functionality. However, some embodiments are not limited with respect to a particular functionality that may be provided by microelectronic device 320. In an embodiment, multilayer circuit 330 is to facilitate connectivity of microelectronic device 320 to another microelectronic device (not shown) that is adhered and/or otherwise bonded - directly or indirectly - to transparent substrate 310.

FIG.
4 illustrates stages 400-403 of processing to interconnect circuitry on a transparent substrate according to an embodiment. The processing illustrated by stages 400-403 may be according to method 200, for example. In an embodiment, such processing may interconnect structures of system 100, system 300 or the like. The particular order by which devices are variously coupled to one another during stages 400-403 is merely illustrative, and may be different in other embodiments.

As shown at stage 400, a transparent substrate 410 may include surface portions 412, 416 and another surface portion 414 between surface portions 412, 416. In the illustrative embodiment shown, surface portions 412, 414, 416 are all on the same side of transparent substrate 410. However, other embodiments are not limited in this regard. Surface portions 412, 416 may have respective conductive traces (e.g., comprising patterned ITO) variously disposed thereon. Surface portion 414 may facilitate coupling of a multilayer circuit to transparent substrate 410, the multilayer circuit to variously interconnect such traces with one another.

For example, as shown at stage 401, an adhesive 420 may be deposited on surface portion 414 and a multilayer circuit 430 aligned over surface portion 414 and adhesive 420. Adhesive 420 may include an ACF and/or a patterned combination of conductive adhesive structures and nonconductive adhesive structures - e.g., where conductive structures thereof facilitate electrical interconnection of IO contacts 432, 434 of multilayer circuit 430 each to a respective trace at surface portion 412. Alternatively or in addition, conductive adhesive structures may facilitate electrical interconnection of contacts 436, 438 of multilayer circuit 430 each to a respective trace at surface portion 416.
By way of illustration and not limitation, multilayer circuit 430 may be bonded to transparent substrate 410 (as shown at stage 402) via adhesive 420 to provide a bridge which interconnects respective traces on surface portions 412, 416. In some embodiments, one or more microelectronic devices may be coupled to multilayer circuit 430 - e.g., directly or via transparent substrate 410. For example, as shown at stage 403, microelectronic devices 440, 450 may be coupled to transparent substrate 410 at surface portions 412, 416, respectively.

FIG. 5 illustrates elements of a system 500 to provide interconnect structures on a transparent substrate according to an embodiment. System 500 may include one or more features of system 100, system 300 or the like. In an embodiment, manufacture or operation of system 500 is according to method 200.

In the illustrative embodiment of system 500, transparent substrate 510 has a side 512 on which is bonded a portion of a multilayer circuit 540. An opposite side 514 of transparent substrate 510 has bonded thereon another portion (e.g., an opposite end) of multilayer circuit 540. For example, multilayer circuit 540 may include a flexible circuit that is bent to extend around a side 516 of transparent substrate 510. In an embodiment, multilayer circuit 540 interconnects microelectronic devices which are variously disposed on the opposite respective sides 512, 514 of transparent substrate 510. By way of illustration and not limitation, microelectronic devices 520, 530 of system 500 may be variously adhered or otherwise bonded to sides 512, 514, respectively.
In one illustrative embodiment, microelectronic devices 520, 530 provide controller, driver, graphics processor and/or other integrated circuit functionality to operate one or more display elements (such as the illustrative LED display elements 518 shown) that are disposed on substrate 510. Conductive traces (e.g., including ITO traces) variously disposed on sides 512, 514 may facilitate connection between IO contacts of multilayer circuit 540 and respective hardware interfaces 522, 532 of microelectronic devices 520, 530. Such IO contacts of multilayer circuit 540 may be variously coupled to one another by interconnects including, for example, the illustrative traces 542, 544 shown. In an embodiment, interconnects of multilayer circuit 540 extend at different respective levels - e.g., wherein a portion of trace 544 is closer to transparent substrate 510 than an overlapping portion of trace 542. The multilayer arrangement of such traces may facilitate longer traces and/or more closely spaced traces, as compared to conventional chip-on-glass interconnect techniques. Alternatively or in addition, such a multilayer arrangement of traces may facilitate shielding (e.g., with a ground or other reference potential) to help protect signal integrity.

FIG. 6 illustrates features of a system 600 to interconnect circuitry on a transparent substrate, according to an embodiment. System 600 may include one or more features of system 100, system 300 or the like. In an embodiment, system 600 is manufactured and/or operated according to method 200. In the illustrative embodiment shown, a side 612 of a transparent substrate 610 has formed thereon various ITO traces, wherein microelectronic devices 620, 630 and a multilayer circuit 640 of system 600 are coupled to side 612. In another embodiment, microelectronic devices 620, 630 are disposed on opposite sides of transparent substrate 610.
IO contacts of multilayer circuit 640 may be variously coupled to respective IO contacts of microelectronic device 620 and microelectronic device 630. In the illustrative embodiment shown, ITO traces patterned on side 612 variously couple respective IO contacts of microelectronic devices 620, 630 each with a corresponding IO contact of multilayer circuit 640. In other embodiments, microelectronic device 620 and/or microelectronic device 630 may instead be mounted directly onto multilayer circuit 640.

In one embodiment, opposite sides of multilayer circuit 640 have respective IO contacts variously disposed therein or thereon. By way of illustration and not limitation, regions 642, 644 may have formed therein respective via structures to variously connect IO contacts each with a respective metallization layer of multilayer circuit 640. Such interconnect structures may provide for coupling of one or more circuit components (such as the illustrative circuit components 650, 652) directly onto multilayer circuit 640. In the illustrative embodiment shown, microelectronic devices 620, 630 are coupled to one another via IO contacts at various sides of multilayer circuit 640, via traces extending in multilayer circuit 640, and via one or both of circuit components 650, 652. Circuit components 650, 652 may include one or more passive circuit elements such as capacitors, inductors and/or the like. Alternatively or in addition, circuit components 650, 652 may include active circuit components - e.g., where circuit components 650, 652 include an IC device.

FIG. 7 illustrates various top plan views of devices 700, 730 each to provide respective interconnect structures on a transparent substrate according to a corresponding embodiment.
Devices 700, 730 may include respective features of system 100, system 300, system 500, system 600 or the like - e.g., wherein such features are provided according to method 200.

In the illustrative embodiment of device 700, conductive traces 706, 708 are variously patterned on a side of a transparent substrate 710. Traces 706 may variously extend between IO contacts of a multilayer circuit 720 and IO contacts of a hardware interface 702. Alternatively or in addition, traces 708 may variously extend between other IO contacts of multilayer circuit 720 and IO contacts of a hardware interface 704. In the example embodiment of device 700, multilayer circuit 720 may variously interconnect hardware interfaces 702, 704 each with a respective one or more other microelectronic devices (not shown). For example, multilayer circuit 720 may only indirectly interconnect hardware interfaces 702, 704 with one another via another IC device (not shown) that is disposed on substrate 710. The other IC device may be coupled to multilayer circuit 720 via respective traces that are formed at a surface of substrate 710.

In the illustrative embodiment of device 730, traces 736, 738 are variously patterned on a side of a transparent substrate 740. Furthermore, multilayer circuit 750 and multilayer circuit 754 may be variously bonded each to a side of transparent substrate 740. Traces 736 may variously couple IO contacts of multilayer circuit 750 to a hardware interface 732 disposed on transparent substrate 740 - e.g., wherein traces 738 variously couple other IO contacts of multilayer circuit 750 each to a hardware interface 734 on transparent substrate 740. In such an embodiment, the side of transparent substrate 740 has further disposed thereon traces 752 to couple still other IO contacts of multilayer circuit 750 each to a corresponding IO contact of multilayer circuit 754.
Accordingly, hardware interfaces 732, 734 may be variously interconnected each to a respective one or more other microelectronic devices (not shown) that are included in, or are to couple to, device 730. Alternatively or in addition, hardware interfaces 732, 734 may be interconnected to one another via multilayer circuit 750 (and, in some embodiments, via multilayer circuit 754).

In the illustrative embodiments shown by FIG. 7, multilayer circuits are variously shown as having respective footprints which are substantially rectilinear. However, a multilayer circuit bonded to a transparent substrate may have any of a variety of shapes (e.g., one of a T-shape, L-shape, U-shape or the like), in different embodiments. Moreover, one end of multilayer circuit 720 is shown as providing connectivity to two hardware interfaces 702, 704 (where an end of multilayer circuit 750 is shown as providing connectivity to two hardware interfaces 732, 734). However, such a multilayer circuit may provide such connectivity to more, fewer and/or differently arranged hardware interfaces, in different embodiments.

FIG. 8 illustrates a computing device 800 in accordance with one embodiment. The computing device 800 houses a board 802. The board 802 may include a number of components, including but not limited to a processor 804 and at least one communication chip 806. The processor 804 is physically and electrically coupled to the board 802. In some implementations the at least one communication chip 806 is also physically and electrically coupled to the board 802. In further implementations, the communication chip 806 is part of the processor 804.

Depending on its applications, computing device 800 may include other components that may or may not be physically and electrically coupled to the board 802.
These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 806 enables wireless communications for the transfer of data to and from the computing device 800. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 806 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 800 may include a plurality of communication chips 806. For instance, a first communication chip 806 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 806 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 804 of the computing device 800 includes an integrated circuit die packaged within the processor 804.
The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 806 also includes an integrated circuit die packaged within the communication chip 806.

In various implementations, the computing device 800 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 800 may be any other electronic device that processes data.

Some embodiments may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to an embodiment. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices, etc.), a machine (e.g., computer) readable transmission medium (electrical, optical, acoustical or other form of propagated signals (e.g., infrared signals, digital signals, etc.)), etc.

FIG. 9 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 900 within which a set of instructions, for causing the machine to perform any one or more of the methodologies described herein, may be executed.
In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines (e.g., computers) that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies described herein.

The exemplary computer system 900 includes a processor 902, a main memory 904 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 906 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory 918 (e.g., a data storage device), which communicate with each other via a bus 930.

Processor 902 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 902 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets.
Processor 902 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processor 902 is configured to execute the processing logic 926 for performing the operations described herein.

The computer system 900 may further include a network interface device 908. The computer system 900 also may include a video display unit 910 (e.g., a liquid crystal display (LCD), a light emitting diode display (LED), or a cathode ray tube (CRT)), an alphanumeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse), and a signal generation device 916 (e.g., a speaker).

The secondary memory 918 may include a machine-accessible storage medium (or more specifically a computer-readable storage medium) 932 on which is stored one or more sets of instructions (e.g., software 922) embodying any one or more of the methodologies or functions described herein. The software 922 may also reside, completely or at least partially, within the main memory 904 and/or within the processor 902 during execution thereof by the computer system 900, the main memory 904 and the processor 902 also constituting machine-readable storage media. The software 922 may further be transmitted or received over a network 920 via the network interface device 908.

While the machine-accessible storage medium 932 is shown in an exemplary embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any of one or more embodiments. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.

In one implementation, a device comprises a transparent substrate, a first multilayer circuit bonded at a surface of the transparent substrate, a first circuit component coupled, via first traces at a first surface portion of the transparent substrate, to first input/output (IO) contacts of the first multilayer circuit, and a second circuit component coupled, via second traces at a second surface portion of the transparent substrate, to second IO contacts of the first multilayer circuit.

In one embodiment, the first surface portion and the second surface portion are on opposite respective sides of the transparent substrate. In another embodiment, the first multilayer circuit includes traces each extending in a body of a flexible dielectric material adhered directly to a surface of the transparent substrate. In another embodiment, the dielectric material comprises a polyamide-imide compound. In another embodiment, the first IO contacts or the second IO contacts are at a first side of the first multilayer circuit, the first multilayer circuit further comprising one or more other IO contacts at a second side of the first multilayer circuit opposite the first side. In another embodiment, the device further comprises a third circuit component coupled to the one or more IO contacts, wherein respective ones of the first IO contacts and the second IO contacts are coupled to each other via the third circuit component and the one or more IO contacts.
In another embodiment, the device further comprises a second multilayer circuit bonded at the surface of the transparent substrate, the second multilayer circuit coupled to the first multilayer circuit via third traces at a third surface portion of the transparent substrate. In another embodiment, the device further comprises a third circuit component coupled, via third traces at a third surface portion of the transparent substrate, to third IO contacts of the first multilayer circuit, wherein the third circuit component is interconnected with one of the first circuit component and the second circuit component via the first multilayer circuit.

In another implementation, a system comprises a transparent substrate, a first multilayer circuit bonded at a surface of the transparent substrate, a first circuit component coupled, via first traces at a first surface portion of the transparent substrate, to first input/output (IO) contacts of the first multilayer circuit, a second circuit component coupled, via second traces at a second surface portion of the transparent substrate, to second IO contacts of the first multilayer circuit, and one or more display elements disposed on the transparent substrate, the one or more display elements coupled to the first circuit component and the second circuit component to generate a display based on a signal communicated via the first multilayer circuit.

In one embodiment, the first surface portion and the second surface portion are on opposite respective sides of the transparent substrate. In another embodiment, the first multilayer circuit includes traces each extending in a body of a flexible dielectric material adhered directly to a surface of the transparent substrate. In another embodiment, the dielectric material comprises a polyamide-imide compound.
In another embodiment, the first IO contacts or the second IO contacts are at a first side of the first multilayer circuit, the first multilayer circuit further comprising one or more other IO contacts at a second side of the first multilayer circuit opposite the first side. In another embodiment, the system further comprises a third circuit component coupled to the one or more IO contacts, wherein respective ones of the first IO contacts and the second IO contacts are coupled to each other via the third circuit component and the one or more IO contacts. In another embodiment, the system further comprises a second multilayer circuit bonded at the surface of the transparent substrate, the second multilayer circuit coupled to the first multilayer circuit via third traces at a third surface portion of the transparent substrate. In another embodiment, the system further comprises a third circuit component coupled, via third traces at a third surface portion of the transparent substrate, to third IO contacts of the first multilayer circuit, wherein the third circuit component is interconnected with one of the first circuit component and the second circuit component via the first multilayer circuit.

In another implementation, a method comprises bonding a first multilayer circuit at a transparent substrate, coupling a first circuit component, via first traces at a first surface portion of the transparent substrate, to first input/output (IO) contacts of the first multilayer circuit, and coupling a second circuit component, via second traces at a second surface portion of the transparent substrate, to second IO contacts of the first multilayer circuit.

In one embodiment, the first surface portion and the second surface portion are coupled to the first multilayer circuit at opposite respective sides of the transparent substrate.
In another embodiment, the first multilayer circuit includes traces each extending in a body of a flexible dielectric material adhered directly to a surface of the transparent substrate. In another embodiment, the dielectric material comprises a polyamide-imide compound. In another embodiment, the method further comprises coupling a third circuit component to one or more IO contacts of the first multilayer circuit, wherein respective ones of the first IO contacts and the second IO contacts are coupled to each other via the third circuit component and the one or more IO contacts. In another embodiment, the method further comprises bonding a second multilayer circuit at the surface of the transparent substrate, and coupling the second multilayer circuit to the first multilayer circuit via third traces at a third surface portion of the transparent substrate. In another embodiment, the method further comprises coupling a third circuit component, via third traces at a third surface portion of the transparent substrate, to third IO contacts of the first multilayer circuit, wherein the third circuit component is interconnected with one of the first circuit component and the second circuit component via the first multilayer circuit.

Techniques and architectures for interconnecting circuitry on a transparent substrate are described herein. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of certain embodiments. It will be apparent, however, to one skilled in the art that certain embodiments can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention.
The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the computing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the discussion herein, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

Certain embodiments also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs) such as dynamic RAM (DRAM), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description herein. In addition, certain embodiments are not described with reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of such embodiments as described herein.

Besides what is described herein, various modifications may be made to the disclosed embodiments and implementations thereof without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive sense. The scope of the invention should be measured solely by reference to the claims that follow.
An emulation controller (12) connected at a pin boundary of an integrated circuit (14) can be provided with concurrent access to concurrent debug signal activity of first and second data processing cores (core 2, core 1) embedded within the integrated circuit. A first signal path is provided from the first data processing core to a first pin (39) of the integrated circuit, for carrying a selected debug signal of the first data processing core to the first pin. A second signal path is provided from the second data processing core to the first pin of the integrated circuit for carrying a selected debug signal of the second data processing core to the first pin. A third signal path is provided from the second data processing core to a second pin (41) of the integrated circuit for carrying the selected debug signal of the second data processing core to the second pin.
An integrated circuit comprising: a plurality of embedded data processing cores (Core 1, Core 2, Core 3) for performing data processing operations, each embedded data processing core having at least one debug output signal line arranged to output a debug signal (Function Data); a plurality of debug output port pins (30, 35) arranged to provide to an external emulation system (10, 12) concurrent access to concurrent debug signal activities originating from the plurality of embedded data processing cores; a plurality of multiplexers (31, 33), one associated with each of the debug output port pins, each multiplexer having a plurality of inputs coupled to the debug output signal lines of at least one of the plurality of data processing cores (core 1, core 2, core 3), a control input arranged to receive a control signal and an output connected to a corresponding one of said debug output port pins (30, 35), each multiplexer (31, 33) being arranged to selectively route a debug signal at one of its plurality of inputs to said output in dependence upon the signal at said control input, and wherein the at least one debug output signal line of each data processing core is coupled to the input of more than one of the plurality of multiplexers; a register (50) arranged to receive control data from the external emulation system (10, 12) and to apply a signal to the control inputs of a selected multiplexer to route a selected debug signal from one of the plurality of data processing cores (core 1, core 2, core 3) to its output (30, 35), characterized in that the circuit further comprises a plurality of tri-state elements (32, 33), each element having an input coupled to the output of one of the plurality of multiplexers, an output coupled to the debug output port pin (30, 35) associated with that multiplexer and a control input coupled to the register (50), wherein the register is arranged to store data and to selectively apply a tri-state control signal to a predetermined element to produce
a tri-state high output (Z). The integrated circuit of claim 1, wherein at least one of the plurality of multiplexers has an input connected to the debug output signal lines of at least two data processing cores. The integrated circuit of claim 1, wherein at least one of the plurality of multiplexers has a first input connected to a first debug output signal line of a predetermined data processing core and a second input connected to a second debug output signal line of said predetermined data processing core. The integrated circuit of claim 1, wherein a first multiplexer of the plurality of multiplexers has a first input connected to a first debug output signal line of a predetermined data processing core and a second multiplexer of the plurality of multiplexers has a first input connected to said first debug output signal line of said predetermined data processing core. The integrated circuit of any preceding Claim, wherein said selected debug signal of one of the plurality of data processing cores is a trace signal. The integrated circuit of any preceding Claim, wherein the selected debug signal of one of the plurality of data processing cores is a trigger signal. The integrated circuit of any preceding Claim, wherein at least one of said processing cores is a DSP core. The integrated circuit of any of Claims 1 to 6, wherein at least one of said data processing cores is a microprocessor core.
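The register-controlled routing recited in the claims above can be modeled behaviorally. The Python sketch below is purely illustrative and is not part of the claims: the class name DebugRouter, the dictionary-based register model, and the signal names are invented for the example, with the pin numerals (39, 41) and register (50) borrowed from the summary and claims.

```python
# Illustrative behavioral model (an assumption, not the patent's implementation)
# of register-controlled multiplexers feeding tri-state debug pin drivers.

HIGH_Z = "Z"  # tri-state high-impedance output value


class DebugRouter:
    """Models the multiplexers (31, 33), tri-state elements, and register (50)."""

    def __init__(self, mux_inputs):
        # mux_inputs[pin] lists the core debug signal names routable to that pin;
        # a core's signal may appear at more than one multiplexer, as claimed.
        self.mux_inputs = mux_inputs
        self.select = {pin: 0 for pin in mux_inputs}       # mux select per pin
        self.enabled = {pin: False for pin in mux_inputs}  # tri-state enable

    def write_register(self, pin, select, enable):
        """Emulation host writes control data into the register (50)."""
        self.select[pin] = select
        self.enabled[pin] = enable

    def drive(self, pin, signal_values):
        """Value presented at the pin for the current debug signal values."""
        if not self.enabled[pin]:
            return HIGH_Z  # tri-state control signal produces high-Z output
        source = self.mux_inputs[pin][self.select[pin]]
        return signal_values[source]


# Pin 39 can observe core 1 or core 2; pin 41 can observe core 2 or core 3,
# giving concurrent access to concurrent debug activity of multiple cores.
router = DebugRouter({39: ["core1_trace", "core2_trace"],
                      41: ["core2_trace", "core3_trace"]})
router.write_register(39, select=0, enable=True)   # pin 39 <- core 1 trace
router.write_register(41, select=0, enable=True)   # pin 41 <- core 2 trace
values = {"core1_trace": 1, "core2_trace": 0, "core3_trace": 1}
```

Because each core's debug line feeds more than one multiplexer, the same signal can be observed on either pin while the other pin carries a different core, which is the concurrency the claims aim at.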
FIELD OF THE INVENTION

The invention relates generally to electronic data processing and, more particularly, to emulation, simulation and test capabilities of electronic data processing devices and systems.

BACKGROUND OF THE INVENTION

Advanced wafer lithography and surface-mount packaging technology are integrating increasingly complex functions at both the silicon and printed circuit board level of electronic design. Diminished physical access is an unfortunate consequence of denser designs and shrinking interconnect pitch. Designed-in testability is needed, so that the finished product is still both controllable and observable during test and debug. Any manufacturing defect is preferably detectable during final test before a product is shipped. This basic necessity is difficult to achieve for complex designs without taking testability into account in the logic design phase, so that automatic test equipment can test the product. In addition to testing for functionality and for manufacturing defects, application software development requires a similar level of simulation, observability and controllability in the system or sub-system design phase. The emulation phase of design should ensure that an IC (integrated circuit), or set of ICs, functions correctly in the end equipment or application when linked with the software programs. With the increasing use of ICs in the automotive industry, telecommunications, defense systems, and life support systems, thorough testing and extensive real-time debug become a critical need. Functional testing, wherein a designer is responsible for generating test vectors that are intended to ensure conformance to specification, still remains a widely used test methodology. For very large systems this method proves inadequate in providing a high level of detectable fault coverage.
Automatically generated test patterns would be desirable for full testability, and controllability and observability are key goals that span the full hierarchy of test (from the system level to the transistor level). Another problem in large designs is the long time and substantial expense involved. It would be desirable to have testability circuitry, systems and methods that are consistent with a concept of design-for-reusability. In this way, subsequent devices and systems can have a low marginal design cost for testability, simulation and emulation by reusing the testability, simulation and emulation circuitry, systems and methods that are implemented in an initial device. Without a proactive testability, simulation and emulation approach, a large amount of subsequent design time is expended on test pattern creation and upgrading. Even if a significant investment were made to design a module to be reusable and to fully create and grade its test patterns, subsequent use of the module may bury it in application specific logic, and make its access difficult or impossible. Consequently, it is desirable to avoid this pitfall. The advances of IC design, for example, are accompanied by decreased internal visibility and control, reduced fault coverage and reduced ability to toggle states, more test development and verification problems, increased complexity of design simulation and continually increasing cost of CAD (computer aided design) tools. In board design, the side effects include decreased register visibility and control, complicated debug and simulation in design verification, loss of conventional emulation due to loss of physical access by packaging many circuits in one package, increased routing complexity on the board, increased costs of design tools, mixed-mode packaging, and design for producibility.
In application development, some side effects are decreased visibility of states, high speed emulation difficulties, scaled time simulation, increased debugging complexity, and increased costs of emulators. Production side effects involve decreased visibility and control, complications in test vectors and models, increased test complexity, mixed-mode packaging, continually increasing costs of automatic test equipment even into the 7-figure range, and tighter tolerances. Emulation technology utilizing scan-based emulation and multiprocessing debug was introduced over 10 years ago. In 1988, the change from conventional in-circuit emulation to scan-based emulation was motivated by design cycle time pressures and newly available space for on-chip emulation. Design cycle time pressure was created by three factors: higher integration levels, such as on-chip memory; increasing clock rates, which caused electrical intrusiveness by emulation support logic; and more sophisticated packaging, which created emulator connectivity issues. Today these same factors, with new twists, are challenging a scan-based emulator's ability to deliver the system debug facilities needed by today's complex, higher clock rate, highly integrated designs. The resulting systems are smaller, faster, and cheaper. They are higher performance with footprints that are increasingly dense. Each of these positive system trends adversely affects the observation of system activity, the key enabler for rapid system development. The effect is called "vanishing visibility". Application developers prefer visibility and control of all relevant system activity. The steady progression of integration levels and increases in clock rates steadily decrease the visibility and control available over time. These forces create a visibility and control gap, the difference between the desired visibility and control level and the actual level available. Over time, this gap is sure to widen.
Application development tool vendors are striving to minimize the gap growth rate. Development tools software and associated hardware components must do more with less and in different ways; the ease of use challenge is amplified by these forces. With today's highly integrated System-On-a-Chip (SOC) technology, the visibility and control gap has widened dramatically. Traditional debug options such as logic analyzers and partitioned prototype systems are unable to keep pace with the integration levels and ever increasing clock rates of today's systems. As integration levels increase, system buses connecting numerous subsystem components move on chip, denying traditional logic analyzers access to these buses. With limited or no significant bus visibility, tools like logic analyzers cannot be used to view system activity or provide the trigger mechanisms needed to control the system under development. A loss of control accompanies this loss in visibility, as it is difficult to control things that are not accessible. To combat this trend, system designers have worked to keep these buses exposed, building system components in a way that enabled the construction of prototyping systems with exposed buses. This approach is also under siege from the ever-increasing march of system clock rates. As CPU clock rates increase, chip to chip interface speeds are not keeping pace. Developers find that a partitioned system's performance does not keep pace with its integrated counterpart, due to interface wait states added to compensate for lagging chip to chip communication rates. At some point, this performance degradation reaches intolerable levels and the partitioned prototype system is no longer a viable debug option. We have entered an era where production devices must serve as the platform for application development. Increasing CPU clock rates are also accelerating the demise of other simple visibility mechanisms.
Since the CPU clock rates can exceed maximum I/O state rates, visibility ports exporting information in native form can no longer keep up with the CPU. On-chip subsystems are also operated at clock rates that are slower than the CPU clock rate. This approach may be used to simplify system design and reduce power consumption. These developments mean simple visibility ports can no longer be counted on to deliver a clear view of CPU activity. As visibility and control diminish, the development tools used to develop the application become less productive. The tools also appear harder to use due to the increasing tool complexity required to maintain visibility and control. The visibility, control, and ease of use issues created by systems-on-a-chip are poised to lengthen product development cycles. Even as the integration trends present developers with a difficult debug environment, they also present hope that new approaches to debug problems will emerge. The increased densities and clock rates that create development cycle time pressures also create opportunities to solve them. On-chip debug facilities are more affordable than ever before. As high speed, high performance chips are increasingly dominated by very large memory structures, the system cost associated with the random logic accompanying the CPU and memory subsystems is dropping as a percentage of total system cost. The cost of several thousand gates is at an all-time low, and such logic can in some cases be tucked into a corner of today's chip designs. Cost per pin in today's high density packages has also dropped, making it easier to allocate more pins for debug. The combination of affordable gates and pins enables the deployment of new, on-chip emulation facilities needed to address the challenges created by systems-on-a-chip. When production devices also serve as the application debug platform, they must provide sufficient debug capabilities to support time to market objectives.
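The mismatch between CPU-rate visibility data and pin export rates can be put in rough numbers. The clock rate, word width, and pin toggle rate below are hypothetical figures chosen for illustration, not values from this description:

```python
# Hypothetical numbers illustrating why native-form visibility ports
# cannot keep up with the CPU (all figures are assumptions).

cpu_clock_hz = 300e6   # assumed CPU clock rate: 300 MHz
pc_bits = 32           # bits needed to export one program counter value
pin_rate_hz = 100e6    # assumed maximum state rate of a single I/O pin

# Exporting every PC value in native form would require this bandwidth:
native_bandwidth = cpu_clock_hz * pc_bits        # bits per second

# ...which translates into an impractical number of dedicated pins:
pins_needed = native_bandwidth / pin_rate_hz
```

Under these assumed figures the raw export would need on the order of a hundred pins, which motivates the encoded and compressed trace export discussed later in the description.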
Since the debugging requirements vary with different applications, it is highly desirable to be able to adjust the on-chip debug facilities to balance time to market and cost needs. Since these on-chip capabilities affect the chip's recurring cost, the scalability of any solution is of primary importance. "Pay only for what you need" should be the guiding principle for on-chip tools deployment. In this new paradigm, the system architect may also specify the on-chip debug facilities along with the remainder of functionality, balancing chip cost constraints and the debug needs of the product development team. International Patent Application WO-A-9966337 describes an integrated circuit component having one or more external test connection contact pins through which signals that are to be measured or analyzed are selectively fed by means of a multiplex circuit. The signals may be connected by means of routes located internally in the component from switch points that are not directly accessible such as points inside the chip or covered contact points. European Patent Application No. 0942375 describes a computer system comprising a microprocessor on a single integrated circuit chip connected to an external computer integrated circuit chip via an adapter device. The integrated circuit chip has an on-chip CPU with a plurality of registers and an external communication port connected to a communication bus providing a parallel communication path between the CPU and a local memory. The port has an internal connection to the bus and an external connection to the adapter unit with a first format and to the external computer with a second format having a higher latency than the first format. The emulation technology of the present invention uses the debug upside opportunities noted above to provide developers with an arsenal of debug capability aimed at narrowing the control and visibility gap. 
This emulation technology delivers solutions to the complex debug problems of today's highly integrated embedded real-time systems. This technology attacks the loss of visibility, control, and ease of use issues described in the preceding section while expanding the feature set of current emulators. The on-chip debug component of the present invention provides a means for optimizing the cost and debug capabilities. The architecture allows for flexible combinations of emulation components or peripherals tailored to meet system cost and time to market constraints. The scalability aspect makes it feasible to include them in production devices with manageable cost and limited performance overhead. The present invention provides an integrated circuit as set out in the accompanying claims. BRIEF DESCRIPTION OF THE DRAWINGS The present invention will now be further described, by way of example, with reference to the preferred and exemplary embodiments illustrated in the figures of the accompanying drawings in which: FIGURE 1 illustrates exemplary embodiments of an emulation system according to the invention. FIGURE 2 illustrates in tabular format exemplary pin assignments according to the invention for debug signals associated with a data processing core embedded in the target device of FIGURE 1 . FIGURE 3 diagrammatically illustrates pertinent portions of the target device of FIGURE 1 . FIGURES 4 and 5 illustrate exemplary manners in which the arrangement of FIGURE 3 permits concurrent support of debug functions associated with multiple embedded cores of a target device. DETAILED DESCRIPTION Emulation, debug, and simulation tools of the present invention are described herein. The emulation and debug solutions described herein are based on the premise that, over time, some if not most debug functions traditionally performed off chip must be integrated into the production device if they are to remain in the developer's debug arsenal. 
To support the migration of debug functions on chip, the present invention provides a powerful and scalable portfolio of debug capabilities for on-chip deployment. This technology preserves all the gains of initial JTAG technology while adding capabilities that directly assault the visibility, control, and ease of use issues created by the vanishing visibility trend. Four significant architectural infrastructure components spearhead the assault on the control and visibility gap described earlier herein: 1. Real-time Emulation (RTE); 2. Real-time Data Exchange (RTDX); 3. Trace; and 4. Advanced Analysis. These components address visibility and control needs as shown in Table 1.

Table 1. Emulation System Architecture and Usage

RTE - Capability: Static view of the CPU and memory state after the background program is stopped; interrupt driven code continues to execute. Use of Analysis: Analysis components are used to stop execution of the background program. Debug uses: basic debug; computational problems; code design problems.

RTDX - Capability: Debugger software interacts with the application code to exchange commands and data while the application continues to execute. Use of Analysis: Analysis components are used to identify observation points and interrupt program flow to collect data. Debug uses: dynamic instrumentation; dynamic variable adjustments; dynamic data collection.

Trace - Capability: Bus snooper hardware collects selective program flow and data transactions for export without interacting with the application. Use of Analysis: Analysis components are used to define program segments and bus transactions that are to be recorded for export. Debug uses: program flow corruption debug; memory corruption; benchmarking; code coverage; path coverage; program timing problems.

Analysis - Capability: Allows observation of occurrences of events or event sequences; measures elapsed time between events; generates external triggers; alters program flow after the detection of events or event sequences. Debug uses: benchmarking; event/sequence identification; external trigger generation; stopping program execution; activating Trace and RTDX.

Real-Time Emulation (RTE) provides a base set of fixed capabilities for real-time execution control (run, step, halt, etc.) and register/memory visibility. This component allows the user to debug application code while real-time interrupts continue to be serviced. Registers and memory may be accessed in real-time with no impact to interrupt processing. Users may distinguish between real-time and non-real-time interrupts, and mark code that must not be disturbed by real-time debug memory accesses. This base emulation capability includes hardware that can be configured as two single-point hardware breakpoints, a single data watchpoint, an event counter, or a data logging mechanism. The EMU pin capability includes trigger I/Os for multiprocessor event processing and a uni-directional (target to host) data logging mechanism.

RTDX™ provides real-time data transfers between an emulator host and target application. This component offers both bi-directional and uni-directional DSP target/host data transfers facilitated by the emulator. The DSP (or target) application may collect target data to be transferred to the host or receive data from the host, while emulation hardware (within the DSP and the emulator) manages the actual transfer. Several RTDX transfer mechanisms are supported, each providing different levels of bandwidth and pin utilization, allowing the trade-off of gates and pin availability against bandwidth requirements.

Trace is a non-intrusive mechanism of providing visibility of the application activity. Trace is used to monitor CPU related activity such as program flow and memory accesses, system activity such as ASIC state machines and data streams, and CPU collected data. Historical trace technology also used logic-analyzer-like collection and special emulation (SE) devices with more pins than a production device.
The logic analyzer or like device processed native representations of the data using a state-machine-like programming interface (filter mechanism). This trace model relied on all activity being exported, with external triggering selecting the data that needed to be stored, viewed, and analyzed. Existing logic-analyzer-like technology does not, however, provide a solution to decreasing visibility due to higher integration levels, increasing clock rates, and more sophisticated packaging. In this model, the production device must provide visibility through a limited number of pins. The data exported is encoded or compressed to reduce the export bandwidth required. The recording mechanism becomes a pure recording device, packing exported data into a deep trace memory. Trace software is used to convert the recorded data into a record of system activity. On-chip Trace with high-speed serial data export, in combination with Advanced Analysis, provides a solution for SOC designs. Trace is used to monitor CPU-related activity such as program flow and memory accesses, system activity such as ASIC state machines and data streams, and CPU-collected data. This creates four different classes of trace data: ■ Program flow and timing provided by the DSP core (PC trace); ■ Memory data references made by the DSP core or chip-level peripherals (data reads and writes); ■ Application-specific signals and data (ASIC activity); and ■ CPU-collected data. Collection mechanisms for the four classes of trace data are modular, allowing the trade-off of functionality versus gates and pins required to meet desired bandwidth requirements. The RTDX and Trace functions provide similar, but different, forms of visibility. They differ in terms of how data is collected, and the circumstances under which they would be most effective.
A brief explanation is included below for clarity: RTDX™ (Real-Time Data eXchange) is a CPU-assisted solution for exchanging information; the data to be exchanged have a well-defined behavior in relation to the program flow. For example, RTDX can be used to record the input or output buffers from a DSP algorithm. RTDX requires CPU assistance in collecting data, hence there is a definite, but small, CPU bandwidth required to accomplish this. Thus, RTDX is an application-intrusive mechanism of providing visibility with low recurring overhead cost. Trace is a non-intrusive, hardware-assisted collection mechanism (such as bus snoopers) with very high bandwidth (BW) data export. Trace is used when there is a need to export data at a very high data rate, or when the behavior of the information to be traced is not known, is random in nature, or is associated with an address. Program flow is a typical example where it is not possible to know the behavior a priori. The bandwidth required to export this class of information is high. Data trace of specified addresses is another example. The bandwidth required to export data trace is very high. Trace data is unidirectional, going from target to host only. RTDX can exchange data in either direction, although unidirectional forms of RTDX are supported (data logging). The Trace data path can also be used to provide very high speed uni-directional RTDX (CPU-collected trace data). The high-level features of Trace and RTDX are outlined in Table 2.

Table 2. RTDX and Trace Features
Bandwidth/pin: RTDX low; Trace high.
Intrusiveness: RTDX intrusive; Trace non-intrusive.
Data exchange: RTDX bi-directional or uni-directional; Trace export only.
Data collection: RTDX CPU assisted; Trace CPU or hardware assisted.
Data transfer: RTDX requires no extra hardware for minimum BW; Trace is hardware assisted (optional hardware for higher BW).
Cost: RTDX relatively low recurring cost; Trace relatively high recurring cost.

Advanced analysis provides a non-intrusive on-chip event detection and trigger generation mechanism.
The trigger outputs created by advanced analysis control other infrastructure components such as Trace and RTDX. Historical trace technology used bus activity exported to a logic analyzer to generate triggers that controlled trace within the logic analyzer unit, or generated triggers that were supplied to the device to halt execution. This usually involved a chip that had more pins than the production device (an SE or special emulation device). This analysis model does not work well in the System-on-a-Chip (SOC) era, as the integration levels and clock rates of today's devices preclude full-visibility bus export. Advanced analysis provides affordable on-chip instruction and data bus comparators, sequencers and state machines, and event counters to recreate the most important portions of the triggering function historically found off chip. Advanced analysis provides the control aspect of the debug triggering mechanism for Trace, RTDX, and Real-Time Emulation. This architectural component identifies events, tracks event sequences, and assigns actions based on their occurrence (break execution, enable/disable trace, count, enable/disable RTDX, etc.). The modular building blocks for this capability include bus comparators, external event generators, state machines or state sequencers, and trigger generators. The modularity of the advanced analysis system allows the trade-off of functionality versus gates. Emulator capability is created by the interaction of four emulator components: 1. debugger application program; 2. host computer; 3. emulation controller; and 4. on-chip debug facilities. These components are connected as shown in Figure 1. The host computer 10 is connected to an emulation controller 12 (external to the host), with the emulation controller (also referred to herein as the emulator or the controller) also connected to the target system 16.
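The Advanced Analysis behavior described above, identifying events, tracking event sequences, and assigning actions to their occurrence, can be sketched as a small state sequencer. The event names, the two-event sequence, and the "enable trace" action below are hypothetical illustrations, not taken from the patent:

```python
# Hypothetical sketch of an Advanced Analysis event sequencer: it watches a
# stream of bus events, advances through a programmed event sequence, and
# fires an action (e.g. enable trace) once the full sequence has occurred.
class EventSequencer:
    def __init__(self, sequence, action):
        self.sequence = sequence  # ordered list of event names to match
        self.action = action      # callable fired when the sequence completes
        self.state = 0            # index of the next event awaited

    def observe(self, event):
        if event == self.sequence[self.state]:
            self.state += 1
            if self.state == len(self.sequence):
                self.state = 0    # re-arm the sequencer
                self.action()
        # non-matching events leave the sequencer state unchanged here;
        # a real sequencer might instead reset on a mismatch

fired = []
seq = EventSequencer(["addr_match", "data_match"],
                     lambda: fired.append("enable_trace"))
for ev in ["bus_idle", "addr_match", "bus_idle", "data_match"]:
    seq.observe(ev)
print(fired)  # -> ['enable_trace']
```

This is only a behavioral model; on chip the same function is realized by the bus comparators, state machines, and trigger generators named in the text.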
The user preferably controls the target application through a debugger application program running on the host computer, for example, Texas Instruments' Code Composer Studio program. A typical debug system is shown in Figure 1. This system uses a host computer 10 (generally a PC) to access the debug capabilities through an emulator 12. The debugger application program presents the debug capabilities in a user-friendly form via the host computer. The debug resources are allocated by debug software on an as-needed basis, relieving the user of this burden. Source-level debug utilizes the debug resources, hiding their complexity from the user. The debugger, together with the on-chip Trace and triggering facilities, provides a means to select, record, and display chip activity of interest. Trace displays are automatically correlated to the source code that generated the trace log. The emulator provides both the debug control and trace recording function. The debug facilities are programmed using standard emulator debug accesses through the target chip's JTAG or similar serial debug interface. Since pins are at a premium, the technology provides for the sharing of the debug pin pool by trace, trigger, and other debug functions with a small increment in silicon cost. Fixed pin formats are also supported. When the pin-sharing option is deployed, the debug pin utilization is determined at the beginning of each debug session (before the chip is directed to run the application program), maximizing the trace export bandwidth. Trace bandwidth is maximized by allocating the maximum number of pins to trace. The debug capability and building blocks within a system may vary. The emulator software therefore establishes the configuration at run-time. This approach requires the hardware blocks to meet a set of constraints dealing with configuration and register organization.
Other components provide a hardware search capability designed to locate the blocks and other peripherals in the system memory map. The emulator software uses a search facility to locate the resources. The address where the modules are located and a type ID uniquely identify each block found. Once the IDs are found, a design database may be used to ascertain the exact configuration and all system inputs and outputs. The host computer is generally a PC with at least 64 Mbytes of memory and capable of running at least Windows 95 SR-2, Windows NT, or later versions of Windows. The PC must support one of the communications interfaces required by the emulator, for example: ■ Ethernet 10T and 100T, TCP/IP protocol; ■ Universal Serial Bus (USB), rev 1.x; ■ Firewire, IEEE 1394; and/or ■ Parallel Port (SPP, EPP, and ECP). The emulation controller 12 provides a bridge between the host computer 10 and target system 16, handling all debug information passed between the debugger application running on the host computer and a target application executing on a DSP (or other target processor) 14. One exemplary emulator configuration supports all of the following capabilities: ■ Real-time Emulation; ■ RTDX; ■ Trace; and ■ Advanced Analysis. Additionally, the emulator-to-target interface supports: ■ Input and output triggers; ■ Bit I/O; and ■ Managing special extended operating modes. The emulation controller 12 accesses Real-time Emulation capabilities (execution control, memory, and register access) via a 3-, 4-, or 5-bit scan-based interface. RTDX capabilities can be accessed by scan or by using three higher-bandwidth RTDX formats that use direct target-to-emulator connections other than scan. The input and output triggers allow other system components to signal the chip with debug events and vice versa. The emulator 12 is partitioned into communication and emulation sections.
The communication section supports communication with the host 10 on host communication links, while the emulation section interfaces to the target, managing target debug functions and the device debug port. The emulator 12 communicates with the host computer 10 using, e.g., one of the aforementioned industry-standard communication links at 15. The host-to-emulator connection can be established with off-the-shelf cabling technology. Host-to-emulator separation is governed by the standards applied to the interface used. The emulation controller 12 communicates with the target system 16 through a target cable or cables at 17. Debug, Trace, Triggers, and RTDX capabilities share the target cable, and in some cases, the same device pins. More than one target cable may be required when the target system deploys a trace width that cannot be accommodated in a single cable. All trace, RTDX, and debug communication occurs over this link. Many SOC devices have embedded therein a plurality of data processing cores, such as microprocessor cores and/or DSP cores, along with memory and other peripheral logic. Exemplary embodiments of the present invention provide a system developer with concurrent access to debug/emulation functions associated with multiple data processing cores embedded within a target chip. This can be accomplished according to the present invention by, for example, multiplexing selected debug/emulation signals from each core to more than one pin of the target chip's debug port. This concept is illustrated in exemplary FIGURE 2. FIGURE 2 illustrates in tabular format exemplary pin assignments of selected debug signals associated with a given embedded data processing core in a target device such as shown in FIGURE 1. Missing entries for pins EMU1-EMU9 correspond to signals from other cores or unused multiplexer selections.
As shown in FIGURE 2, for example, trace signal T4 is multiplexed to six pins, trace signal T5 is multiplexed to five pins, and trace signal T6 is multiplexed to four pins. Accordingly, the trace signal group T4, T5, and T6 is available on any of the following sets of pins of the debug port: EMU5, EMU4, and EMU3; EMU4, EMU3, and EMU2; EMU3, EMU2, and EMU1; or EMU2, EMU1, and EMU0. As an example, if the system developer wishes to access the trace signals T4, T5, and T6 at a point in time when debug port pins EMU5, EMU4, and EMU3 are already occupied, for example by trace signal activity multiplexed to those pins from another embedded core in the target chip, signals T4, T5, and T6 may nevertheless be available to the system developer via debug port pins EMU2, EMU1, and EMU0. Considering, for example, pin EMU3, this pin has a pin multiplexer associated therewith for multiplexing various internal signals of the target chip onto pin EMU3. Multiplexer selection 0 permits the EMU3 pin to be tri-stated (z in FIGURE 2), multiplexer selection 1 permits a logic 0 to be driven to pin EMU3, multiplexer selection 2 permits signal T4 to drive pin EMU3, multiplexer selection 3 permits signal T5 to drive pin EMU3, etc. Each illustrated pin has a similar pin multiplexer associated therewith for selectively routing the illustrated signals thereto. FIGURE 3 diagrammatically illustrates pertinent portions of exemplary embodiments of the target chip of FIGURE 1. Two exemplary pins of the target chip's debug port are illustrated at 30 and 35 in FIGURE 3. Pins 30 and 35 are respectively driven by pin multiplexers 31 and 33. The pin multiplexers 31 and 33 multiplex onto their respective pins signals received from three data processing cores embedded in the target chip, designated as core 1, core 2, and core 3. A plurality of debug signals from core 2 are multiplexed onto pins 30 and 35, and are also applied to other pin multiplexers associated with other pins of the debug port.
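The per-pin multiplexer selection described above can be modeled as a simple selection table. The first four EMU3 entries below follow the example in the text (selection 0 = tri-state, 1 = logic 0, 2 = T4, 3 = T5); the fifth entry is an assumption added only to round out the sketch:

```python
# Model of a debug-port pin multiplexer: a control code selects which
# internal signal drives the pin. 'z' denotes a tri-stated pin.
PIN_MUX = {
    # pin -> source driven for multiplexer selections 0, 1, 2, ...
    "EMU3": ["z", "0", "T4", "T5", "T6"],  # T6 at selection 4 is assumed
}

def drive(pin, selection):
    """Return the signal driven onto `pin` for a given multiplexer selection."""
    return PIN_MUX[pin][selection]

print(drive("EMU3", 2))  # -> T4
print(drive("EMU3", 0))  # -> z
```

Each debug-port pin would carry its own such table, which is how one signal (e.g. T4) can appear at several pins and several signals can share one pin.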
A pair of signals from core 1 are also multiplexed onto pins 30 and 35, and are also multiplexed onto other pins of the debug port. A debug signal from core 3 is multiplexed onto pins 30 and 35, and is also multiplexed onto at least one other pin of the device. Also as shown in FIGURE 3, debug signals from core 1, core 2, and core 3 which are not multiplexed onto pins 30 and 35 are multiplexed onto other pins of the debug port. The illustrated combination of (1) multiplexing debug signals from a plurality of embedded cores onto a single pin, (2) multiplexing a plurality of signals from a single core onto a single pin, and (3) multiplexing each of one or more signals from a single core onto more than one pin advantageously provides flexibility in the process of gaining access to desired debug signals. This flexibility can increase the likelihood that, for example, trace signal activity from core 2 can be routed to a set of debug port pins without disturbing trace activity of core 1 that may already be routed to another set of debug port pins. Examples of this routing flexibility are illustrated in FIGURES 4 and 5. In the FIGURE 4 example, trace signals from core 1 are multiplexed onto the same set of debug port pins as are trace signals of core 2. However, because the trace signals of core 1 are also multiplexed onto another, separate set of debug port pins, the desired trace activity of core 1 can be accessed via the pins designated at 28 at a point in time when core 2 trace activity is already active on the debug port pins designated at 27. FIGURE 5 illustrates another example of the flexibility provided by the arrangement of FIGURE 3. As shown in FIGURE 5, two core 1 trace signals are routed to debug port pins 39 and 40, as are two trace signals from core 2.
In addition, the same core 1 trace signals multiplexed to pins 39 and 40 are also multiplexed respectively to pins 41 and 42, while the same core 2 trace signals that are multiplexed to pins 39 and 40 are also multiplexed to pins 43 and 44. Also, a core 3 trigger designated as X1 is multiplexed to pins 41 and 49. In one exemplary scenario with the pin assignment configuration of FIGURE 5, assume that core 2 trace activity is already underway on the pins designated at 46. At this point in time, core 1 trace activity cannot be accessed at the pins designated at 47 without interrupting the core 2 trace activity, because pins 39 and 40 of the core 2 trace activity at 46 would overlap with the core 1 trace activity at 47. However, because the core 1 trace signals multiplexed to pins 39 and 40 are also multiplexed respectively to pins 41 and 42, the desired core 1 trace activity can be accessed via the pins designated at 45. Thereafter, with core 1 trace activity underway on the pins designated at 45, if access to core 3 trigger X1 is desired, such access would not be available at pin 41 without interrupting the core 1 trace activity on the pins at 45. However, because the core 3 trigger X1 is also multiplexed to pin 49, that trigger can be accessed at pin 49 without interrupting the core 1 trace activity at pins 45. Software in the emulator 12 of FIGURE 1 can access a database model of the on-chip routing and multiplexing of the various signals from the various cores to the various pins of the debug port. In the example of FIGURE 5, if the emulator software is attempting to access core 1 trace activity at pins 47, but recognizes that core 2 trace activity is already underway on pins 46, then the software can continue searching the signal routing database and ultimately discover that the core 1 trace signals routed to pins 39 and 40 are also routed to pins 41 and 42.
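The database search just described can be sketched as a lookup over a signal-routing table: for each alternative pin set that carries the desired signal group, skip any set that overlaps pins already in use. The routing entries and pin names below are a simplified, hypothetical stand-in for the FIGURE 5 configuration:

```python
# Hypothetical sketch of the emulator's signal-routing database search:
# find a set of debug-port pins carrying the requested signal group that
# does not conflict with pins already occupied by other activity.
ROUTING_DB = {
    # signal group -> list of alternative pin sets that carry it
    ("core1_T0", "core1_T1"): [("P39", "P40"), ("P41", "P42")],
    ("core3_X1",): [("P41",), ("P49",)],
}

def allocate(signals, occupied):
    """Return the first pin set for `signals` that avoids `occupied` pins."""
    for pin_set in ROUTING_DB[signals]:
        if not occupied.intersection(pin_set):
            occupied.update(pin_set)   # reserve these pins
            return pin_set
    return None                        # no conflict-free routing exists

in_use = {"P39", "P40"}                # e.g. core 2 trace already on these pins
print(allocate(("core1_T0", "core1_T1"), in_use))  # -> ('P41', 'P42')
print(allocate(("core3_X1",), in_use))             # -> ('P49',)
```

The second lookup mirrors the trigger-X1 scenario: its first routing choice is rejected because that pin is now carrying core 1 trace, so the alternate pin is chosen.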
Accordingly, the emulator software can cause appropriate control codes to be loaded into the register 50 of FIGURE 3 for controlling the pin multiplexers of the target device such that the core 1 trace activity is routed to the pins at 45 in FIGURE 5, without disturbing the core 2 trace activity already underway on pins 46. Similarly, when it is desired to add the core 3 trigger X1, the emulator software will discover that pin 41 is already utilized for core 1 trace activity, and will thereafter discover from database searching that the core 3 trigger X1 is also available on pin 49. At this point, the emulator software will cause the appropriate data to be loaded into register 50 of FIGURE 3 for routing the core 3 trigger X1 to pin 49 of FIGURE 5 without disturbing the core 1 trace activity already underway on pins 45. It will be evident to workers in the art from the foregoing description that the present invention provides advantageous flexibility in debug port pin assignments, such that signal activities originating from multiple embedded data processing cores can be accessed concurrently in real time and without interrupting the access of a given core's activity in order to access the activity of another core.
To reduce switching noise, the power supply terminals of an integrated circuit die can be coupled to the respective terminals of at least one embedded capacitor in a multilayer ceramic substrate. In one embodiment, the capacitor is formed of at least one high permittivity layer. In another embodiment, several high permittivity layers are interleaved with conductive layers. Alternatively, the capacitor can comprise at least one embedded discrete capacitor. Also described are an electronic system, a data processing system, and various methods of manufacture.
A substrate (210; 320; 410) to package a die (200; 300; 400) comprising: a plurality of power and ground vias (215, 227; 315, 327; 405, 419) in a core region of the substrate; an embedded capacitor (230; 330; 430, 440) having first and second terminals; and a plurality of power lands (212; 312; 402) coupled to the first terminal through the plurality of power vias, and a plurality of ground lands (213; 313; 403) coupled to the second terminal through the plurality of ground vias; wherein the plurality of power lands and the plurality of ground lands are positioned to be coupled to corresponding power and ground nodes (202, 203; 302, 303; 402; 403) of the die.
The substrate recited in claim 1, wherein the substrate is a multilayer ceramic substrate.
The substrate recited in claim 1, wherein at least one of the power vias does not go entirely through the substrate.
The substrate recited in claim 1, wherein at least one of the ground vias does not go entirely through the substrate.
The substrate recited in claim 1 or 2, wherein the embedded capacitor comprises at least one high permittivity layer (228; 340).
The substrate recited in claim 1 or 2, wherein the embedded capacitor comprises a plurality of high permittivity layers (228; 340).
The substrate recited in claim 6, wherein the embedded capacitor comprises a plurality of conductive layers interleaved with the high permittivity layers, such that alternating conductive layers are coupled to the first and second terminals, respectively.
The substrate recited in claim 1 or 2, wherein the embedded capacitor comprises at least one embedded discrete capacitor (430, 440).
An electronic assembly comprising the die and substrate of any preceding claim.
A method for making a substrate (210; 310; 410) to package a die (200; 300; 400), the method comprising: forming a plurality of power and ground vias (215, 227; 315, 327; 405, 419) in a core region of the substrate; forming in the substrate at least one capacitor (230; 330; 430, 440) having first and second terminals; and forming a plurality of lands (212, 213; 312, 313; 402, 403) on a surface of the substrate, including a first land coupled to the first terminal through one of the power vias, and a second land coupled to the second terminal through one of the ground vias, wherein the first and second lands are positioned to be coupled to corresponding power and ground nodes of the die.
The method recited in claim 10, wherein the at least one capacitor is formed of a plurality of high permittivity layers (228; 340).
The method recited in claim 11, wherein the at least one capacitor is formed of a plurality of conductive layers interleaved with the high permittivity layers, such that alternating conductive layers are coupled to the first and second lands, respectively.
The method recited in claim 10, wherein the at least one capacitor is formed of at least one embedded discrete capacitor (430, 440).
Technical Field of the Invention

The present invention relates generally to electronics packaging. More particularly, the present invention relates to an electronic assembly that includes a substrate having one or more embedded capacitors to reduce switching noise in a high speed integrated circuit, and to manufacturing methods related thereto.

Background of the Invention

Integrated circuits (ICs) are typically assembled into packages by physically and electrically coupling them to a substrate made of organic or ceramic material. One or more such IC packages can be physically and electrically coupled to a printed circuit board (PCB) or card to form an "electronic assembly". The "electronic assembly" can be part of an "electronic system". An "electronic system" is broadly defined herein as any product comprising an "electronic assembly". Examples of electronic systems include computers (e.g., desktop, laptop, hand-held, server, etc.), wireless communications devices (e.g., cellular phones, cordless phones, pagers, etc.), computer-related peripherals (e.g., printers, scanners, monitors, etc.), entertainment devices (e.g., televisions, radios, stereos, tape and compact disc players, video cassette recorders, MP3 (Motion Picture Experts Group, Audio Layer 3) players, etc.), and the like.

In the field of electronic systems there is an incessant competitive pressure among manufacturers to drive the performance of their equipment up while driving down production costs. This is particularly true regarding the packaging of ICs on substrates, where each new generation of packaging must provide increased performance while generally being smaller or more compact in size.

An IC substrate may comprise a number of insulated metal layers selectively patterned to provide metal interconnect lines (referred to herein as "traces"), and one or more electronic components mounted on one or more surfaces of the substrate.
The electronic component or components are functionally connected to other elements of an electronic system through a hierarchy of conductive paths that includes the substrate traces. The substrate traces typically carry signals that are transmitted between the electronic components, such as ICs, of the system. Some ICs have a relatively large number of input/output (I/O) terminals, as well as a large number of power and ground terminals. The large number of I/O, power, and ground terminals requires that the substrate contain a relatively large number of traces. Some substrates require multiple layers of traces to accommodate all of the system interconnections.Traces located within different layers are typically connected electrically by vias (also called "plated through-holes") formed in the board. A via can be made by making a hole through some or all layers of a substrate and then plating the interior hole surface or filling the hole with an electrically conductive material, such as copper or tungsten.One of the conventional methods for mounting an IC on a substrate is called "controlled collapse chip connect" (C4). In fabricating a C4 package, the electrically conductive terminations or lands (generally referred to as "electrical contacts") of an IC component are soldered directly to corresponding lands on the surface of the substrate using reflowable solder bumps or balls. 
The C4 process is widely used because of its robustness and simplicity. As the internal circuitry of ICs, such as processors, operates at higher and higher clock frequencies, and as ICs operate at higher and higher power levels, switching noise can increase to unacceptable levels. For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a significant need in the art for a method and apparatus for packaging an IC on a substrate that minimizes problems, such as switching noise, associated with high clock frequencies and high power delivery.

Brief Description of the Drawings

FIG. 1 is a block diagram of an electronic system incorporating at least one electronic assembly with embedded capacitors in accordance with one embodiment of the invention;
FIG. 2 shows a cross-sectional representation of a multilayer substrate with embedded capacitors in accordance with one embodiment of the invention;
FIG. 3 shows a cross-sectional representation of a multilayer substrate with embedded capacitors in accordance with another embodiment of the invention;
FIG. 4 shows a cross-sectional representation of a multilayer substrate with embedded discrete capacitors in accordance with an alternate embodiment of the invention;
FIG. 5 shows a graphical representation of capacitance versus area for various dielectric materials that can be used in a substrate with an embedded capacitor in accordance with one embodiment of the invention;
FIG. 6 is a flow diagram of a method of fabricating a substrate comprising an embedded capacitor, in accordance with one embodiment of the invention; and
FIG.
7 is a flow diagram of a method of fabricating an electronic assembly having a substrate comprising an embedded capacitor, in accordance with one embodiment of the invention.

Detailed Description of Embodiments of the Invention

In the following detailed description of embodiments of the invention, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific preferred embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, and electrical changes may be made without departing from the spirit and scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.

The present invention provides a solution to power delivery problems that are associated with prior art packaging of integrated circuits that operate at high clock speeds and high power levels by embedding one or more decoupling capacitors in a multilayer substrate. Various embodiments are illustrated and described herein. In one embodiment, the IC die is directly mounted to the multilayer substrate, which contains embedded capacitors. The embedded capacitors can be discrete capacitors, or they can be one or more layers of capacitive material.

FIG. 1 is a block diagram of an electronic system 1 incorporating at least one electronic assembly 4 with embedded capacitors in accordance with one embodiment of the invention. Electronic system 1 is merely one example of an electronic system in which the present invention can be used. In this example, electronic system 1 comprises a data processing system that includes a system bus 2 to couple the various components of the system.
System bus 2 provides communications links among the various components of the electronic system 1 and can be implemented as a single bus, as a combination of busses, or in any other suitable manner. Electronic assembly 4 is coupled to system bus 2. Electronic assembly 4 can include any circuit or combination of circuits. In one embodiment, electronic assembly 4 includes a processor 6, which can be of any type. As used herein, "processor" means any type of computational circuit, such as but not limited to a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a graphics processor, a digital signal processor (DSP), or any other type of processor or processing circuit. Other types of circuits that can be included in electronic assembly 4 are a custom circuit, an application-specific integrated circuit (ASIC), or the like, such as, for example, one or more circuits (such as a communications circuit 7) for use in wireless devices like cellular telephones, pagers, portable computers, two-way radios, and similar electronic systems. The IC can perform any other type of function. Electronic system 1 can also include an external memory 10, which in turn can include one or more memory elements suitable to the particular application, such as a main memory 12 in the form of random access memory (RAM), one or more hard drives 14, and/or one or more drives that handle removable media 16 such as floppy diskettes, compact disks (CDs), digital video disks (DVDs), and the like. Electronic system 1 can also include a display device 8, a loudspeaker 9, and a keyboard and/or controller 20, which can include a mouse, trackball, game controller, voice-recognition device, or any other device that permits a system user to input information into and/or receive information from electronic system 1.

FIG.
2 shows a cross-sectional representation of a multilayer substrate 210 with embedded capacitors in accordance with one embodiment of the invention. Substrate 210 has a plurality of lands 211-213 on one surface thereof that can be coupled to leads or conductive areas 201-203, respectively, on IC die 200 via solder balls or bumps 208. Leads 201 are coupled to signal lines of IC die 200, lead 202 is coupled to Vcc, and lead 203 is coupled to Vss. It will be understood that, although identical reference numbers have been used for the two conductive paths carrying signal levels, i.e. the paths comprising the structure identified by reference numbers 201, 208, 211, and 221-223, these signals can be different. Signal path structure can include various signal conductors illustrated as conductive layers within ceramic substrate 210, such as signal conductors 235-237.

Signal leads or bumps, such as signal bumps 201, are typically arranged at the periphery of the die in an arrangement that is, for example, several rows deep (only one row being shown on each side of die 200 for the sake of simplicity).

Substrate 210 can include multiple Vcc, Vss, and signal conductors, only a few of which are illustrated for the sake of simplicity.

Substrate 210 comprises a pair of embedded capacitors. Each capacitor 230 comprises a pair of capacitive plates 226 and 229, with high permittivity (Dk) layers 228 between the capacitive plates 226 and 229, and between capacitors 230. One capacitive plate 226 of each capacitor 230 can be coupled to a Vss terminal 203 on die 200 via conductor 215, land 213, and solder ball 208.
Another capacitive plate 229 of each capacitor 230 can be coupled to a Vcc terminal 202 on die 200 via conductor 227, land 212, and solder ball 208.

The expression "high permittivity layer" as used herein means a layer of high permittivity material such as a high permittivity ceramic ply such as titanate particles; a high permittivity dielectric film such as a titanate film that is deposited, for example, by Sol-Gel or metal-organic chemical vapor deposition (MOCVD) techniques; or a layer of any other type of high permittivity material.

Substrate 210 can be provided with one or more embedded capacitors 230.

Die 200 and substrate 210 can be of any type. In one embodiment, die 200 is a processor, and substrate 210 is a multilayer ceramic substrate.

In the embodiment shown in FIG. 2, metallized power vias 215 and 227 can connect the Vss and Vcc capacitive plates 226 and 229, respectively, of capacitor 230 to the corresponding regions of the die, which can comprise a relatively large number of Vss and Vcc die bumps 203 and 202, respectively, distributed in the core regions of the die 200. This large parallel connectivity ensures very low inductance (e.g. < 1 pico-Henry) and enhances the current carrying ability of the overall IC packaging structure.

The signal traces may occur other than at the periphery of the die.

It will be understood that while the pitch of power vias 215 and 227 in FIG. 2 is shown to be the same as the die bump pitch, the pitch of power vias 215 and 227 could be different from that of the die bump pitch. Likewise, while the pitch of signal vias 223 is shown to be wider than that of the die bump pitch, it could be the same in another embodiment.
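The sub-pico-Henry figure cited above follows from paralleling many power vias: N identical vias of self-inductance L each present an effective inductance of roughly L/N when mutual coupling between vias is neglected. A minimal numerical sketch, in which the 50 pH per-via value is an illustrative assumption rather than a figure from the text:

```python
# Effective inductance of N identical vias in parallel, neglecting
# mutual inductance between vias: L_eff = L_via / N.
def parallel_inductance_pH(l_via_pH: float, n_vias: int) -> float:
    """Return the effective inductance, in pico-Henries, of n_vias
    identical vias of self-inductance l_via_pH wired in parallel."""
    return l_via_pH / n_vias

# Assuming roughly 50 pH per via (illustrative), on the order of 100
# parallel Vcc/Vss vias brings the path below the 1 pH level cited above.
print(parallel_inductance_pH(50.0, 100))  # 0.5 pH
```

This is why the text emphasizes distributing a relatively large number of Vss and Vcc bumps and vias in the die core rather than relying on a few power connections.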
The geometry of the vias, including via pitch, can be varied in any suitable manner in accordance with design parameters known to those skilled in the art.

Various embodiments can be implemented using ceramic substrate technology.

One important purpose of the substrate with embedded capacitor(s) is to provide relatively high capacitance relatively close to the die in order to reduce the effect of reactive inductive coupling when the IC is operating, particularly at high clock speeds.

FIG. 3 shows a cross-sectional representation of a substrate 310 with embedded capacitors in accordance with another embodiment of the invention. In the embodiment illustrated in FIG. 3, substrate 310 can be coupled to a further substrate 320. Substrate 320 can be similar to substrate 310, optionally having an IC die (not shown) on the opposite surface thereof, or it can be a printed circuit board (PCB) or other type of substrate. Leads or conductive areas 334, 339, and 319 of substrate 320 can be coupled to corresponding lands 331, 332, and 317 of substrate 310 via solder balls 338.

The internal structure of substrate 310 can be similar to that described above regarding substrate 210 (FIG. 2). Thus, substrate 310 has a plurality of lands 311-313 on one surface thereof that can be coupled to leads or conductive areas 301-303, respectively, on IC die 300 via solder balls 308. Leads 301 are coupled to signal lines of IC die 300, lead 302 is coupled to Vcc, and lead 303 is coupled to Vss. It will be understood that, although identical reference numbers have been used for the two conductive paths carrying signal levels, i.e. the paths comprising the structure identified by reference numbers 301, 308, 311, and 321-323, these signals can be different.
Signal path structure can include various signal conductors illustrated as conductive layers within substrate 310, such as signal conductors 335-337.

Substrate 310 can include multiple Vcc, Vss, and signal conductors, only a few of which are illustrated for the sake of simplicity.

Substrate 310 can comprise a pair of embedded capacitors 330, each comprising a pair of capacitive plates 326 and 329, with high Dk layers 340 between the capacitive plates 326 and 329, and between capacitors 330. One capacitive plate 326 of each capacitor 330 can be coupled to a Vss terminal 303 on die 300 through via segment 315, land 313, and solder ball 308. Plate 326 can also be coupled to a Vss terminal 319 on substrate 320 by means of via segment 316, land 317, and solder ball 338. Another capacitive plate 329 of each capacitor 330 can be coupled to a Vcc terminal 302 on die 300 through via segment 327, land 312, and solder ball 308. Plate 329 can also be coupled to a Vcc terminal 339 on substrate 320 by means of via segment 328, land 332, and solder ball 338.

Substrate 310 can be provided with one or more embedded capacitors 330.

Die 300 and substrates 310 and 320 can be of any type. In one embodiment, die 300 is a processor, substrate 310 is a multilayer ceramic substrate, and substrate 320 is a PCB. In another embodiment, substrate 320 is a ceramic substrate.

In the embodiment shown in FIG. 3, metallized vias 315, 316 (it should be noted that various via segments illustrated in FIGS. 2 and 3, such as via segments 315, 316 and 327, 328, can be either separate vias or one continuous via) and 327, 328 can connect the Vss and Vcc capacitive plates 326 and 329, respectively, of capacitors 330 to the corresponding regions of the die, which can comprise a relatively large number of Vss and Vcc die bumps 303 and 302, respectively, distributed at the core regions of the die 300. This large parallel connectivity ensures very low inductance (e.g.
< 1 pico-Henry).

Various embodiments of substrates 310 and 320 can be implemented using ceramic substrate technology. The structure, including types of materials used, dimensions, number of layers, layout of power and signal conductors, and so forth, of substrates 310 and 320 can be similar or different, depending upon the requirements of the electronic assembly of which they form a part.

It will be understood that the land/bump pitch of the top of substrate 310 needs to match the bump pitch of die 300, and that the land/bump pitch of the bottom of substrate 310 needs to match the pad pitch of substrate 320. While in the embodiment shown in FIG. 3 the pitch of the power vias 315 and 327 is the same on the top and bottom of substrate 310, and the pitch of the signal vias 323 is wider on the bottom of substrate 310 than on the top of substrate 310, the pitch relationship could be altered in any suitable fashion to satisfy design constraints and objectives.

FIG. 4 shows a cross-sectional representation of a multilayer substrate 410 with two embedded discrete capacitors 430 and 440 in accordance with an alternate embodiment of the invention. Substrate 410, which can include multiple layers of Vcc, Vss, and signal conductors, is intended to be used to mount a die 400 thereon. Lands 402 of substrate 410 are intended to be at Vcc potential and can be coupled via certain ones of solder balls 401 to corresponding conductive areas (not shown) on IC die 400. Likewise, lands 403 are intended to be at Vss potential and can be coupled via other solder balls 401 to corresponding areas (not shown) on IC die 400.

Discrete capacitors 430 and 440 can be of any suitable type. In one embodiment, each discrete capacitor 430 and 440 comprises a pair of upper terminals 426 and 428 and a pair of lower terminals 423 and 425. However, discrete capacitors with more or fewer terminals and/or with terminals coupled only to the upper portions of substrate 410 may also be used.
For example, in the Related Invention mentioned above, in one embodiment a single discrete capacitor embedded within an interposer has two terminals that are coupled only to the upper part of the interposer. A similar capacitive structure could likewise be employed in an embodiment of the present invention, i.e. having terminals that are coupled only to the upper part of substrate 410.

Lands 402 are coupled to upper terminal 426 of embedded capacitor 430 by a route that includes power vias 404, conductive layer 406, and power via 412. Lands 403 are coupled to the other upper terminal 428 of embedded capacitor 430 by a route that includes power vias 405, conductive layer 407, and power via 413.

Lands 431 are coupled to lower terminal 423 of embedded capacitor 430 by a route that includes power vias 418, conductive layer 416, and power via 422. Lands 432 are coupled to the other lower terminal 425 of embedded capacitor 430 by a route that includes power vias 419, conductive layer 417, and power via 424.

As illustrated in FIG. 4, similar Vcc and Vss connections can be made to the terminals of capacitor 440 as were described with respect to capacitor 430.

Various signal routing (not illustrated for the sake of simplicity, but comprising signal areas of IC die 400, certain solder balls 401, appropriate lands on substrate 410 such as lands 408 and 434, and signal planes and signal vias within substrate 410 such as signal via 409) can also be provided within substrate 410, as will be understood by those of ordinary skill.

Embedded capacitors 430 and 440 can be of any suitable construction. In one embodiment, they are ceramic chip capacitors that are fabricated using conventional ceramic chip capacitor technology. While two capacitors 430 and 440 are illustrated, for the sake of simplicity of illustration and description, a different number of capacitors could be used in the embodiment illustrated in FIG. 4, including only one capacitor.

FIGS.
2-4 are merely representational and are not drawn to scale. Certain proportions thereof may be exaggerated, while others may be minimized. FIGS. 2-4 are intended to illustrate various implementations of the invention, which can be understood and appropriately carried out by those of ordinary skill in the art.

Fabrication

Multilayer ceramic substrates can be fabricated by conventional techniques, such as but not limited to high temperature co-fired ceramic (HTCC) technology, high thermal coefficient of expansion (HITCE) technology, or glass ceramic technology.

Although it is known in ceramic technology to embed low Dk capacitors in ceramic substrates, by sandwiching thin (e.g. 2 mils) films of conventional ceramic such as Al2O3 between metal planes, in the present invention multilayer stacks of high Dk ply are used in one embodiment. High Dk ply is commercially available for fabricating ceramic chip capacitors, for example. Suitable high Dk materials, such as titanate particles, can be inserted into the conventional ceramic matrix. Multilayer stacks of high Dk ply, such as BaTiO3, in the present invention can provide capacitances as high as 10 µF/sq. cm., compared to capacitances in the range of only nano-Farads/sq. cm. for low Dk ply.

In an alternative embodiment, a high Dk layer, such as a titanate film, e.g. (BaxSr1-x)TiO3 (BST) or PbZrTiO3 (PZT) or Ta2O5 or SrTiO3, can be formed in the ceramic substrate by known techniques such as a metal-organic chemical vapor deposition (MOCVD) process, or a Sol-Gel process, in which a sol, which is a colloidal suspension of solid particles in a liquid, transforms into a gel due to growth and interconnection of the solid particles.

In either case, high Dk material can be embedded at temperature ranges that are compatible with ceramic technology (e.g. 600-1000 degrees Centigrade).

Regarding the embodiment illustrated in FIG.
4, wherein discrete capacitors 430 and 440 are embedded in the substrate 410, access to capacitors 430 and 440 can be made by any conventional technique, such as punching or laser ablation, and the Vcc and Vss conductors of substrate 410 can be coupled to the terminals of capacitors 430 and 440 by any appropriate metallization technique that is consistent with the temperature requirements of the process.

Estimation of Capacitance

Capacitance values for the embodiment shown in FIG. 3 can be estimated via Equation 1:

C = (A × εr × ε0) / d   (Equation 1)

where:
A = capacitor area (square meters)
εr = relative dielectric constant of the insulator
ε0 = permittivity of free space, 8.854 x 10^-12 Farads/meter
d = dielectric layer thickness (meters)

FIG. 5 shows a graphical representation of capacitance (in nano-Farads) versus a side dimension of the capacitor (in microns) for various dielectric materials that can be used in a substrate with an embedded capacitor in accordance with one embodiment of the invention. Shown in FIG. 5 are plots for the following dielectric materials: line 501 for PZT (Dk=2000), line 502 for BaTiO3 (Dk=1000), line 503 for BST (Dk=500), line 504 for SrTiOx (Dk=200), and line 505 for TaOx (Dk=25).

FIG. 5 summarizes the approximate range of capacitance available with the various titanates and oxide materials indicated. When using high permittivity ceramic ply (such as ceramic ply impregnated with BaTiO3), the indicated values correspond to the maximum capacitance generally achievable with a 10 micron thick ply between Vcc and Vss layers in a stack containing 40 such layers.

In the case of dielectric formed by Sol-Gel or MOCVD embodiments (e.g., PZT, BST, SrTiO3 or Ta2O5), the computed values correspond to a 0.25 micron film of the indicated dielectric.

To satisfy the capacitance requirements of any given embodiment, multiple layers of capacitors could be stacked as necessary.

FIG.
6 is a flow diagram of a method of fabricating a substrate comprising an embedded capacitor, in accordance with one embodiment of the invention. The method begins at 601.

In 603, at least one capacitor having first and second terminals is formed within a substrate structure. In one embodiment, the structure is a multilayer ceramic structure, although in other embodiments the structure could be formed of a material other than a ceramic material. The capacitor comprises (1) at least one high permittivity layer sandwiched between conductive layers; alternatively, the capacitor is (2) a discrete capacitor.

In 605, first and second power supply nodes are formed in the substrate structure. As used herein, the term "power supply node" refers to either a ground node (e.g. Vss) or to a power node at a potential different from ground (e.g. Vcc).

In 607, a plurality of lands are formed on a surface of the substrate structure, including a first land coupled to the first terminal(s) of the capacitor(s) and to the first power supply node, and a second land coupled to the second terminal(s) of the capacitor(s) and to the second power supply node. The first and second lands are positioned to be coupled to first and second power supply nodes of a die (e.g. IC die 200, FIG. 2) that is to be juxtaposed to a surface of the substrate structure and physically affixed thereto. The method ends at 609.

FIG. 7 is a flow diagram of a method of fabricating an electronic assembly having a substrate comprising an embedded capacitor, in accordance with one embodiment of the invention. The method begins at 701.

In 703, a die is provided that has first and second power supply nodes.

In 705, a substrate is provided that has third and fourth power supply nodes. The substrate comprises at least one capacitor having first and second terminals. The capacitor comprises (1) at least one high permittivity layer sandwiched between conductive layers; alternatively, the capacitor is a discrete capacitor.
The substrate further comprises a plurality of lands on a surface thereof, including a first land coupled to the first terminal(s) of the capacitor(s) and to the third power supply node, and a second land coupled to the second terminal(s) of the capacitor(s) and to the fourth power supply node.

In 707, the first and second lands are coupled to the first and second power supply nodes, respectively, of the die. The method ends at 709.

The operations described above with respect to the methods illustrated in FIGS. 6 and 7 can be performed in a different order from those described herein.

Conclusion

The present invention provides for an electronic assembly and methods of manufacture thereof that minimize problems, such as switching noise, associated with high clock frequencies and high power delivery. The present invention provides scalable high capacitance (e.g. >10 µF/square centimeter) by employing embedded decoupling capacitors having low inductance which can satisfy the power delivery requirements of, for example, high performance processors. An electronic system that incorporates the present invention can operate at higher clock frequencies and is therefore more commercially attractive.

As shown herein, the present invention can be implemented in a number of different embodiments, including a substrate, an electronic assembly, an electronic system, a data processing system, a method for making a substrate, and a method for making an electronic assembly. Other embodiments will be readily apparent to those of ordinary skill in the art. The capacitive elements, choice of materials, geometries, and capacitances can all be varied to suit particular packaging requirements.
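The capacitance figures quoted in this specification can be checked against Equation 1 with a short script. The Dk values, film and ply thicknesses, and 40-layer stack count are taken from the Estimation of Capacitance discussion above; the results land in the microfarad-per-square-centimeter range the text claims:

```python
EPS0 = 8.854e-12  # permittivity of free space, Farads/meter

def capacitance_F(area_m2: float, dk: float, thickness_m: float) -> float:
    """Equation 1: C = A * Dk * eps0 / d, returned in Farads."""
    return area_m2 * dk * EPS0 / thickness_m

# 1 sq. cm. of a 0.25 micron Sol-Gel/MOCVD PZT film (Dk = 2000):
c_pzt = capacitance_F(1e-4, 2000, 0.25e-6)

# 1 sq. cm. of a stack of 40 BaTiO3 plies (Dk = 1000), each 10 microns
# thick; capacitor layers wired in parallel simply add:
c_stack = 40 * capacitance_F(1e-4, 1000, 10e-6)

print(f"PZT film:  {c_pzt * 1e6:.2f} uF/sq. cm.")   # roughly 7 uF
print(f"Ply stack: {c_stack * 1e6:.2f} uF/sq. cm.")  # roughly 3.5 uF
```

Both estimates are consistent with the "capacitances as high as 10 µF/sq. cm." order of magnitude stated for high Dk stacks, versus the nano-Farad range for low Dk ply.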
The particular geometry of the embedded capacitors is very flexible in terms of their orientation, size, number, location, and composition of their constituent elements.

While embodiments have been shown in which signal traces are provided around the periphery, and in which Vcc and Vss traces are provided at the die core, the signal traces may occur other than at the periphery.

Further, the present invention is not to be construed as limited to use in C4 packages, and it can be used with any other type of IC package where the herein-described features of the present invention provide an advantage.

Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiment shown. This application is intended to cover any adaptations or variations of the present invention. Therefore, it is manifestly intended that this invention be limited only by the claims.

Further aspects of the invention are defined in the following clauses:

1. A multilayer ceramic substrate for mounting a die comprising:
an embedded capacitor having first and second terminals; and
a first plurality of lands on a first surface thereof, including a first land coupled to the first terminal and a second land coupled to the second terminal, wherein the first and second lands are positioned to be coupled to corresponding power supply nodes of the die.

2. The multilayer ceramic substrate recited in clause 1 and further comprising a second plurality of lands on a second surface thereof, including a third land coupled to the first terminal and a fourth land coupled to the second terminal.

3. The multilayer ceramic substrate recited in clause 2, wherein the third and fourth lands are positioned to be coupled to corresponding power supply nodes of an additional substrate subjacent to the multilayer ceramic substrate.

4.
The multilayer ceramic substrate recited in clause 1 wherein the capacitor comprises at least one high permittivity layer.

5. The multilayer ceramic substrate recited in clause 1 wherein the capacitor comprises a plurality of high permittivity layers.

6. The multilayer ceramic substrate recited in clause 5 wherein the capacitor comprises a plurality of conductive layers interleaved with the high permittivity layers, such that alternating conductive layers are coupled to the first and second lands, respectively.

7. The multilayer ceramic substrate recited in clause 1 wherein the capacitor comprises at least one embedded discrete capacitor.

8. An electronic assembly comprising:
a die comprising first and second power supply nodes;
a multilayer ceramic substrate comprising:
third and fourth power supply nodes coupled to the first and second power supply nodes, respectively; and
a capacitor having a first terminal coupled to the third power supply node and a second terminal coupled to the fourth power supply node.

9. The electronic assembly recited in clause 8 wherein the capacitor comprises at least one high permittivity layer.

10. The electronic assembly recited in clause 8 wherein the capacitor comprises a plurality of high permittivity layers.

11. The electronic assembly recited in clause 10 wherein the capacitor comprises a plurality of conductive layers interleaved with the high permittivity layers, such that alternating conductive layers are coupled to the first and second lands, respectively.

12. The electronic assembly recited in clause 8 wherein the capacitor comprises at least one embedded discrete capacitor.

13. An electronic system comprising an electronic assembly having a die coupled to a multilayer ceramic substrate, the substrate comprising at least one embedded capacitor having first and second terminals coupled to first and second power supply nodes of the die.

14.
The electronic system recited in clause 13 wherein the capacitor comprises at least one high permittivity layer.

15. The electronic system recited in clause 13 wherein the capacitor comprises a plurality of high permittivity layers.

16. The electronic system recited in clause 15 wherein the capacitor comprises a plurality of conductive layers interleaved with the high permittivity layers, such that alternating conductive layers are coupled to the third and fourth power supply nodes, respectively.

17. The electronic system recited in clause 13 wherein the capacitor comprises at least one embedded discrete capacitor.

18. A data processing system comprising:
a bus coupling components in the data processing system;
a display coupled to the bus;
external memory coupled to the bus; and
a processor coupled to the bus and comprising an electronic assembly including:
a die comprising first and second power supply nodes; and
a multilayer ceramic substrate comprising a capacitor having a first terminal coupled to the first power supply node and a second terminal coupled to the second power supply node.

19. The data processing system recited in clause 18 wherein the capacitor comprises at least one high permittivity layer.

20. The data processing system recited in clause 18 wherein the capacitor comprises a plurality of high permittivity layers.

21. The data processing system recited in clause 20 wherein the capacitor comprises a plurality of conductive layers interleaved with the high permittivity layers, such that alternating conductive layers are coupled to the first and second terminals, respectively.

22. The data processing system recited in clause 18 wherein the capacitor comprises at least one embedded discrete capacitor.

23.
A method for making a multilayer ceramic substrate to package a die, the method comprising:
forming in the substrate at least one capacitor having first and second terminals;
forming in the substrate first and second power supply nodes; and
forming a plurality of lands on a surface of the substrate, including a first land coupled to the first terminal and to the first power supply node, and a second land coupled to the second terminal and to the second power supply node, wherein the first and second lands are positioned to be coupled to first and second power supply nodes of the die.

24. The method recited in clause 23 wherein the at least one capacitor is formed of a plurality of high permittivity layers.

25. The method recited in clause 24 wherein the at least one capacitor is formed of a plurality of conductive layers interleaved with the high permittivity layers, such that alternating conductive layers are coupled to the first and second lands, respectively.

26. The method recited in clause 23 wherein the at least one capacitor is formed of at least one embedded discrete capacitor.

27. A method of making an electronic assembly comprising:
providing a die having first and second power supply nodes;
providing a substrate comprising:
third and fourth power supply nodes;
at least one capacitor having first and second terminals; and
a plurality of lands on a surface thereof including a first land coupled to the first terminal and to the third power supply node, and a second land coupled to the second terminal and to the fourth power supply node; and
coupling the first and second lands to the first and second power supply nodes.

28. The method recited in clause 27 wherein the at least one capacitor is formed of a plurality of high permittivity layers.

29.
The method recited in clause 28 wherein the at least one capacitor is formed of a plurality of conductive layers interleaved with the high permittivity layers, such that alternating conductive layers are coupled to the first and second lands, respectively.

30. The method recited in clause 27 wherein the at least one capacitor is formed of at least one embedded discrete capacitor.
The present invention provides metal-containing compounds that include at least one β-diketiminate ligand, and methods of making and using the same. In some embodiments, the metal-containing compounds are homoleptic complexes that include unsymmetrical β-diketiminate ligands. In other embodiments, the metal-containing compounds are heteroleptic complexes including at least one β-diketiminate ligand. The compounds can be used to deposit metal-containing layers using vapor deposition methods. Vapor deposition systems including the compounds are also provided. Sources for β-diketiminate ligands are also provided.
What is claimed is: 1. A method of forming a metal-containing layer on a substrate, the method comprising: providing a substrate; providing a vapor comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group; with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>; and contacting the vapor comprising the at least one compound of Formula I with the substrate to form a metal-containing layer on at least one surface of the substrate using a vapor deposition process. 2. The method of claim 1 wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group having 1 to 10 carbon atoms. 3. The method of claim 2 wherein R<1> = isopropyl; and R<5> = tert-butyl. 4. The method of claim 2 wherein R<2> = R<4> = methyl; and R<3> = H. 5. The method of claim 4 wherein R<1> = isopropyl; and R<5> = tert-butyl. 6. The method of claim 1 wherein at least one L is selected from the group consisting of a halide, an alkoxide group, an amide group, a mercaptide group, cyanide, an alkyl group, an amidinate group, a guanidinate group, an isoureate group, a [beta]-diketonate group, a [beta]-iminoketonate group, a [beta]-diketiminate group, and combinations thereof. 7. The method of claim 6 wherein the at least one L is a [beta]-diketiminate group having a structure that is the same as that of the [beta]-diketiminate ligand shown in Formula I. 8. The method of claim 6 wherein the at least one L is a [beta]-diketiminate group having a structure that is different than that of the [beta]-diketiminate ligand shown in Formula I. 9. 
The method of claim 8 wherein the at least one L is a symmetric [beta]-diketiminate group. 10. The method of claim 8 wherein the at least one L is an unsymmetric [beta]-diketiminate group. 11. The method of claim 1 wherein at least one Y is selected from the group consisting of a carbonyl, a nitrosyl, ammonia, an amine, nitrogen, a phosphine, an alcohol, water, tetrahydrofuran, and combinations thereof. 12. A method of manufacturing a semiconductor structure, the method comprising: providing a semiconductor substrate or substrate assembly; providing a vapor comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group; with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>; and directing the vapor comprising the at least one compound of Formula I to the semiconductor substrate or substrate assembly to form a metal-containing layer on at least one surface of the semiconductor substrate or substrate assembly using a vapor deposition process. 13. The method of claim 12 further comprising providing a vapor comprising at least one metal-containing compound different than Formula I, and directing the vapor comprising the at least one metal-containing compound different than Formula I to the semiconductor substrate or substrate assembly. 14. The method of claim 13 wherein the metal of the at least one metal-containing compound different than Formula I is selected from the group consisting of Ti, Ta, Bi, Hf, Zr, Pb, Nb, Mg, Al, and combinations thereof. 15. The method of claim 12 further comprising providing at least one reaction gas.
16. The method of claim 12 wherein the vapor deposition process is a chemical vapor deposition process. 17. The method of claim 12 wherein the vapor deposition process is an atomic layer deposition process comprising a plurality of deposition cycles. 18. A method of forming a metal-containing layer on a substrate, the method comprising: providing a substrate; providing a vapor comprising at least one compound of the formula (Formula II): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the two [beta]-diketiminate ligands shown in Formula II have different structures; and contacting the vapor comprising the at least one compound of Formula II with the substrate to form a metal-containing layer on at least one surface of the substrate using a vapor deposition process. 19. The method of claim 18 wherein each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group having 1 to 10 carbon atoms. 20. The method of claim 19 wherein R<1> = R<5> = tert-butyl; and R<6> = R<10> = isopropyl. 21. The method of claim 19 wherein R<2> = R<4> = R<7> = R<9> = methyl; and R<3> = R<8> = H. 22. The method of claim 21 wherein R<1> = R<5> = tert-butyl; and R<6> = R<10> = isopropyl. 23.
A method of manufacturing a semiconductor structure, the method comprising: providing a semiconductor substrate or substrate assembly; providing a vapor comprising at least one compound of the formula (Formula II): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the two [beta]-diketiminate ligands shown in Formula II have different structures; and directing the vapor comprising the at least one compound of Formula II to the semiconductor substrate or substrate assembly to form a metal-containing layer on at least one surface of the semiconductor substrate or substrate assembly using a vapor deposition process. 24. The method of claim 23 further comprising providing a vapor comprising at least one metal-containing compound different than Formula II, and directing the vapor comprising the at least one metal-containing compound different than Formula II to the semiconductor substrate or substrate assembly. 25. The method of claim 24 wherein the metal of the at least one metal- containing compound different than Formula II is selected from the group consisting of Ti, Ta, Bi, Hf, Zr, Pb, Nb, Mg, Al, and combinations thereof. 26. The method of claim 23 further comprising providing at least one reaction gas. 27. The method of claim 23 wherein the vapor deposition process is a chemical vapor deposition process. 28. The method of claim 23 wherein the vapor deposition process is an atomic layer deposition process comprising a plurality of deposition cycles. 29. 
A compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group; with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>. 30. The compound of claim 29 wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group having 1 to 10 carbon atoms. 31. The compound of claim 30 wherein R<1> = isopropyl; and R<5> = tert-butyl. 32. The compound of claim 30 wherein R<2> = R<4> = methyl; and R<3> = H. 33. The compound of claim 32 wherein R<1> = isopropyl; and R<5> = tert-butyl. 34. The compound of claim 29 wherein M is selected from the group consisting of Ca, Sr, Ba, and combinations thereof. 35. The compound of claim 29 wherein at least one L is selected from the group consisting of a halide, an alkoxide group, an amide group, a mercaptide group, cyanide, an alkyl group, an amidinate group, a guanidinate group, an isoureate group, a [beta]-diketonate group, a [beta]-iminoketonate group, a [beta]-diketiminate group, and combinations thereof. 36. The compound of claim 35 wherein the at least one L is a [beta]-diketiminate group having a structure that is the same as that of the [beta]-diketiminate ligand shown in Formula I. 37. The compound of claim 35 wherein the at least one L is a [beta]-diketiminate group having a structure that is different than that of the [beta]-diketiminate ligand shown in Formula I. 38. The compound of claim 37 wherein the at least one L is a symmetric [beta]-diketiminate group. 39. The compound of claim 37 wherein the at least one L is an unsymmetric [beta]-diketiminate group. 40. 
The compound of claim 29 wherein at least one Y is selected from the group consisting of a carbonyl, a nitrosyl, ammonia, an amine, nitrogen, a phosphine, an alcohol, water, tetrahydrofuran, and combinations thereof. 41. A method of making a metal-containing compound, the method comprising combining components comprising: a ligand source of the formula (Formula III): or a tautomer thereof; optionally a source for an anionic ligand L; optionally a source for a neutral ligand Y; and a metal (M) source; wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group, with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>; and wherein the metal (M) source is selected from the group consisting of a Group 2 metal source, a Group 3 metal source, a Lanthanide metal source, and combinations thereof, under conditions sufficient to provide a metal-containing compound of the formula (Formula I): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, and R<5>are as defined above, n represents the valence state of the metal, z is from 0 to 10, and x is from 1 to n. 42. The method of claim 41 wherein the metal (M) source comprises a M(II) bis(hexamethyldisilazane), a M(II) bis(hexamethyldisilazane)bis(tetrahydrofuran), or combinations thereof. 43. The method of claim 41 wherein M is selected from the group consisting of Ca, Sr, Ba, and combinations thereof. 44. 
A method of making a metal-containing compound, the method comprising combining components comprising: a compound of the formula (Formula I): a compound of the formula (Formula VI): wherein: each M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; each n represents the valence state of the metal; each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10>is independently hydrogen or an organic group; and the [beta]-diketiminate ligands shown in Formula I and Formula VI have different structures; with the proviso that one or more of the following apply: R<1> is different than R<5>, R<2> is different than R<4>, R<6> is different than R<10>, or R<7> is different than R<9>; under conditions sufficient to provide a metal-containing compound of the formula (Formula II): wherein M, Y, L, R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, R<10>, n, and z are as defined above. 45. A precursor composition for a vapor deposition process, the composition comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group; with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>. 46. 
A compound of the formula (Formula II): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the two [beta]-diketiminate ligands shown in Formula II have different structures. 47. The compound of claim 46 wherein each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group having 1 to 10 carbon atoms. 48. The compound of claim 47 wherein R<1> = R<5> = tert-butyl; and R<6> = R<10> = isopropyl. 49. The compound of claim 47 wherein R<2> = R<4> = R<7> = R<9> = methyl; and R<3> = R<8> = H. 50. The compound of claim 49 wherein R<1> = R<5> = tert-butyl; and R<6> = R<10> = isopropyl. 51. The compound of claim 46 wherein M is selected from the group consisting of Ca, Sr, Ba, and combinations thereof. 52. 
A method of making a metal-containing compound, the method comprising combining components comprising: a ligand source of the formula (Formula III): or a tautomer thereof; a ligand source of the formula (Formula IV): optionally a source for an anionic ligand L; optionally a source for a neutral ligand Y; and a metal (M) source; wherein each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the ligand sources shown in Formula III and Formula IV have different structures; and wherein the metal (M) source is selected from the group consisting of a Group 2 metal source, a Group 3 metal source, a Lanthanide metal source, and combinations thereof, under conditions sufficient to provide a metal- containing compound of the formula (Formula II): wherein M, Y, L, R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> are as defined above, n represents the valence state of the metal, and z is from 0 to 10. 53. 
A method of making a metal-containing compound, the method comprising combining components comprising: a compound of the formula (Formula I): a compound of the formula (Formula VI): wherein: each M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; each n represents the valence state of the metal; each z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the [beta]-diketiminate ligands shown in Formula I and Formula VI have different structures; under conditions sufficient to provide a metal-containing compound of the formula (Formula II): wherein M, Y, L, R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, R<10>, n, and z are as defined above, and the two [beta]-diketiminate ligands shown in Formula II have different structures. 54. A precursor composition for a vapor deposition process, the composition comprising at least one compound of the formula (Formula II): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the two [beta]-diketiminate ligands shown in Formula II have different structures. 55. 
A ligand source of the Formula (III): or a tautomer thereof, wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an alkyl moiety having 1 to 10 carbon atoms, with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>. 56. The ligand source of claim 55 wherein R<1> = tert-butyl and R<5> = isopropyl. 57. The ligand source of claim 55 wherein R<2> = R<4> = methyl, and R<3> = H. 58. The ligand source of claim 55 wherein R<1> = tert-butyl and R<5> = isopropyl. 59. A method of making a [beta]-diketiminate ligand source, the method comprising combining components comprising: an amine of the formula R<1>NH2; a compound of the formula (Formula V): or a tautomer thereof; and an alkylating agent, under conditions sufficient to provide a ligand source of the formula (Formula III): or a tautomer thereof, wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an alkyl moiety having 1 to 10 carbon atoms, with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>. 60. A vapor deposition system comprising: a deposition chamber having a substrate positioned therein; and at least one vessel comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group; with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>. 61. 
A vapor deposition system comprising: a deposition chamber having a substrate positioned therein; and at least one vessel comprising at least one compound of the formula (Formula II): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the two [beta]-diketiminate ligands shown in Formula II have different structures.
UNSYMMETRICAL LIGAND SOURCES, REDUCED SYMMETRY METAL-CONTAINING COMPOUNDS, AND SYSTEMS AND METHODS INCLUDING SAME

This application claims priority to U.S. Patent Application Serial No. 11/169,082, filed June 28, 2005, which is incorporated herein by reference in its entirety.

BACKGROUND

The scaling down of integrated circuit devices has created a need to incorporate high dielectric constant materials into capacitors and gates. The search for new high dielectric constant materials and processes is becoming more important as the minimum size for current technology is practically constrained by the use of standard dielectric materials. Dielectric materials containing alkaline earth metals can provide a significant advantage in capacitance compared to conventional dielectric materials. For example, the perovskite material SrTiO3 has a disclosed bulk dielectric constant of up to 500.

Unfortunately, the successful integration of alkaline earth metals into vapor deposition processes has proven to be difficult. For example, although atomic layer deposition (ALD) of alkaline earth metal diketonates has been disclosed, these metal diketonates have low volatility, which typically requires that they be dissolved in organic solvent for use in a liquid injection system. In addition to low volatility, these metal diketonates generally have poor reactivity, often requiring high substrate temperatures and strong oxidizers to grow a film, which is often contaminated with carbon. Other alkaline earth metal sources, such as those including substituted or unsubstituted cyclopentadienyl ligands, typically have poor volatility as well as low thermal stability, leading to undesirable pyrolysis on the substrate surface. 
New sources and methods of incorporating high dielectric materials are being sought for new generations of integrated circuit devices.

SUMMARY OF THE INVENTION

The present invention provides metal-containing compounds (i.e., metal-containing complexes) that include at least one [beta]-diketiminate ligand, and methods of making and using, and vapor deposition systems including the same. The presently disclosed metal-containing compounds have reduced symmetry compared to known homoleptic complexes with symmetrical ligands. The reduced symmetry may result from the unsymmetric ligands themselves, the coordination of different types of ligands, or both. Reduced symmetry may lead to desirable properties (e.g., one or more of higher vapor pressure, lower melting point, and lower sublimation point) for use in vapor deposition methods.

In one aspect, the present invention provides a method of forming a metal-containing layer on a substrate (e.g., a semiconductor substrate or substrate assembly) using a vapor deposition process. The method can be useful in the manufacture of semiconductor structures. The method includes: providing a substrate; providing a vapor including at least one compound of the formula (Formula I); and contacting the vapor including the at least one compound of Formula I with the substrate (and typically directing the vapor to the substrate) to form a metal-containing layer on at least one surface of the substrate. 
The reduced symmetry compound of the formula (Formula I) includes at least one unsymmetrical [beta]-diketiminate ligand, wherein M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group; with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>.

In another aspect, the present invention provides a method of forming a metal-containing layer on a substrate (e.g., a semiconductor substrate or substrate assembly) using a vapor deposition process. The method can be useful in the manufacture of semiconductor structures. The method includes: providing a substrate; providing a vapor including at least one compound of the formula (Formula II); and contacting the vapor including the at least one compound of Formula II with the substrate (and typically directing the vapor to the substrate) to form a metal-containing layer on at least one surface of the substrate. The reduced symmetry compound of the formula (Formula II) includes two different symmetrical [beta]-diketiminate ligands, wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; and R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>. 
In another aspect, the present invention provides metal-containing compounds having at least one unsymmetrical [beta]-diketiminate ligand, precursor compositions including such compounds, vapor deposition systems including such compounds, and methods of making such compounds. Such metal-containing compounds include those of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group; with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>. The present invention also provides sources for unsymmetrical [beta]-diketiminate ligands, and methods of making same, which are useful for making metal-containing compounds having at least one unsymmetrical [beta]-diketiminate ligand.

In another aspect, the present invention provides metal-containing compounds having two different symmetrical [beta]-diketiminate ligands, precursor compositions including such compounds, vapor deposition systems including such compounds, and methods of making such compounds. 
Such metal-containing compounds include those of the formula (Formula II): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; and R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>.

Advantageously, the reduced symmetry metal-containing compounds of the present invention include elements of asymmetry that may lead to desirable properties (e.g., one or more of higher vapor pressure, lower melting point, and lower sublimation point) for use in vapor deposition methods.

Definitions

As used herein, formulas of the type: are used to represent pentadienyl-group type ligands (e.g., [beta]-diketiminate ligands) having delocalized electron density that are coordinated to a metal. The ligands may be coordinated to the metal through one, two, three, four, and/or five atoms (i.e., [eta]<1>-, [eta]<2>-, [eta]<3>-, [eta]<4>-, and/or [eta]<5>-coordination modes).

As used herein, the term "organic group" is used for the purpose of this invention to mean a hydrocarbon group that is classified as an aliphatic group, cyclic group, or combination of aliphatic and cyclic groups (e.g., alkaryl and aralkyl groups). In the context of the present invention, suitable organic groups for metal-containing compounds of this invention are those that do not interfere with the formation of a metal oxide layer using vapor deposition techniques. In the context of the present invention, the term "aliphatic group" means a saturated or unsaturated linear or branched hydrocarbon group. This term is used to encompass alkyl, alkenyl, and alkynyl groups, for example. 
The term "alkyl group" means a saturated linear or branched monovalent hydrocarbon group including, for example, methyl, ethyl, n-propyl, isopropyl, tert-butyl, amyl, heptyl, and the like. The term "alkenyl group" means an unsaturated, linear or branched monovalent hydrocarbon group with one or more olefinically unsaturated groups (i.e., carbon-carbon double bonds), such as a vinyl group. The term "alkynyl group" means an unsaturated, linear or branched monovalent hydrocarbon group with one or more carbon-carbon triple bonds. The term "cyclic group" means a closed ring hydrocarbon group that is classified as an alicyclic group, aromatic group, or heterocyclic group. The term "alicyclic group" means a cyclic hydrocarbon group having properties resembling those of aliphatic groups. The term "aromatic group" or "aryl group" means a mono- or polynuclear aromatic hydrocarbon group. The term "heterocyclic group" means a closed ring hydrocarbon in which one or more of the atoms in the ring is an element other than carbon (e.g., nitrogen, oxygen, sulfur, etc.).

As a means of simplifying the discussion and the recitation of certain terminology used throughout this application, the terms "group" and "moiety" are used to differentiate between chemical species that allow for substitution or that may be substituted and those that do not so allow for substitution or may not be so substituted. Thus, when the term "group" is used to describe a chemical substituent, the described chemical material includes the unsubstituted group and that group with nonperoxidic O, N, S, Si, or F atoms, for example, in the chain as well as carbonyl groups or other conventional substituents. Where the term "moiety" is used to describe a chemical compound or substituent, only an unsubstituted chemical material is intended to be included. 
For example, the phrase "alkyl group" is intended to include not only pure open chain saturated hydrocarbon alkyl substituents, such as methyl, ethyl, propyl, tert-butyl, and the like, but also alkyl substituents bearing further substituents known in the art, such as hydroxy, alkoxy, alkylsulfonyl, halogen atoms, cyano, nitro, amino, carboxyl, etc. Thus, "alkyl group" includes ether groups, haloalkyls, nitroalkyls, carboxyalkyls, hydroxyalkyls, sulfoalkyls, etc. On the other hand, the phrase "alkyl moiety" is limited to the inclusion of only pure open chain saturated hydrocarbon alkyl substituents, such as methyl, ethyl, propyl, tert-butyl, and the like.

As used herein, "metal-containing" is used to refer to a material, typically a compound or a layer, that may consist entirely of a metal, or may include other elements in addition to a metal. Typical metal-containing compounds include, but are not limited to, metals, metal-ligand complexes, metal salts, organometallic compounds, and combinations thereof. Typical metal-containing layers include, but are not limited to, metals, metal oxides, metal silicates, and combinations thereof.

As used herein, "a," "an," "the," and "at least one" are used interchangeably and mean one or more than one.

As used herein, the term "comprising," which is synonymous with "including" or "containing," is inclusive, open-ended, and does not exclude additional unrecited elements or method steps.

The terms "deposition process" and "vapor deposition process" as used herein refer to a process in which a metal-containing layer is formed on one or more surfaces of a substrate (e.g., a doped polysilicon wafer) from vaporized precursor composition(s) including one or more metal-containing compounds. Specifically, one or more metal-containing compounds are vaporized and directed to and/or contacted with one or more surfaces of a substrate (e.g., semiconductor substrate or substrate assembly) placed in a deposition chamber. 
Typically, the substrate is heated. These metal-containing compounds form (e.g., by reacting or decomposing) a non-volatile, thin, uniform, metal- containing layer on the surface(s) of the substrate. For the purposes of this invention, the term "vapor deposition process" is meant to include both chemical vapor deposition processes (including pulsed chemical vapor deposition processes) and atomic layer deposition processes."Chemical vapor deposition" (CVD) as used herein refers to a vapor deposition process wherein the desired layer is deposited on the substrate from vaporized metal-containing compounds (and any reaction gases used) within a deposition chamber with no effort made to separate the reaction components. In contrast to a "simple" CVD process that involves the substantial simultaneous use of the precursor compositions and any reaction gases, "pulsed" CVD alternately pulses these materials into the deposition chamber, but does not rigorously avoid intermixing of the precursor and reaction gas streams, as is typically done in atomic layer deposition or ALD (discussed in greater detail below).The term "atomic layer deposition" (ALD) as used herein refers to a vapor deposition process in which deposition cycles, preferably a plurality of consecutive deposition cycles, are conducted in a process chamber (i.e., a deposition chamber). Typically, during each cycle the precursor is chemisorbed to a deposition surface (e.g., a substrate assembly surface or a previously deposited underlying surface such as material from a previous ALD cycle), forming a monolayer or sub-monolayer that does not readily react with additional precursor (i.e., a self-limiting reaction). Thereafter, if necessary, a reactant (e.g., another precursor or reaction gas) may subsequently be introduced into the process chamber for use in converting the chemisorbed precursor to the desired material on the deposition surface. 
Typically, this reactant is capable of further reaction with the precursor. Further, purging steps may also be utilized during each cycle to remove excess precursor from the process chamber and/or remove excess reactant and/or reaction byproducts from the process chamber after conversion of the chemisorbed precursor. Further, the term "atomic layer deposition," as used herein, is also meant to include processes designated by related terms such as "chemical vapor atomic layer deposition," "atomic layer epitaxy" (ALE) (see U.S. Patent No. 5,256,244 to Ackerman), molecular beam epitaxy (MBE), gas source MBE, or organometallic MBE, and chemical beam epitaxy when performed with alternating pulses of precursor composition(s), reactive gas, and purge (e.g., inert carrier) gas. As compared to the one cycle chemical vapor deposition (CVD) process, the longer duration multi-cycle ALD process allows for improved control of layer thickness and composition by self-limiting layer growth, and minimizing detrimental gas phase reactions by separation of the reaction components. The self-limiting nature of ALD provides a method of depositing a film on a wide variety of reactive surfaces, including surfaces with irregular topographies, with better step coverage than is available with CVD or other "line of sight" deposition methods such as evaporation or physical vapor deposition (PVD or sputtering).

BRIEF DESCRIPTION OF THE FIGURES

Figure 1 is a perspective view of a vapor deposition system suitable for use in methods of the present invention.

DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS

The present invention provides metal-containing compounds (i.e., metal-containing complexes) that include at least one [beta]-diketiminate ligand, and methods of making and using, and vapor deposition systems including the same. In some embodiments, the at least one [beta]-diketiminate ligand can be in the [eta]<5>-coordination mode. 
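The cycle structure of the ALD process defined above (precursor pulse, purge, reactant pulse, purge, repeated over many self-limiting cycles) can be sketched in a few lines of illustrative Python. The step names and the growth-per-cycle figure below are hypothetical placeholders, not values disclosed for any particular precursor or deposition tool.

```python
# Illustrative sketch of the ALD pulse/purge cycling described above.
# GROWTH_PER_CYCLE_NM and the step names are assumed examples only.

GROWTH_PER_CYCLE_NM = 0.05  # assumed self-limiting growth per cycle

# One deposition cycle: chemisorb precursor, purge, convert with reactant, purge.
CYCLE_STEPS = ("pulse_precursor", "purge", "pulse_reactant", "purge")

def run_ald(cycles):
    """Expand a number of self-limiting cycles into the full step sequence
    and estimate the resulting layer thickness."""
    steps = [step for _ in range(cycles) for step in CYCLE_STEPS]
    thickness_nm = cycles * GROWTH_PER_CYCLE_NM
    return steps, thickness_nm

steps, thickness = run_ald(100)
print(len(steps), round(thickness, 2))  # prints: 400 5.0
```

Because each cycle is self-limiting, thickness control reduces to counting cycles, which is the property the definition above contrasts with single-cycle CVD.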
In some embodiments, the metal-containing compounds are homoleptic complexes (i.e., complexes in which the metal is bound to only one type of ligand) that include unsymmetrical [beta]-diketiminate ligands. In other embodiments, the metal-containing compounds are heteroleptic complexes (i.e., complexes in which the metal is bound to more than one type of ligand) including at least one [beta]-diketiminate ligand, which can be symmetric or unsymmetric. Thus, the presently disclosed metal-containing compounds have reduced symmetry compared to known homoleptic complexes with symmetrical ligands. The reduced symmetry may result from the unsymmetric ligands themselves, the coordination of different types of ligands, or both. Reduced symmetry may lead to desirable properties (e.g., one or more of higher vapor pressure, lower melting point, and lower sublimation point) for use in vapor deposition methods.

COMPOUNDS WITH AT LEAST ONE UNSYMMETRICAL LIGAND

In one embodiment, metal-containing compounds including at least one unsymmetrical [beta]-diketiminate ligand, and precursor compositions including such compounds, are disclosed. Such compounds include a compound of the formula (Formula I): M is a Group 2 metal (e.g., Ca, Sr, Ba), a Group 3 metal (e.g., Sc, Y, La), a Lanthanide (e.g., Pr, Nd), or a combination thereof. Preferably M is Ca, Sr, or Ba. Each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; and x is from 1 to n. Each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group (e.g., an alkyl group, and preferably, for example, an alkyl moiety); with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>. 
In certain embodiments, R<1>, R<2>, R<3>, R<4>, and R<5> are each independently hydrogen or an organic group having 1 to 10 carbon atoms (e.g., methyl, ethyl, propyl, isopropyl, butyl, sec-butyl, tert-butyl). In certain embodiments, R<1> = isopropyl and R<5> = tert-butyl. In certain embodiments, R<2> and/or R<4> are methyl. In certain embodiments, R<3> is H. Such an exemplary compound of Formula I is the compound in which R<2> = R<4> = methyl, R<3> = H, R<1> = isopropyl, and R<5> = tert-butyl.

L can represent a wide variety of anionic ligands. Exemplary anionic ligands (L) include halides, alkoxide groups, amide groups, mercaptide groups, cyanide, alkyl groups, amidinate groups, guanidinate groups, isoureate groups, [beta]-diketonate groups, [beta]-iminoketonate groups, [beta]-diketiminate groups, and combinations thereof. In certain embodiments, L is a [beta]-diketiminate group having a structure that is the same as that of the [beta]-diketiminate ligand shown in Formula I. In other certain embodiments, L is a [beta]-diketiminate group (e.g., symmetric or unsymmetric) having a structure that is different than that of the [beta]-diketiminate ligand shown in Formula I.

Y represents an optional neutral ligand. Exemplary neutral ligands (Y) include carbonyl (CO), nitrosyl (NO), ammonia (NH3), amines (NR3), nitrogen (N2), phosphines (PR3), alcohols (ROH), water (H2O), tetrahydrofuran, and combinations thereof, wherein each R independently represents hydrogen or an organic group. The number of optional neutral ligands (Y) is represented by z, which is from 0 to 10, and preferably from 0 to 3. More preferably, Y is not present (i.e., z = 0). 
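The unsymmetry proviso on Formula I (R<1> differs from R<5>, or R<2> differs from R<4>) is a simple symmetry test on the ligand backbone substituents. A minimal sketch follows, with substituent names written as plain strings purely for illustration:

```python
def is_unsymmetrical(r1, r2, r3, r4, r5):
    """Formula I proviso: the [beta]-diketiminate ligand is unsymmetrical
    when mirror-related backbone substituents differ (R1 != R5 or R2 != R4).
    R3 sits on the mirror axis and does not affect the test."""
    return r1 != r5 or r2 != r4

# The exemplary Formula I ligand from the text:
# R2 = R4 = methyl, R3 = H, R1 = isopropyl, R5 = tert-butyl
print(is_unsymmetrical("isopropyl", "methyl", "H", "methyl", "tert-butyl"))  # prints: True
```

A ligand with R<1> = R<5> and R<2> = R<4> fails this test, which is why the reduced symmetry of Formula II compounds must instead come from combining two symmetrical but mutually different ligands.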
In one embodiment, a metal-containing compound including at least one unsymmetrical [beta]-diketiminate ligand can be made, for example, by a method that includes combining components including an unsymmetrical [beta]-diketiminate ligand source, a metal source, optionally a source for a neutral ligand Y, and a source for an anionic ligand L, which can be the same or different than the unsymmetrical [beta]-diketiminate ligand source. Typically, a ligand source can be deprotonated to become a ligand.

An exemplary method includes combining components including: a ligand source of the formula (Formula III): or a tautomer thereof; a source for an anionic ligand L (e.g., as described herein); optionally a source for a neutral ligand Y (e.g., as described herein); and a metal (M) source under conditions sufficient to form the metal-containing compound. Preferably, the components are combined in an organic solvent (e.g., heptane, toluene, or diethyl ether), typically under mixing or stirring conditions, and allowed to react at a convenient temperature (e.g., room temperature or below, refluxing or above, or an intermediate temperature) for a length of time to form a sufficient amount of the desired product. Preferably, the components are combined under an inert atmosphere (e.g., argon), typically in the substantial absence of water. The metal (M) source can be selected from the group consisting of a Group II metal source, a Group III metal source, a Lanthanide metal source, and combinations thereof. Exemplary metal sources include, for example, a M(II) bis(hexamethyldisilazane), a M(II) bis(hexamethyldisilazane)bis(tetrahydrofuran), or combinations thereof.

Each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group (e.g., an alkyl group, and preferably, for example, an alkyl moiety), with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>.
The method provides a metal-containing compound of the formula (Formula I): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, and R<5> are as defined above, n represents the valence state of the metal, z is from 0 to 10, and x is from 1 to n.

Unsymmetrical [beta]-diketiminate ligand sources can be made, for example, using condensation reactions. For example, exemplary unsymmetrical [beta]-diketiminate ligand sources can be made by a method including combining components including an amine of the formula R<1>NH2; a compound of the formula (Formula V): or a tautomer thereof; and an agent capable of activating the carbonyl group for reaction with the amine, under conditions sufficient to provide a ligand source of the formula (Formula III): or a tautomer thereof.

Preferably, the components are combined in an organic solvent (e.g., heptane, toluene, or diethyl ether), typically under mixing or stirring conditions, and allowed to react at a convenient temperature (e.g., room temperature or below, refluxing or above, or an intermediate temperature) for a length of time to form a sufficient amount of the desired product. Preferably, the components are combined under an inert atmosphere (e.g., argon), typically in the substantial absence of water.

Each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an alkyl moiety having 1 to 10 carbon atoms (e.g., methyl, ethyl, propyl, isopropyl, butyl, sec-butyl, tert-butyl), with the proviso that one or more of the following apply: R<1> is different than R<5>, or R<2> is different than R<4>. Accordingly, the present invention also provides ligand sources of Formula III. In certain embodiments, R<1> = isopropyl and R<5> = tert-butyl. In certain embodiments, R<2> and/or R<4> are methyl. In certain embodiments, R<3> is H.
Such an exemplary compound of Formula III is the compound in which R<2> = R<4> = methyl, R<3> = H, R<1> = isopropyl, and R<5> = tert-butyl.

Tautomers of compounds of Formula III and Formula V include isomers in which a hydrogen atom is bonded to another atom. Typically, tautomers can be in equilibrium with one another. Specifically, the present invention contemplates tautomers of Formula III including, for example,

Similarly, the present invention contemplates tautomers of Formula V including, for example,

Suitable agents capable of activating a carbonyl group for reaction with an amine are well known to those of skill in the art and include, for example, alkylating agents. Exemplary alkylating agents include triethyloxonium tetrafluoroborate, dimethyl sulfate, nitrosoureas, mustard gases (e.g., 1,1-thiobis(2-chloroethane)), and combinations thereof.

Additional metal-containing compounds including at least one unsymmetrical [beta]-diketiminate ligand can be made, for example, by ligand exchange reactions between a metal-containing compound including at least one unsymmetrical [beta]-diketiminate ligand and a metal-containing compound including at least one different [beta]-diketiminate ligand. Such an exemplary method includes combining components including a compound of the formula (Formula I): and a compound of the formula (Formula VI): under conditions sufficient to form the metal-containing compound. Preferably, the components are combined in an organic solvent (e.g., heptane, toluene, or diethyl ether), typically under mixing or stirring conditions, and allowed to react at a convenient temperature (e.g., room temperature or below, refluxing or above, or an intermediate temperature) for a length of time to form a sufficient amount of the desired product.
Preferably, the components are combined under an inert atmosphere (e.g., argon), typically in the substantial absence of water.

Each M is a Group 2 metal, a Group 3 metal, a Lanthanide, or a combination thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; and x is from 1 to n. Each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; and the [beta]-diketiminate ligands shown in Formula I and Formula VI have different structures, with the proviso that one or more of the following apply: R<1> is different than R<5>, R<2> is different than R<4>, R<6> is different than R<10>, or R<7> is different than R<9>.

The method can provide a metal-containing compound of the formula (Formula II): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, R<10>, n, and z are as defined above, and the two [beta]-diketiminate ligands shown in Formula II have different structures.

HETEROLEPTIC COMPOUNDS WITH DIFFERENT SYMMETRICAL LIGANDS

In another embodiment, compounds that are heteroleptic metal-containing compounds including different symmetrical [beta]-diketiminate ligands, and precursor compositions including such compounds, are disclosed. Such compounds include a compound of the formula (Formula II):

M is a Group 2 metal (e.g., Ca, Sr, Ba), a Group 3 metal (e.g., Sc, Y, La), a Lanthanide (e.g., Pr, Nd), or a combination thereof. Preferably M is Ca, Sr, or Ba.
Each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; and z is from 0 to 10.

Each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group (e.g., an alkyl group, and preferably, for example, an alkyl moiety); R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the two [beta]-diketiminate ligands shown in Formula II have different structures. In certain embodiments, each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group having 1 to 10 carbon atoms (e.g., methyl, ethyl, propyl, isopropyl, butyl, sec-butyl, tert-butyl). In certain embodiments, R<1> = R<5> = tert-butyl, and R<6> = R<10> = isopropyl. In certain embodiments, R<2>, R<4>, R<7>, and/or R<9> are methyl. In certain embodiments, R<3> and/or R<8> are H. Such an exemplary compound of Formula II is the compound in which R<2> = R<4> = R<7> = R<9> = methyl, R<3> = R<8> = H, R<1> = R<5> = tert-butyl, and R<6> = R<10> = isopropyl.

L represents a wide variety of optional anionic ligands. Exemplary anionic ligands (L) include halides, alkoxide groups, amide groups, mercaptide groups, cyanide, alkyl groups, amidinate groups, guanidinate groups, isoureate groups, [beta]-diketonate groups, [beta]-iminoketonate groups, [beta]-diketiminate groups, and combinations thereof. In certain embodiments, L is a [beta]-diketiminate group having a structure that is the same as that of one of the [beta]-diketiminate ligands shown in Formula II. In other certain embodiments, L is a [beta]-diketiminate group (e.g., symmetric or unsymmetric) having a structure that is different than either of the [beta]-diketiminate ligands shown in Formula II.

Y represents an optional neutral ligand.
Exemplary neutral ligands (Y) include carbonyl (CO), nitrosyl (NO), ammonia (NH3), amines (NR3), nitrogen (N2), phosphines (PR3), alcohols (ROH), water (H2O), tetrahydrofuran, and combinations thereof, wherein each R independently represents hydrogen or an organic group. The number of optional neutral ligands (Y) is represented by z, which is from 0 to 10, and preferably from 0 to 3. More preferably, Y is not present (i.e., z = 0).

In one embodiment, a metal-containing compound including different symmetrical [beta]-diketiminate ligands can be made, for example, by a method that includes combining components including at least two different symmetrical [beta]-diketiminate ligand sources and a metal source. Symmetrical [beta]-diketiminate ligand sources can be made as described, for example, in El-Kaderi et al., Organometallics, 23:4995-5002 (2004).

An exemplary method includes combining components including: a ligand source of the formula (Formula III): or a tautomer thereof; a ligand source of the formula (Formula IV): or a tautomer thereof; optionally a source for an anionic ligand L (e.g., as described herein); optionally a source for a neutral ligand Y (e.g., as described herein); and a metal (M) source under conditions sufficient to form the metal-containing compound. Preferably, the components are combined in an organic solvent (e.g., heptane, toluene, or diethyl ether), typically under mixing or stirring conditions, and allowed to react at a convenient temperature (e.g., room temperature or below, refluxing or above, or an intermediate temperature) for a length of time to form a sufficient amount of the desired product. Preferably, the components are combined under an inert atmosphere (e.g., argon), typically in the substantial absence of water.

The metal (M) source is a Group II metal source, a Group III metal source, a Lanthanide metal source, or a combination thereof.
Exemplary metal sources include, for example, a M(II) bis(hexamethyldisilazane), a M(II) bis(hexamethyldisilazane)bis(tetrahydrofuran), or combinations thereof. Each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group (e.g., an alkyl group, and preferably, for example, an alkyl moiety); R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the ligand sources shown in Formula III and Formula IV have different structures.

The method can provide a metal-containing compound of the formula (Formula II): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> are as defined above, n represents the valence state of the metal, and z is from 0 to 10. Specifically, the present invention contemplates tautomers of Formula IV including, for example,

In another embodiment, a metal-containing compound including different symmetrical [beta]-diketiminate ligands can be made, for example, by ligand exchange reactions between metal-containing compounds including different symmetrical [beta]-diketiminate ligands. Such an exemplary method includes combining components including a compound of the formula (Formula I): and a compound of the formula (Formula VI): under conditions sufficient to form the metal-containing compound. Preferably, the components are combined in an organic solvent (e.g., heptane, toluene, or diethyl ether), typically under mixing or stirring conditions, and allowed to react at a convenient temperature (e.g., room temperature or below, refluxing or above, or an intermediate temperature) for a length of time to form a sufficient amount of the desired product.
Preferably, the components are combined under an inert atmosphere (e.g., argon), typically in the substantial absence of water.

Each M is a Group 2 metal, a Group 3 metal, a Lanthanide, or a combination thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; and x is from 1 to n. Each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group; R<1> = R<5>, R<2> = R<4>, R<6> = R<10>, and R<7> = R<9>; and the [beta]-diketiminate ligands shown in Formula I and Formula VI have different structures.

The method can provide a metal-containing compound of the formula (Formula II): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, R<10>, n, and z are as defined above.

OTHER METAL-CONTAINING COMPOUNDS

Precursor compositions that include a metal-containing compound that includes at least one [beta]-diketiminate ligand can be useful for depositing metal-containing layers using vapor deposition methods. In addition, such vapor deposition methods can also include precursor compositions that include one or more different metal-containing compounds. Such precursor compositions can be deposited/chemisorbed, for example in an ALD process discussed more fully below, substantially simultaneously with or sequentially to, the precursor compositions including metal-containing compounds with at least one [beta]-diketiminate ligand. The metals of such different metal-containing compounds can include, for example, Ti, Ta, Bi, Hf, Zr, Pb, Nb, Mg, Al, and combinations thereof.
Suitable different metal-containing compounds include, for example, tetrakis titanium isopropoxide, titanium tetrachloride, trichlorotitanium dialkylamides, tetrakis titanium dialkylamides, tetrakis hafnium dialkylamides, trimethyl aluminum, zirconium (IV) chloride, pentakis tantalum ethoxide, and combinations thereof.

VAPOR DEPOSITION METHODS

The metal-containing layer can be deposited, for example, on a substrate (e.g., a semiconductor substrate or substrate assembly). "Semiconductor substrate" or "substrate assembly" as used herein refer to a semiconductor substrate such as a base semiconductor layer or a semiconductor substrate having one or more layers, structures, or regions formed thereon. A base semiconductor layer is typically the lowest layer of silicon material on a wafer or a silicon layer deposited on another material, such as silicon on sapphire. When reference is made to a substrate assembly, various process steps may have been previously used to form or define regions, junctions, various structures or features, and openings such as transistors, active areas, diffusions, implanted regions, vias, contact openings, high aspect ratio openings, capacitor plates, barriers for capacitors, etc.

"Layer," as used herein, refers to any layer that can be formed on a substrate from one or more precursors and/or reactants according to the deposition process described herein. The term "layer" is meant to include layers specific to the semiconductor industry, such as, but not limited to, a barrier layer, dielectric layer (i.e., a layer having a high dielectric constant), and conductive layer. The term "layer" is synonymous with the term "film" frequently used in the semiconductor industry. The term "layer" is also meant to include layers found in technology outside of semiconductor technology, such as coatings on glass. For example, such layers can be formed directly on fibers, wires, etc., which are substrates other than semiconductor substrates.
Further, the layers can be formed directly on the lowest semiconductor surface of the substrate, or they can be formed on any of a variety of layers (e.g., surfaces) as in, for example, a patterned wafer.

The layers or films formed may be in the form of metal-containing films, such as reduced metals, metal silicates, metal oxides, metal nitrides, etc., as well as combinations thereof. For example, a metal oxide layer may include a single metal, the metal oxide layer may include two or more different metals (i.e., it is a mixed metal oxide), or a metal oxide layer may optionally be doped with other metals. If the metal oxide layer includes two or more different metals, the metal oxide layer can be in the form of alloys, solid solutions, or nanolaminates. Preferably, these have dielectric properties. The metal oxide layer (particularly if it is a dielectric layer) preferably includes one or more of BaTiO3, SrTiO3, CaTiO3, (Ba,Sr)TiO3, SrTa2O6, SrBi2Ta2O9 (SBT), SrHfO3, SrZrO3, BaHfO3, BaZrO3, (Pb,Ba)Nb2O6, (Sr,Ba)Nb2O6, Pb[(Sc,Nb)0.575Ti0.425]O3 (PSNT), La2O3, Y2O3, LaAlO3, YAlO3, Pr2O3, Ba(Li,Nb)1/4O3-PbTiO3, and Ba0.6Sr0.4TiO3-MgO.

Surprisingly, the metal oxide layer formed according to the present invention is essentially free of carbon. Preferably, metal-oxide layers formed by the systems and methods of the present invention are essentially free of carbon, hydrogen, halides, phosphorus, sulfur, nitrogen, or compounds thereof. As used herein, "essentially free" is defined to mean that the metal-containing layer may include a small amount of the above impurities.
For example, for metal-oxide layers, "essentially free" means that the above impurities are present in an amount of less than 1 atomic percent, such that they have a minor effect on the chemical properties, mechanical properties, physical form (e.g., crystallinity), or electrical properties of the film.

Various metal-containing compounds can be used in various combinations, optionally with one or more organic solvents (particularly for CVD processes), to form a precursor composition. Advantageously, some of the metal-containing compounds disclosed herein can be used in ALD without adding solvents. "Precursor" and "precursor composition" as used herein refer to a composition usable for forming, either alone or with other precursor compositions (or reactants), a layer on a substrate assembly in a deposition process. Further, one skilled in the art will recognize that the type and amount of precursor used will depend on the content of a layer which is ultimately to be formed using a vapor deposition process. The preferred precursor compositions of the present invention are preferably liquid at the vaporization temperature and, more preferably, liquid at room temperature.

The precursor compositions may be liquids or solids at room temperature (preferably, they are liquids at the vaporization temperature). Typically, they are liquids sufficiently volatile to be employed using known vapor deposition techniques. However, as solids they may also be sufficiently volatile that they can be vaporized or sublimed from the solid state using known vapor deposition techniques.
If they are less volatile solids, they are preferably sufficiently soluble in an organic solvent or have melting points below their decomposition temperatures such that they can be used in flash vaporization, bubbling, microdroplet formation techniques, etc.

Herein, vaporized metal-containing compounds may be used either alone or optionally with vaporized molecules of other metal-containing compounds or optionally with vaporized solvent molecules or inert gas molecules, if used. As used herein, "liquid" refers to a solution or a neat liquid (a liquid at room temperature or a solid at room temperature that melts at an elevated temperature). As used herein, "solution" does not require complete solubility of the solid but may allow for some undissolved solid, as long as there is a sufficient amount of the solid delivered by the organic solvent into the vapor phase for chemical vapor deposition processing. If solvent dilution is used in deposition, the total molar concentration of solvent vapor generated may also be considered as an inert carrier gas.

"Inert gas" or "non-reactive gas," as used herein, is any gas that is generally unreactive with the components it comes in contact with. For example, inert gases are typically selected from a group including nitrogen, argon, helium, neon, krypton, xenon, any other non-reactive gas, and mixtures thereof.
Such inert gases are generally used in one or more purging processes described according to the present invention, and in some embodiments may also be used to assist in precursor vapor transport.Solvents that are suitable for certain embodiments of the present invention may be one or more of the following: aliphatic hydrocarbons or unsaturated hydrocarbons (C3-C20, and preferably C5-C10, cyclic, branched, or linear), aromatic hydrocarbons (C5-C20, and preferably C5-C10), halogenated hydrocarbons, silylated hydrocarbons such as alkylsilanes, alkylsilicates, ethers, polyethers, thioethers, esters, lactones, nitriles, silicone oils, or compounds containing combinations of any of the above or mixtures of one or more of the above. The compounds are also generally compatible with each other, so that mixtures of variable quantities of the metal-containing compounds will not interact to significantly change their physical properties.The precursor compositions of the present invention can, optionally, be vaporized and deposited/chemisorbed substantially simultaneously with, and in the presence of, one or more reaction gases. Alternatively, the metal-containing layers may be formed by alternately introducing the precursor composition and the reaction gas(es) during each deposition cycle. Such reaction gases may typically include oxygen, water vapor, ozone, nitrogen oxides, sulfur oxides, hydrogen, hydrogen sulfide, hydrogen selenide, hydrogen telluride, hydrogen peroxide, ammonia, organic amines, hydrazines (e.g., hydrazine, methylhydrazine, symmetrical and unsymmetrical dimethylhydrazines), silanes, disilanes and higher silanes, diborane, plasma, air, borazene (nitrogen source), carbon monoxide (reductant), alcohols, and any combination of these gases. For example, oxygen-containing sources are typically used for the deposition of metal-oxide layers. 
Preferable optional reaction gases used in the formation of metal-oxide layers include oxidizing gases (e.g., oxygen, ozone, and nitric oxide).Suitable substrate materials of the present invention include conductive materials, semiconductive materials, conductive metal-nitrides, conductive metals, conductive metal oxides, etc. The substrate on which the metal- containing layer is formed is preferably a semiconductor substrate or substrate assembly. A wide variety of semiconductor materials are contemplated, such as for example, borophosphosilicate glass (BPSG), silicon such as, e.g., conductively doped polysilicon, monocrystalline silicon, etc. (for this invention, appropriate forms of silicon are simply referred to as "silicon"), for example in the form of a silicon wafer, tetraethylorthosilicate (TEOS) oxide, spin on glass (i.e., a thin layer of SiO2, optionally doped, deposited by a spin on process), TiN, TaN, W, Ru, Al, Cu, noble metals, etc. A substrate assembly may also contain a layer that includes platinum, iridium, iridium oxide, rhodium, ruthenium, ruthenium oxide, strontium ruthenate, lanthanum nickelate, titanium nitride, tantalum nitride, tantalum-silicon-nitride, silicon dioxide, aluminum, gallium arsenide, glass, etc., and other existing or to-be-developed materials used in semiconductor constructions, such as dynamic random access memory (DRAM) devices, static random access memory (SRAM) devices, and ferroelectric memory (FERAM) devices, for example. For substrates including semiconductor substrates or substrate assemblies, the layers can be formed directly on the lowest semiconductor surface of the substrate, or they can be formed on any of a variety of the layers (i.e., surfaces) as in a patterned wafer, for example.Substrates other than semiconductor substrates or substrate assemblies can also be used in methods of the present invention. 
Any substrate that may advantageously form a metal-containing layer thereon, such as a metal oxide layer, may be used, such substrates including, for example, fibers, wires, etc.

A preferred deposition process for the present invention is a vapor deposition process. Vapor deposition processes are generally favored in the semiconductor industry due to the process capability to quickly provide highly conformal layers even within deep contacts and other openings. The precursor compositions can be vaporized in the presence of an inert carrier gas if desired. Additionally, an inert carrier gas can be used in purging steps in an ALD process (discussed below). The inert carrier gas is typically one or more of nitrogen, helium, argon, etc. In the context of the present invention, an inert carrier gas is one that does not interfere with the formation of the metal-containing layer. Whether done in the presence of an inert carrier gas or not, the vaporization is preferably done in the absence of oxygen to avoid oxygen contamination of the layer (e.g., oxidation of silicon to form silicon dioxide or oxidation of precursor in the vapor phase prior to entry into the deposition chamber).

Chemical vapor deposition (CVD) and atomic layer deposition (ALD) are two vapor deposition processes often employed to form thin, continuous, uniform, metal-containing layers onto semiconductor substrates. Using either vapor deposition process, typically one or more precursor compositions are vaporized in a deposition chamber, optionally combined with one or more reaction gases, and directed to and/or contacted with the substrate to form a metal-containing layer on the substrate. It will be readily apparent to one skilled in the art that the vapor deposition process may be enhanced by employing various related techniques such as plasma assistance, photo assistance, and laser assistance, as well as other techniques.
Chemical vapor deposition (CVD) has been extensively used for the preparation of metal-containing layers, such as dielectric layers, in semiconductor processing because of its ability to provide conformal and high quality dielectric layers at relatively fast processing times. Typically, the desired precursor compositions are vaporized and then introduced into a deposition chamber containing a heated substrate with optional reaction gases and/or inert carrier gases in a single deposition cycle. In a typical CVD process, vaporized precursors are contacted with reaction gas(es) at the substrate surface to form a layer (e.g., dielectric layer). The single deposition cycle is allowed to continue until the desired thickness of the layer is achieved.Typical CVD processes generally employ precursor compositions in vaporization chambers that are separated from the process chamber wherein the deposition surface or wafer is located. For example, liquid precursor compositions are typically placed in bubblers and heated to a temperature at which they vaporize, and the vaporized liquid precursor composition is then transported by an inert carrier gas passing over the bubbler or through the liquid precursor composition. The vapors are then swept through a gas line to the deposition chamber for depositing a layer on substrate surface(s) therein. Many techniques have been developed to precisely control this process. For example, the amount of precursor composition transported to the deposition chamber can be precisely controlled by the temperature of the reservoir containing the precursor composition and by the flow of an inert carrier gas bubbled through or passed over the reservoir.A typical CVD process may be carried out in a chemical vapor deposition reactor, such as a deposition chamber available under the trade designation of 7000 from Genus, Inc. (Sunnyvale, CA), a deposition chamber available under the trade designation of 5000 from Applied Materials, Inc. 
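The bubbler delivery control described above can be illustrated with a back-of-the-envelope estimate. The sketch below is not part of the disclosure: it assumes ideal saturation of the carrier gas by the precursor vapor (a common textbook approximation), and the function name and numbers are illustrative only. Under that assumption, the precursor delivery rate scales with the carrier flow and with the precursor vapor pressure, which the reservoir temperature sets.

```python
# Hypothetical estimate of precursor delivery from a bubbler, assuming the
# carrier gas leaves fully saturated with precursor vapor (an idealization).
# Mole balance under that assumption: n_precursor / n_carrier =
# P_vap / (P_total - P_vap), so precursor flow scales the same way.

def precursor_flow_sccm(carrier_sccm, p_vap_torr, p_total_torr):
    """Precursor flow entrained by the carrier gas, ideal-saturation model."""
    if p_vap_torr >= p_total_torr:
        raise ValueError("vapor pressure must be below total pressure")
    return carrier_sccm * p_vap_torr / (p_total_torr - p_vap_torr)


# Example: 100 sccm carrier gas, 1 Torr precursor vapor pressure (set by the
# reservoir temperature), 760 Torr total pressure.
print(round(precursor_flow_sccm(100.0, 1.0, 760.0), 3))  # ≈ 0.132 sccm
```

This mirrors the statement in the text that delivery is precisely controlled by the reservoir temperature (which fixes P_vap) and by the carrier flow bubbled through or passed over the reservoir.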
(Santa Clara, CA), or a deposition chamber available under the trade designation of Prism from Novellus, Inc. (San Jose, CA). However, any deposition chamber suitable for performing CVD may be used.

Several modifications of the CVD process and chambers are possible, for example, using atmospheric pressure chemical vapor deposition, low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD), hot wall or cold wall reactors, or any other chemical vapor deposition technique. Furthermore, pulsed CVD can be used, which is similar to ALD (discussed in greater detail below) but does not rigorously avoid intermixing of precursor and reactant gas streams. Also, for pulsed CVD, the deposition thickness is dependent on the exposure time, as opposed to ALD, which is self-limiting (discussed in more detail below).

Alternatively, and preferably, the vapor deposition process employed in the methods of the present invention is a multi-cycle atomic layer deposition (ALD) process. Such a process is advantageous, in particular over a CVD process, in that it provides for improved control of atomic-level thickness and uniformity of the deposited layer (e.g., dielectric layer) by providing a plurality of deposition cycles. The self-limiting nature of ALD provides a method of depositing a film on a wide variety of reactive surfaces including, for example, surfaces with irregular topographies, with better step coverage than is available with CVD or other "line of sight" deposition methods (e.g., evaporation and physical vapor deposition, i.e., PVD or sputtering). Further, ALD processes typically expose the metal-containing compounds to lower volatilization and reaction temperatures, which tends to decrease degradation of the precursor as compared to, for example, typical CVD processes. See, for example, U.S. Application Serial No.
11/168,160 (entitled "ATOMIC LAYER DEPOSITION SYSTEMS AND METHODS INCLUDING METAL BETA-DIKETIMINATE COMPOUNDS"), filed June 28, 2005.

Generally, in an ALD process each reactant is pulsed sequentially onto a suitable substrate, typically at deposition temperatures of at least 25°C, preferably at least 150°C, and more preferably at least 200°C. Typical ALD deposition temperatures are no greater than 400°C, preferably no greater than 350°C, and even more preferably no greater than 250°C. These temperatures are generally lower than those presently used in CVD processes, which typically include deposition temperatures at the substrate surface of at least 150°C, preferably at least 200°C, and more preferably at least 250°C. Typical CVD deposition temperatures are no greater than 600°C, preferably no greater than 500°C, and even more preferably no greater than 400°C.

Under such conditions the film growth by ALD is typically self-limiting (i.e., when the reactive sites on a surface are used up in an ALD process, the deposition generally stops), ensuring not only excellent conformality but also good large-area uniformity plus simple and accurate composition and thickness control. Due to alternate dosing of the precursor compositions and/or reaction gases, detrimental vapor-phase reactions are inherently eliminated, in contrast to the CVD process that is carried out by continuous co-reaction of the precursors and/or reaction gases. (See Vehkamaki et al., "Growth of SrTiO3 and BaTiO3 Thin Films by Atomic Layer Deposition," Electrochemical and Solid-State Letters, 2(10):504-506 (1999)).

A typical ALD process includes exposing a substrate (which may optionally be pretreated with, for example, water and/or ozone) to a first chemical to accomplish chemisorption of the species onto the substrate. The term "chemisorption" as used herein refers to the chemical adsorption of vaporized reactive metal-containing compounds on the surface of a substrate.
The adsorbed species are typically irreversibly bound to the substrate surface as a result of relatively strong binding forces characterized by high adsorption energies (e.g., >30 kcal/mol), comparable in strength to ordinary chemical bonds. The chemisorbed species typically form a monolayer on the substrate surface. (See "The Condensed Chemical Dictionary," 10th edition, revised by G. G. Hawley, published by Van Nostrand Reinhold Co., New York, 225 (1981).) The technique of ALD is based on the principle of the formation of a saturated monolayer of reactive precursor molecules by chemisorption. In ALD one or more appropriate precursor compositions or reaction gases are alternately introduced (e.g., pulsed) into a deposition chamber and chemisorbed onto the surfaces of a substrate. Each sequential introduction of a reactive compound (e.g., one or more precursor compositions and one or more reaction gases) is typically separated by an inert carrier gas purge. Each precursor composition co-reaction adds a new atomic layer to previously deposited layers to form a cumulative solid layer. The cycle is repeated to gradually form the desired layer thickness. It should be understood that ALD can alternately utilize one precursor composition, which is chemisorbed, and one reaction gas, which reacts with the chemisorbed species.

Practically, chemisorption might not occur on all portions of the deposition surface (e.g., previously deposited ALD material). Nevertheless, such an imperfect monolayer is still considered a monolayer in the context of the present invention. In many applications, merely a substantially saturated monolayer may be suitable.
A substantially saturated monolayer is one that will still yield a deposited monolayer or less of material exhibiting the desired quality and/or properties.

A typical ALD process includes exposing an initial substrate to a first chemical species A (e.g., a metal-containing compound as described herein) to accomplish chemisorption of the species onto the substrate. Species A can react either with the substrate surface or with Species B (described below) but not with itself. Typically in chemisorption, one or more of the ligands of Species A is displaced by reactive groups on the substrate surface. Theoretically, the chemisorption forms a monolayer that is uniformly one atom or molecule thick on the entire exposed initial substrate, the monolayer being composed of Species A, less any displaced ligands. In other words, a saturated monolayer is substantially formed on the substrate surface. Practically, chemisorption may not occur on all portions of the substrate. Nevertheless, such a partial monolayer is still understood to be a monolayer in the context of the present invention. In many applications, merely a substantially saturated monolayer may be suitable. In one aspect, a substantially saturated monolayer is one that will still yield a deposited monolayer or less of material exhibiting the desired quality and/or properties. In another aspect, a substantially saturated monolayer is one that is self-limited to further reaction with precursor.

The first species (e.g., substantially all non-chemisorbed molecules of Species A) as well as displaced ligands are purged from over the substrate, and a second chemical species, Species B (e.g., a different metal-containing compound or reactant gas), is provided to react with the monolayer of Species A. Species B typically displaces the remaining ligands from the Species A monolayer and thereby is chemisorbed and forms a second monolayer. This second monolayer displays a surface which is reactive only to Species A.
Non-chemisorbed Species B, as well as displaced ligands and other byproducts of the reaction, are then purged and the steps are repeated with exposure of the Species B monolayer to vaporized Species A. Optionally, the second species can react with the first species, but not chemisorb additional material thereto. That is, the second species can cleave some portion of the chemisorbed first species, altering such monolayer without forming another monolayer thereon, but leaving reactive sites available for formation of subsequent monolayers. In other ALD processes, a third species or more may be successively chemisorbed (or reacted) and purged just as described for the first and second species, with the understanding that each introduced species reacts with the monolayer produced immediately prior to its introduction. Optionally, the second species (or third or subsequent) can include at least one reaction gas if desired.

Thus, the use of ALD provides the ability to improve the control of thickness, composition, and uniformity of metal-containing layers on a substrate. For example, depositing thin layers of metal-containing compound in a plurality of cycles provides a more accurate control of ultimate film thickness. This is particularly advantageous when the precursor composition is directed to the substrate and allowed to chemisorb thereon, preferably further including at least one reaction gas that reacts with the chemisorbed species on the substrate, and even more preferably wherein this cycle is repeated at least once. Purging of excess vapor of each species following deposition/chemisorption onto a substrate may involve a variety of techniques including, but not limited to, contacting the substrate and/or monolayer with an inert carrier gas and/or lowering pressure to below the deposition pressure to reduce the concentration of a species contacting the substrate and/or chemisorbed species. Examples of carrier gases, as discussed above, may include N2, Ar, He, etc.
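The pulse/purge sequence described above can be summarized in a short model. The sketch below is purely illustrative (the function name and default growth rate are assumptions, not taken from the disclosure); it shows the defining property of a self-limiting ALD cycle, namely that film thickness is set by the number of cycles rather than by precursor dose beyond saturation.

```python
# Illustrative model of the ALD cycle described above (not the patented
# process): pulse Species A to saturation, purge, pulse Species B, purge;
# each complete cycle adds at most one (sub)monolayer of material.

def ald_deposit(cycles, growth_per_cycle_angstrom=1.0):
    """Self-limiting growth: thickness depends only on cycle count."""
    thickness = 0.0
    for _ in range(cycles):
        # Pulse A: chemisorption saturates the available surface sites.
        # Purge: remove non-chemisorbed A and displaced ligands.
        # Pulse B: react with the A monolayer, regenerating reactive sites.
        # Purge: remove non-chemisorbed B and reaction byproducts.
        thickness += growth_per_cycle_angstrom
    return thickness

# 100 cycles at ~1 A/cycle yields a ~100 A film.
print(ald_deposit(100))  # 100.0
```

Because each pulse stops once the surface sites are consumed, overdosing a precursor in this model (or in ALD generally) does not increase the per-cycle growth, in contrast to pulsed CVD where thickness tracks exposure time.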
Additionally, purging may instead include contacting the substrate and/or monolayer with any substance that allows chemisorption byproducts to desorb and reduces the concentration of a contacting species preparatory to introducing another species. The contacting species may be reduced to some suitable concentration or partial pressure known to those skilled in the art based on the specifications for the product of a particular deposition process.

ALD is often described as a self-limiting process, in that a finite number of sites exist on a substrate to which the first species may form chemical bonds. The second species might only react with the surface created from the chemisorption of the first species and thus, may also be self-limiting. Once all of the finite number of sites on a substrate are bonded with a first species, the first species will not bond to other of the first species already bonded with the substrate. However, process conditions can be varied in ALD to promote such bonding and render ALD not self-limiting, e.g., more like pulsed CVD. Accordingly, ALD may also encompass a species forming other than one monolayer at a time by stacking of a species, forming a layer more than one atom or molecule thick.

The described method indicates the "substantial absence" of the second precursor (i.e., second species) during chemisorption of the first precursor since insignificant amounts of the second precursor might be present.
According to the knowledge and the preferences of those with ordinary skill in the art, a determination can be made as to the tolerable amount of second precursor, and process conditions selected to achieve the substantial absence of the second precursor.

Thus, during the ALD process, numerous consecutive deposition cycles are conducted in the deposition chamber, each cycle depositing a very thin metal-containing layer (usually less than one monolayer, such that the growth rate on average is 0.2 to 3.0 Angstroms per cycle), until a layer of the desired thickness is built up on the substrate of interest. The layer deposition is accomplished by alternately introducing (i.e., by pulsing) precursor composition(s) into the deposition chamber containing a substrate, chemisorbing the precursor composition(s) as a monolayer onto the substrate surfaces, purging the deposition chamber, then introducing to the chemisorbed precursor composition(s) reaction gases and/or other precursor composition(s) in a plurality of deposition cycles until the desired thickness of the metal-containing layer is achieved. Preferred thicknesses of the metal-containing layers of the present invention are at least 1 angstrom (A), more preferably at least 5 A, and more preferably at least 10 A. Additionally, preferred film thicknesses are typically no greater than 500 A, more preferably no greater than 400 A, and more preferably no greater than 300 A.

The pulse duration of precursor composition(s) and inert carrier gas(es) is generally of a duration sufficient to saturate the substrate surface. Typically, the pulse duration is at least 0.1 second, preferably at least 0.2 second, and more preferably at least 0.5 second. Preferred pulse durations are generally no greater than 5 seconds, and preferably no greater than 3 seconds.

In comparison to the predominantly thermally driven CVD, ALD is predominantly chemically driven. Thus, ALD may advantageously be conducted at much lower temperatures than CVD.
During the ALD process, the substrate temperature may be maintained at a temperature sufficiently low to maintain intact bonds between the chemisorbed precursor composition(s) and the underlying substrate surface and to prevent decomposition of the precursor composition(s). The temperature, on the other hand, must be sufficiently high to avoid condensation of the precursor composition(s). Typically the substrate is kept at a temperature of at least 25°C, preferably at least 150°C, and more preferably at least 200°C. Typically the substrate is kept at a temperature of no greater than 400°C, preferably no greater than 300°C, and more preferably no greater than 250°C, which, as discussed above, is generally lower than temperatures presently used in typical CVD processes. Thus, the first species or precursor composition is chemisorbed at this temperature. Surface reaction of the second species or precursor composition can occur at substantially the same temperature as chemisorption of the first precursor or, optionally but less preferably, at a substantially different temperature. Clearly, some small variation in temperature, as judged by those of ordinary skill, can occur but still be considered substantially the same temperature by providing a reaction rate statistically the same as would occur at the temperature of the first precursor chemisorption. Alternatively, chemisorption and subsequent reactions could instead occur at substantially exactly the same temperature.

For a typical vapor deposition process, the pressure inside the deposition chamber is at least 10⁻⁸ torr (1.3 × 10⁻⁶ Pa), preferably at least 10⁻⁷ torr (1.3 × 10⁻⁵ Pa), and more preferably at least 10⁻⁶ torr (1.3 × 10⁻⁴ Pa). Further, deposition pressures are typically no greater than 10 torr (1.3 × 10³ Pa), preferably no greater than 1 torr (1.3 × 10² Pa), and more preferably no greater than 10⁻¹ torr (13 Pa).
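The growth rates and target thicknesses stated above directly imply how many deposition cycles a given film requires. The sketch below (an illustration only; the function name is hypothetical and not from the disclosure) does that arithmetic.

```python
import math

# Back-of-envelope estimate using the figures quoted above: average ALD
# growth of 0.2-3.0 A per cycle and preferred film thicknesses of
# roughly 10-300 A. Cycles needed = target thickness / growth per cycle.

def cycles_needed(target_angstrom, growth_per_cycle_angstrom):
    return math.ceil(target_angstrom / growth_per_cycle_angstrom)

# A 300 A film at the slowest stated rate (0.2 A/cycle) needs 1500 cycles;
# a 10 A film at the fastest stated rate (3.0 A/cycle) needs only 4.
print(cycles_needed(300, 0.2))  # 1500
print(cycles_needed(10, 3.0))   # 4
```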
Typically, the deposition chamber is purged with an inert carrier gas after the vaporized precursor composition(s) have been introduced into the chamber and/or reacted for each cycle. The inert carrier gas/gases can also be introduced with the vaporized precursor composition(s) during each cycle.

The reactivity of a precursor composition can significantly influence the process parameters in ALD. Under typical CVD process conditions, a highly reactive compound may react in the gas phase, generating particulates, depositing prematurely on undesired surfaces, producing poor films, and/or yielding poor step coverage or otherwise yielding non-uniform deposition. For at least such reason, a highly reactive compound might be considered not suitable for CVD. However, some compounds not suitable for CVD are superior ALD precursors. For example, if the first precursor is gas phase reactive with the second precursor, such a combination of compounds might not be suitable for CVD, although they could be used in ALD. In the CVD context, concern might also exist regarding sticking coefficients and surface mobility, as known to those skilled in the art, when using highly gas-phase reactive precursors; however, little or no such concern would exist in the ALD context.

After layer formation on the substrate, an annealing process may optionally be performed in situ in the deposition chamber in a reducing, inert, plasma, or oxidizing atmosphere. Preferably, the annealing temperature is at least 400°C, more preferably at least 600°C. The annealing temperature is preferably no greater than 1000°C, more preferably no greater than 750°C, and even more preferably no greater than 700°C.

The annealing operation is preferably performed for a time period of at least 0.5 minute, more preferably for a time period of at least 1 minute.
Additionally, the annealing operation is preferably performed for a time period of no greater than 60 minutes, and more preferably for a time period of no greater than 10 minutes. One skilled in the art will recognize that such temperatures and time periods may vary. For example, furnace anneals and rapid thermal annealing may be used, and further, such anneals may be performed in one or more annealing steps.

As stated above, the use of the compounds and methods of forming films of the present invention are beneficial for a wide variety of thin film applications in semiconductor structures, particularly those using high dielectric materials. For example, such applications include gate dielectrics and capacitors such as planar cells, trench cells (e.g., double sidewall trench capacitors), stacked cells (e.g., crown, V-cell, delta cell, multi-fingered, or cylindrical container stacked capacitors), as well as field effect transistor devices.

A system that can be used to perform vapor deposition processes (chemical vapor deposition or atomic layer deposition) of the present invention is shown in Figure 1. The system includes an enclosed vapor deposition chamber 10, in which a vacuum may be created using turbo pump 12 and backing pump 14. One or more substrates 16 (e.g., semiconductor substrates or substrate assemblies) are positioned in chamber 10. A constant nominal temperature is established for substrate 16, which can vary depending on the process used. Substrate 16 may be heated, for example, by an electrical resistance heater 18 on which substrate 16 is mounted. Other known methods of heating the substrate may also be utilized.

In this process, precursor compositions as described herein, 60 and/or 61, are stored in vessels 62. The precursor composition(s) are vaporized and separately fed along lines 64 and 66 to the deposition chamber 10 using, for example, an inert carrier gas 68. A reaction gas 70 may be supplied along line 72 as needed.
Also, a purge gas 74, which is often the same as the inert carrier gas 68, may be supplied along line 76 as needed. As shown, a series of valves 80-85 are opened and closed as required.

The following examples are offered to further illustrate various specific embodiments and techniques of the present invention. It should be understood, however, that many variations and modifications understood by those of ordinary skill in the art may be made while remaining within the scope of the present invention. Therefore, the scope of the invention is not intended to be limited by the following examples. Unless specified otherwise, all percentages shown in the examples are percentages by weight.

EXAMPLES

EXAMPLE 1: Synthesis and Characterization of a Ligand Source of Formula III, with R¹ = tert-butyl; R⁵ = isopropyl; R² = R⁴ = methyl; and R³ = H: N-isopropyl-(4-tert-butylimino)-2-penten-2-amine.

An oven-dried 1-L Schlenk flask was charged with 38.0 g of triethyloxonium tetrafluoroborate (0.2 mol) and 75 mL diethyl ether under argon atmosphere, and fitted with an addition funnel. 250 mL of dichloromethane and 28.2 grams of N-isopropyl-4-amino-3-penten-2-one (0.2 mol) were charged into the addition funnel and this solution was added dropwise, then stirred for 30 minutes. A solution of 21 mL tert-butyl amine (0.2 mol) and 25 mL dichloromethane was charged into the addition funnel and added to the reaction solution, which was then stirred overnight. Volatiles were then removed in vacuo and the resulting yellow-orange solid was washed with two 100 mL aliquots of cold ethyl acetate while the flask was placed in an ice-bath. After decanting off each ethyl acetate wash, the yellow solid residue was added to a mixture of 500 mL benzene and 500 mL water containing 8.0 g sodium hydroxide (0.2 mol). The mixture was stirred for three minutes, then the organic phase was separated. The aqueous phase was extracted three times, each with 100 mL diethyl ether portions.
All the organic phases were combined, dried over sodium sulfate, and concentrated on a rotary evaporator. The crude product was then distilled through a 20 cm glass-bead packed column and short path still head. The desired product was collected in 96% pure form at 34-42°C, 40 mTorr (5.3 Pa) pressure. The only impurity observed by gas chromatography-mass spectrometry (GCMS) was N-isopropyl-(4-isopropylimino)-2-penten-2-amine. The amount of N-isopropyl-(4-isopropylimino)-2-penten-2-amine formed may be limited by limiting the reaction time (e.g., 30 minutes after addition of the tert-butyl amine). Allowing the reaction to stir overnight may result in the formation of more N-isopropyl-(4-isopropylimino)-2-penten-2-amine.

EXAMPLE 2: Synthesis and Characterization of a Metal-containing Compound of Formula I, with M = Sr (n = 2); R¹ = tert-butyl; R⁵ = isopropyl; R² = R⁴ = methyl; R³ = H; x = 2; and z = 0: Strontium bis(N-isopropyl-(4-tert-butylimino)-2-penten-2-aminato).

In a dry box, a 500 mL Schlenk flask was charged with 13.819 g of strontium bis(hexamethyldisilazane)bis(tetrahydrofuran) (25 mmol) and 100 mL toluene. A second Schlenk flask was charged with 9.800 g of N-isopropyl-(4-tert-butylimino)-2-penten-2-amine (50 mmol) and 100 mL toluene. The ligand solution was added to the strontium solution, immediately producing a bright yellow reaction solution, which was stirred for 60 hours. Volatiles were then removed in vacuo. The crude product, a bright yellow solid, was charged into a sublimator in a dry box. The sublimator was attached to a vacuum manifold in a fume hood, evacuated to less than 100 mTorr (13 Pa), and heated to 115°C. A total of 8.204 g of off-white crystalline solid was sublimed in three batches (68.5% yield). Elemental Analysis calculated for C24H46N4Sr: Sr, 18.3%. Found: 18.5%.
¹H nuclear magnetic resonance (NMR) (C6D6, 25°C, δ): 4.234 (s, 2H, β-CH), 3.586 (septet, J=6.0 Hz, 2H, CH(CH3)2), 1.989 (s, 6H, α-C-CH3 (isopropyl side)), 1.907 (s, 6H, α-C-CH3 (tert-butyl side)), 1.305 (s, 18H, C(CH3)3), 1.200 (d, J=6.0 Hz, 12H, CH(CH3)2); ¹³C{¹H} (C6D6, 25°C, δ): 161.19 (s, α-C-CH3 (isopropyl side)), 160.44 (s, α-C-CH3 (tert-butyl side)), 88.33 (s, β-CH), 54.07 (s, C(CH3)3), 49.86 (s, CH(CH3)2), 32.44 (s, C(CH3)3), 26.50 (s, CH(CH3)2), 24.84 (s, α-C-CH3 (tert-butyl side)), 22.09 (s, α-C-CH3 (isopropyl side)).

EXAMPLE 3: Synthesis and Characterization of a Metal-containing Compound of Formula II, with M = Sr (n = 2); R¹ = R⁵ = tert-butyl; R⁶ = R¹⁰ = isopropyl; R² = R⁴ = R⁷ = R⁹ = methyl; R³ = R⁸ = H; and z = 0: Strontium (N-isopropyl-(4-isopropylimino)-2-penten-2-aminato)(N-tert-butyl-(4-tert-butylimino)-2-penten-2-aminato).

In a dry box, a 500 mL Schlenk flask was charged with 5.526 g of strontium bis(hexamethyldisilazane) (10 mmol) and 100 mL toluene. A solution of 2.104 g N-tert-butyl-(4-tert-butylimino)-2-penten-2-amine (10 mmol, prepared according to literature) in 20 mL toluene was added to the reaction flask. The reaction solution was stirred for 18 hours. A solution of 1.823 g N-isopropyl-(4-isopropylimino)-2-penten-2-amine (10 mmol, prepared according to literature) in 20 mL toluene was added to the reaction flask. The reaction solution was then stirred an additional 24 hours. Volatiles were removed in vacuo to afford a red-brown solid, which was charged into a sublimator in a dry box (4.70 g, 9.98 mmol). The sublimator was evacuated on a vacuum manifold in a hood and heated. At around 80°C, the pot residue appeared to begin to melt and bump. A yellow-brown condensate was collected on the cold finger while heating the pot at 112°C at 115 mTorr (15.3 Pa).
2.856 g of a yellow semi-crystalline but somewhat oily solid was recovered from the cold finger (59.7% yield). Analysis by proton NMR indicates that the sublimed material consists of a 1:1:1 mixture of the title compound with Strontium bis(N-isopropyl-(4-isopropylimino)-2-penten-2-aminato) and Strontium bis(N-tert-butyl-(4-tert-butylimino)-2-penten-2-aminato). The material also contains a 0.3 relative ratio of N-tert-butyl-(4-tert-butylimino)-2-penten-2-amine. The chemical shifts for the title compound are as follows: ¹H NMR (C6D6, 25°C, δ): 4.218 (s, 2H, β-CH), 3.586 (septet, J=6.0 Hz, 2H, CH(CH3)2), 1.990 (s, 6H, α-C-CH3 (tert-butyl)), 1.865 (s, 6H, α-C-CH3 (isopropyl)), 1.325 (s, 18H, C(CH3)3), 1.172 (d, J=6.0 Hz, 12H, CH(CH3)2); ¹³C{¹H} (C6D6, 25°C, δ): 160.95 (s, α-C-CH3 (isopropyl)), 160.79 (s, α-C-CH3 (tert-butyl)), 90.05 (s, β-CH (tert-butyl)), 86.51 (s, β-CH (isopropyl)), 53.99 (s, C(CH3)3), 49.93 (s, CH(CH3)2), 32.81 (s, C(CH3)3), 25.06 (s, CH(CH3)2), 24.83 (s, α-C-CH3 (tert-butyl)), 22.05 (s, α-C-CH3 (isopropyl)). Elemental Analysis calculated for C24H46N4Sr: Sr, 18.3%. Found: 17.5%.

EXAMPLE 4: Alternate Synthesis of the Metal-containing Compound Prepared and Characterized in Example 3 by Ligand Exchange Reactions Between Metal-containing Compounds Including Different Symmetrical β-Diketiminate Ligands.

A 50 mL Schlenk flask was charged with 0.50 g of bis(N-tert-butyl-(4-tert-butylimino)-2-penten-2-aminato)strontium (1 mmol), 0.45 g of bis(N-isopropyl-(4-isopropylimino)-2-penten-2-aminato)strontium (1 mmol), and 20 mL toluene. The resulting solution was refluxed for 24 hours, then volatiles were removed in vacuo.
A sample of the resulting yellow solid was submitted for proton NMR analysis, and the results indicated approximately a 1:1:1 mixture of bis(N-tert-butyl-(4-tert-butylimino)-2-penten-2-aminato)strontium:bis(N-isopropyl-(4-isopropylimino)-2-penten-2-aminato)strontium:(N-isopropyl-(4-isopropylimino)-2-penten-2-aminato)(N-tert-butyl-(4-tert-butylimino)-2-penten-2-aminato)strontium, with approximately a 0.3 ratio of free N-tert-butyl-(4-tert-butylimino)-2-penten-2-amine.

The complete disclosures of the patents, patent documents, and publications cited herein are incorporated by reference in their entirety as if each were individually incorporated. Various modifications and alterations to this invention will become apparent to those skilled in the art without departing from the scope and spirit of this invention. It should be understood that this invention is not intended to be unduly limited by the illustrative embodiments and examples set forth herein and that such examples and embodiments are presented by way of example only, with the scope of the invention intended to be limited only by the claims set forth herein as follows.
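The elemental analyses quoted in Examples 2 and 3 (calculated Sr content of 18.3 wt% for C24H46N4Sr) can be reproduced from standard atomic weights. The sketch below is an illustrative cross-check only (the helper names are hypothetical); the atomic weights used are the standard values, an assumption noted here.

```python
# Cross-check of the calculated Sr weight percent for C24H46N4Sr quoted
# in Examples 2 and 3. Standard atomic weights (assumed): C 12.011,
# H 1.008, N 14.007, Sr 87.62.

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "Sr": 87.62}

def weight_percent(formula_counts, element):
    """Weight percent of `element` in a formula given as {symbol: count}."""
    total = sum(ATOMIC_WEIGHT[el] * n for el, n in formula_counts.items())
    return 100.0 * ATOMIC_WEIGHT[element] * formula_counts[element] / total

sr_pct = weight_percent({"C": 24, "H": 46, "N": 4, "Sr": 1}, "Sr")
print(round(sr_pct, 1))  # 18.3
```

The result agrees with the 18.3% figure calculated in the examples, against which the found values of 18.5% and 17.5% are compared.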
Embodiments of the present invention can reduce the power consumption of memory systems by powering down unused portions of memory, independent of operating system activity.
CLAIMS

What is claimed is:

1. A method comprising: relocating data items among a plurality of memory elements comprising a physical memory array, to pack said data items into particular ones of the plurality of memory elements; tracking locations of the data items in the physical memory array with respect to corresponding locations of the data items as defined by an operating system; and reducing a power state of at least one empty memory element among the plurality of memory elements that contains none of the data items.

2. The method of claim 1 further comprising: tracking locations of the data items in the physical memory array with respect to additional corresponding locations of the data items as defined by at least one additional operating system.

3. The method of claim 1 wherein relocating the data items comprises: initiating a relocation of the data items in response to an event selected from a group comprising an expiration of a time period, a new data item written to the physical memory array, and an existing data item deleted from the physical memory array.

4. The method of claim 1 wherein relocating the data items comprises: selecting a particular data item among the plurality of data items; determining if a packed location is available within the physical memory array for the particular data item; and moving the particular data item to the packed location if the packed location is available.

5. The method of claim 4 wherein relocating the data items further comprises: repeating the selecting, determining, and moving until the plurality of data items are packed.

6. The method of claim 4 wherein selecting the particular data item comprises selecting the particular data item from a group comprising a first data item down from a highest address location in the physical memory array, a data item most recently written to the physical memory array, and a first data item beyond an address location defining a packed data boundary.

7.
The method of claim 4 wherein determining if a packed location is available comprises: identifying a first empty address location up from a lowest address location in the physical memory array; and determining if the first empty address location is lower than an address location of the particular data item.

8. The method of claim 1 wherein tracking the locations of the data items comprises: recognizing a changed data item in the plurality of data items; identifying an address location in the physical memory array for the changed data item; and updating a record for the changed data item based on the address location and the corresponding location of the changed data item as defined by the operating system.

9. The method of claim 8 wherein recognizing the changed data item comprises recognizing the changed data item from a group comprising a data item written to the physical memory array, a data item deleted from the physical memory array, and a data item relocated within the physical memory array.

10. The method of claim 8 wherein identifying the address location in the physical memory array for the changed data item comprises: locating an active memory element among the plurality of memory elements that has an empty address location; and writing the changed data item to the empty address location.

11. The method of claim 10 wherein updating the record comprises: registering an entry to a relocation mask including the empty address location and the corresponding location of the changed data item as defined by the operating system.

12. The method of claim 8 wherein identifying the address location in the physical memory array for the changed data item comprises: locating an existing address location for the changed data item in the physical memory array based on the corresponding location of the changed data item as defined by the operating system; and deleting the changed data item from the existing memory location.

13.
The method of claim 12 wherein updating the record comprises: removing an entry from a relocation mask including the existing memory location and the corresponding location of the changed data item as defined by the operating system.

14. The method of claim 8 wherein identifying an address location in the physical memory array for the changed data item comprises: recognizing a new address location in the physical memory array to which the changed data item has been relocated.

15. The method of claim 14 wherein updating the record comprises: applying a previous address of the changed data item in the physical memory array to a relocation mask to find an entry associated with the changed data item; and re-registering the entry to the relocation mask including the new address location and the corresponding location of the changed data item as defined by the operating system.

16. The method of claim 1 wherein reducing the power state comprises an action selected from a group comprising reducing a refresh rate, disabling refreshes, lowering a supply voltage, and disabling a supply voltage.

17. The method of claim 1 wherein reducing the power state comprises: identifying a packed data boundary among the plurality of memory elements, said packed data boundary to separate the plurality of memory elements into empty memory elements and occupied memory elements; determining an amount of quick access memory; setting enough of the empty memory elements to an active power state to supply the amount of quick access memory; and reducing the power state of any remaining empty memory element.

18. The method of claim 17 further comprising: repeating the setting and reducing in response to a change in the packed data boundary or the amount of quick access memory.

19.
A machine readable medium having stored thereon machine executable instructions that, when executed, implement a method comprising: relocating data items among a plurality of memory elements comprising a physical memory array, to pack said data items into particular ones of the plurality of memory elements; tracking locations of the data items in the physical memory array with respect to corresponding locations of the data items as defined by an operating system; and reducing a power state of at least one empty memory element among the plurality of memory elements that contains none of the data items.

20. The machine readable medium of claim 19 wherein relocating the data items comprises: selecting a particular data item among the plurality of data items; determining if a packed location is available within the physical memory array for the particular data item; and moving the particular data item to the packed location if the packed location is available.

21. The machine readable medium of claim 19 wherein tracking the locations of the data items comprises: recognizing a changed data item in the plurality of data items; identifying an address location in the physical memory array for the changed data item; and updating a record for the changed data item based on the address location and the corresponding location of the changed data item as defined by the operating system.

22. The machine readable medium of claim 19 wherein reducing the power state comprises: identifying a packed data boundary among the plurality of memory elements, said packed data boundary to separate the plurality of memory elements into empty memory elements and occupied memory elements; determining an amount of quick access memory; setting enough of the empty memory elements to an active power state to supply the amount of quick access memory; and reducing the power state of any remaining empty memory element.

23.
An apparatus comprising: relocation logic to relocate data items among a plurality of memory elements comprising a physical memory array, to pack said data items into particular ones of the plurality of memory elements; tracking logic to track locations of the data items in the physical memory array with respect to corresponding locations of the data items as defined by an operating system; and power state logic to reduce a power state of at least one empty memory element among the plurality of memory elements that contains none of the data items.

24. The apparatus of claim 23 wherein the relocation logic is further to select a particular data item among the plurality of data items, determine if a packed location is available within the physical memory array for the particular data item, and move the particular data item to the packed location if the packed location is available.

25. The apparatus of claim 23 wherein the tracking logic is further to recognize a changed data item in the plurality of data items, identify an address location in the physical memory array for the changed data item, and update a record for the changed data item based on the address location and the corresponding location of the changed data item as defined by the operating system.

26. The apparatus of claim 23 wherein the power state logic is further to identify a packed data boundary among the plurality of memory elements, said packed data boundary to separate the plurality of memory elements into empty memory elements and occupied memory elements, determine an amount of quick access memory, set enough of the empty memory elements to an active power state to supply the amount of quick access memory, and reduce the power state of any remaining empty memory element.

27.
A system comprising: a notebook computer; and a memory power manager, said memory power manager including relocation logic to relocate data items among a plurality of memory elements comprising a physical memory array, to pack said data items into particular ones of the plurality of memory elements, tracking logic to track locations of the data items in the physical memory array with respect to corresponding locations of the data items as defined by an operating system, and power state logic to reduce a power state of at least one empty memory element among the plurality of memory elements that contains none of the data items. 28. The system of claim 27 wherein the relocation logic is further to select a particular data item among the plurality of data items, determine if a packed location is available within the physical memory array for the particular data item, and move the particular data item to the packed location if the packed location is available. 29. The system of claim 27 wherein the tracking logic is further to recognize a changed data item in the plurality of data items, identify an address location in the physical memory array for the changed data item, and update a record for the changed data item based on the address location and the corresponding location of the changed data item as defined by the operating system. 30. The system of claim 27 wherein the power state logic is further to identify a packed data boundary among the plurality of memory elements, said packed data boundary to separate the plurality of memory elements into empty memory elements and occupied memory elements, determine an amount of quick access memory, set enough of the empty memory elements to an active power state to supply the amount of quick access memory, and reduce the power state of any remaining empty memory element.
OPERATING SYSTEM-INDEPENDENT MEMORY POWER MANAGEMENT

FIELD OF THE INVENTION

The present invention relates to the field of power management. More specifically, the present invention relates to managing memory power, independent of operating system activity.

BACKGROUND

In many computer systems, the memory elements can consume a relatively large amount of power. For example, it is not unusual for memory to represent 20-30% of a typical system's total power consumption. For large server systems, the percentage of total power consumed by memory can be even higher. Power consumption can be an important consideration. For example, in mobile devices, such as notebook computers, personal data assistants, cellular phones, etc., power consumption directly affects battery life. In stationary devices, such as desktop computers, servers, routers, etc., the power they consume can be expensive.

BRIEF DESCRIPTION OF DRAWINGS

Examples of the present invention are illustrated in the accompanying drawings. The accompanying drawings, however, do not limit the scope of the present invention. Similar references in the drawings indicate similar elements.

Figure 1 illustrates an example of a computing system without operating system-independent memory power management.

Figure 2 illustrates an example of a computing system with operating system-independent memory power management according to one embodiment of the present invention.

Figure 3 illustrates an example of a computing system with multiple operating systems according to one embodiment of the present invention.
Figures 4A through 4D illustrate an example of data items in memory locations at four instants in time according to one embodiment of the present invention.

Figure 5 illustrates a functional block diagram according to one embodiment of the present invention.

Figure 6 illustrates one embodiment of a method for relocating data items.

Figure 7 illustrates one embodiment of a method for tracking locations of data items.

Figure 8 illustrates one embodiment of a method for tracking a new data item.

Figure 9 illustrates one embodiment of a method for tracking a deleted data item.

Figure 10 illustrates one embodiment of a method for tracking a relocated data item.

Figure 11 illustrates one embodiment of a method for setting power states of memory elements.

Figure 12 illustrates one embodiment of a hardware system that can perform various functions of the present invention.

Figure 13 illustrates one embodiment of a machine readable medium to store instructions that can implement various functions of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, those skilled in the art will understand that the present invention may be practiced without these specific details, that the present invention is not limited to the depicted embodiments, and that the present invention may be practiced in a variety of alternative embodiments. In other instances, well known methods, procedures, components, and circuits have not been described in detail.

Parts of the description will be presented using terminology commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. Also, parts of the description will be presented in terms of operations performed through the execution of programming instructions.
It is well understood by those skilled in the art that these operations often take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, and otherwise manipulated through, for instance, electrical components.

Various operations will be described as multiple discrete steps performed in turn in a manner that is helpful for understanding the present invention. However, the order of description should not be construed to imply that these operations are necessarily performed in the order they are presented, nor that they are even order dependent. Lastly, repeated usage of the phrase "in one embodiment" does not necessarily refer to the same embodiment, although it may.

Embodiments of the present invention can reduce the power consumption of memory systems by powering down unused portions of memory, independent of operating system activity.

Figure 1 illustrates an example of a typical computing device 100 without the advantages afforded by embodiments of the present invention. Computing device 100 includes an operating system (OS) 110 and a physical memory array 130. Memory array 130 can provide random access memory (RAM) for OS 110. That is, OS 110 can view array 130 as a set of memory locations that are all continuously and equally available to the OS for storing data, and the OS may write data to, or read data from, any of the memory locations at virtually any time.

OS 110 can maintain a page table 120 to keep track of where pages of data are stored in memory array 130. Page table 120 can track the locations by recording the physical addresses of each page of data in memory array 130. For example, this is illustrated in Figure 1 by the arrows pointing from pages A, B, C, D, and E in page table 120 to various corresponding locations in memory array 130. In practice, a page table may track many thousands of pages in a memory array at any given time.
Pages of data may be continually added and removed from the table and memory array as, for instance, applications close and new applications launch. Servers, in particular, often swap out huge amounts of data in rapid succession.

Most random access memory technologies tend to be dynamic. In dynamic random access memory (DRAM), data decay rapidly and will be retained only so long as operating power is maintained and the data are periodically refreshed. In which case, in order to make the entire array 130 fully available to OS 110 for random access, the entire array 130 is typically fully powered and rapidly refreshed whenever the operating system is active, even if little or no data is being stored. For example, the illustrated embodiment includes power and refresh lines 140 that can uniformly supply the entire memory array 130.

In contrast to this typical computing system, embodiments of the present invention can insert a layer of abstraction between the operating system and the memory resources. With this layer of abstraction, embodiments of the present invention can pack data into a portion of available memory so that another portion of memory can be placed in a lower power state, all the while providing the appearance of a fully operational memory array to an operating system.

For example, Figure 2 illustrates a computing device 200 that includes memory power management features according to one embodiment of the present invention. Computing device 200 can include the same operating system (OS) 110 and page table 120 as computing device 100 in Figure 1. However, in the embodiment of Figure 2, a relocation mask 225 can provide a layer of abstraction between the OS and memory array 230.
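In concept, such a mask can be little more than a lookup from OS-defined locations to physical (element, offset) pairs. The following is a hypothetical sketch, not the claimed implementation, using the example mapping of Figure 2:

```python
# Hypothetical sketch of relocation mask 225. The OS-defined location is
# the key; the value is the actual (element, offset) in memory array 230.
relocation_mask = {
    2:  ("A", 1),   # OS places page A at location 2; it really sits in element A, offset 1
    4:  ("A", 2),   # page B
    6:  ("A", 3),   # page C
    7:  ("B", 1),   # page D
    11: ("B", 2),   # page E
}

def translate(os_location):
    """Redirect an OS memory access to its packed physical location."""
    return relocation_mask[os_location]
```

An access the OS directs at location 11 would be redirected to element B, offset 2, while the OS continues to see one flat, fully available array.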
Memory array 230 can be partitioned into elements A, B, C, and D, and the memory elements can be individually powered and/or refreshed by lines 280, 282, 284, and 286.

Relocation mask 225 can include a number of entries 227 that can track the locations of data pages as defined by OS 110 in page table 120 to the actual locations of the data pages in the physical memory array 230. For example, as in Figure 1, page table 120 defines page A to be at location 2 in the memory array. Relocation mask 225, however, maps location 2 to element A, location 1 in memory array 230. Similarly, page B is defined to be at location 4, which is mapped to element A, location 2; page C is defined to be at location 6, which is mapped to element A, location 3; page D is defined to be at location 7, which is mapped to element B, location 1; and page E is defined to be at location 11, which is mapped to element B, location 2.

With the data pages packed into the lower end of memory array 230 as shown, the boundary 232 of packed data is at element B, location 3, and memory elements C and D are empty of data items. Since each memory element in array 230 can be individually powered and refreshed, elements C and D can be set to a lower, inactive power state to save power. For example, the refresh rate could be reduced or stopped entirely, and/or the power level could be reduced or turned off entirely.

However, since OS 110 may write additional data to memory at any time, and since returning an inactive memory element to an active power state may introduce an undesirable delay, the illustrated embodiment can keep some empty memory active in order to provide quick access memory 236 for OS 110. Any of a variety of techniques can be used to anticipate how much memory is likely to be needed at any given time.
For example, statistical algorithms such as those used to pre-fetch data into cache memory for a processor could similarly be used to anticipate how much memory an OS is likely to need given a certain state of a computing device as defined, for example, by the number and type of active applications and/or processes over a period of time. In the illustrated embodiment, empty memory element C can be left active for quick access memory 236 and memory element D may be the only inactive memory element 234. If more memory is needed than anticipated, memory element D can be reactivated. On the other hand, if the computing system were to enter a stand-by mode, with little or no memory activity, then quick access memory may not be needed and both memory elements C and D might be powered down.

To OS 110, the entire memory array 230 can appear fully and continually active whenever the OS is active. OS 110 can define any memory location within array 230 to write, read, or delete data, and mask 225 can direct each memory access to the corresponding physical memory locations. New data can be directed to the quick access memory locations 236 or to holes in the packed end of array 230 left by deleted data. The boundary 232 between packed locations and empty locations can move as data is swapped in and out of array 230. The amount of quick access memory 236 and the number of inactive memory elements 234 can change as the boundary 232 moves and the anticipated memory requirements of the device 200 change.

The data items tracked by page table 120 can take any of a variety of forms. In one embodiment, each data item includes four kilobytes of data. In other embodiments, each data item could be as little as a single bit of data, or up to several kilobytes and beyond. In various other embodiments, the data items could each be a different size.

The pages of data tracked in page table 120 can also come from a variety of different sources and be used in a variety of different ways.
For example, the OS itself may generate and use data tracked in page table 120. The data could also belong to any of a variety of applications or processes running on the computing device 200. In another example, the data could comprise paged virtual memory.

Memory array 230 can be configured in a variety of different ways. For example, in one embodiment, memory array 230 may represent a single integrated circuit (IC) chip, or one region within a larger IC chip. In another embodiment, each element A, B, C, and D may represent a separate IC chip coupled to one or more printed circuit boards (PCBs), or separate regions dispersed within one or more larger IC chips. Any of a variety of memory technologies can be used for memory array 230.

Alternate embodiments may include more or fewer memory elements with individually controlled power states, and each memory element may include more or fewer memory locations. For example, each memory element could include a different number of memory locations. In another example, each memory location could comprise a separate memory element having an individually controlled power state.

Power states can be controlled in a variety of different ways. For example, many memory technologies include two refresh mechanisms, an external refresh and a self-refresh. The refresh rate for an external refresh is usually higher and generally consumes more energy. External refresh is often designed to provide faster memory performance when, for instance, a computing device is in an active state. The refresh rate for a self-refresh is usually much slower and generally consumes less energy. Self-refresh is often designed to be the slowest possible refresh that will safely maintain data in memory when, for instance, computing activity is suspended for a prolonged period.
In which case, in one embodiment of the present invention, rather than individually controlling both power and refresh for each memory element, all the memory elements may share a common power supply, but be individually controllable to switch between an external refresh and a self-refresh.

In other embodiments, multiple power states could be used simultaneously or selectively. For example, some memory elements could be fully powered down, some could receive power but no refreshes, some could receive power and self-refreshes, and others could be fully active with both power and external refreshes. In another example, when in a stand-by mode of operation, even occupied memory locations may be placed in a reduced power state with, for instance, a lowered power supply and/or self-refreshes. At the same time, the empty memory locations could be placed in even lower power states with, for instance, no power supply and no refreshes. Other embodiments may use any combination of these and other power states.

Embodiments of the present invention can be used in virtually any electronic device that includes an operating system and memory. For example, embodiments of the present invention can be used in notebook computers, desktop computers, server computers, personal data assistants (PDAs), cellular phones, gaming devices, global positioning system (GPS) units, and the like.

Furthermore, embodiments of the present invention can support multiple operating systems simultaneously. For example, as shown in Figure 3, operating systems 1 to N can maintain page tables 1 to N. Relocation mask 320 can track the positions of data pages as defined by the N operating systems to physical locations in memory array 330. As with the embodiment of Figure 2, the data can be packed into elements within memory array 330 (not shown), and empty memory elements within array 330 can individually enter lower power states.

Managing memory power can itself consume a certain amount of power.
In particularly active computing systems, there may be a point at which managing memory power consumes more power than it saves. For example, if the memory is re-packed every time a new data item is written or deleted, and large amounts of data are frequently swapped in and out of memory with very little memory left unused, there may be a net increase in power consumption due to managing memory power. In which case, rather than continually performing the various power management functions, it may be beneficial to perform some of the functions on a periodic basis, or to discontinue some or all of the functions entirely, especially during heavy memory traffic.

Figures 4A through 4D illustrate an example of activating, and periodically performing, various functions of memory power management according to one embodiment of the present invention. Figure 4A illustrates a number of memory locations 410 that can each be individually controlled to enter a lower power state. At the instant in time shown in Figure 4A, however, all of the memory locations 410 are in an active state. For instance, locations 410 may all be initially active when a machine turns on, or memory power management may have been previously discontinued.

In certain embodiments, a user may have an option to manually disable or enable memory power management. In other embodiments, memory power management may automatically activate or deactivate upon the occurrence of some event, such as a notebook computer switching from AC power to battery power, the power level of a battery dwindling to a certain level, or the data traffic and free memory space reaching certain limits.

In any event, since all of the memory locations 410 are active in Figure 4A, data can be written to any location. For example, a relocation mask may simply write the data to whatever locations the operating system defines. In the illustrated embodiment, there are six occupied locations 430 and twelve empty locations 420.
The occupied locations 430 are shaded to represent stored data, and are dispersed in apparently random fashion between the low address memory location 412 and the high address memory location 414.

Figure 4B illustrates the memory locations 410 after memory power management has been activated. In the illustrated embodiment, the data from the occupied locations 430 have been relocated to pack the data into lower address locations. The boundary 440 for the packed data separates the occupied locations 430 from the empty locations 420.

In other embodiments, the data items could be packed in various other ways. For example, the data items could be packed into higher address locations, or the data items could start packing at a certain address and fill each address location up and/or down from that address. In this last situation, the boundary separating the packed locations from the empty locations could include two addresses, one at the low end and one at the high end of the packed data. In yet another example, data could be packed into segments of address locations, with empty address locations interspersed between pairs of packed segments. In this situation, the boundary separating the packed and empty locations could include many address locations, at the low and high ends of each packed segment.

Referring again to Figure 4B, the illustrated embodiment shows seven memory locations 450 that can be left active for quick access. For instance, given the current state of the computing device in which the memory locations are being used, seven memory locations may be anticipated to meet the memory needs of the device. The remaining five memory locations 460 can be placed in an inactive state to save power.

Between Figures 4B and 4C, data has been deleted from two memory locations 480 among the previously occupied locations 435, and new data has been written to four memory locations 485 among the quick access locations 450.
Other than recording what data has been deleted and directing new data to the quick access locations, memory power management may have done little else since Figure 4B. With this low level of activity, memory power management may consume very little power. Meanwhile, the same five memory locations 460 can remain inactive, potentially resulting in a significant net power savings.

Between Figure 4C and Figure 4D, another iteration of packing and power state setting has occurred. This iteration may have been triggered by any number of events. For example, it may simply have been time for a periodic iteration, or the number of empty quick access locations may have dropped to a certain level, or the anticipated amount of quick access memory may have changed. Whatever the cause, the lower address locations 432 have been re-packed with the data from the eight occupied memory locations, the number of quick access locations 452 has dropped from seven to five, and the number of inactive locations 462 has dropped from five to four. Similar iterations of packing and power state setting may occur each time a trigger event occurs.

Figure 5 illustrates a functional block diagram of a memory power manager 510 that can implement various embodiments of the present invention, such as those described above. Relocation logic 520 can pack data into portions of memory. Tracking logic 530 can manage the relocation mask to direct and track memory accesses to active memory locations. Power state logic 540 can anticipate the quick access memory needs for a computing system and reduce the power state of any remaining, empty memory locations.
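The division of labor among these three logic blocks might be sketched as follows. The class and method names here are hypothetical illustrations, not the claimed implementation:

```python
class MemoryPowerManager:
    """Illustrative sketch of the manager 510 of Figure 5."""

    def __init__(self, num_locations):
        self.memory = [None] * num_locations  # physical array; None marks an empty location
        self.mask = {}                        # relocation mask: OS location -> physical index

    def write(self, os_location, data):
        """Tracking logic 530: direct a new data item to an empty location."""
        phys = self.memory.index(None)        # first empty physical location (raises if full)
        self.memory[phys] = data
        self.mask[os_location] = phys

    def relocate(self):
        """Relocation logic 520: pack data items toward the low addresses."""
        for os_loc, phys in sorted(self.mask.items(), key=lambda kv: kv[1]):
            dst = self.memory.index(None)     # lowest empty location
            if dst < phys:                    # move only if it packs the item lower
                self.memory[dst] = self.memory[phys]
                self.memory[phys] = None
                self.mask[os_loc] = dst

    def power_states(self, quick_access):
        """Power state logic 540: after relocate(), everything past the packed
        boundary plus the quick access allowance can be power-reduced."""
        boundary = len(self.mask)
        active = min(boundary + quick_access, len(self.memory))
        return ["active"] * active + ["reduced"] * (len(self.memory) - active)
```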
These three basic functions can be implemented in any number of different ways, including hardware, software, firmware, or any combination thereof. Figures 6 through 11 illustrate some examples of methods that can be performed by memory power manager 510 according to various embodiments of the present invention.

Figure 6 illustrates one embodiment of a method for relocating data items in a memory array. At 610, the method can initiate a relocation in response to a triggering event. For example, a relocation may be triggered periodically, each time data is written or deleted from the memory array, when there is a shortage of active memory, etc.

At 620, the method can select a data item to be relocated. Any number of criteria can be used to decide which data item to select. For example, the method may start at a high address end of the memory array, or the active memory elements in the memory array, and scan down until a data item is encountered. In another example, when a relocation is initiated in response to a new data item being written to memory, the method may simply select the most recently written data item. In yet another example, the method may start at a previously defined boundary between packed data and empty memory locations and scan up until a data item is encountered.

At 630, the method can look for a packed address location for the data item. A packed address location may be an empty location closer to some target location than the current location of the selected data item. For example, when packing data items to the low end of the memory array, the target location is likely to be the lowest address location. In which case, the method may start at the lowest address location and scan up to the first empty location. If the first empty location is lower than the current location of the selected data item, then the empty location may be a good place to pack the selected data item.
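One pass of this scan — select from the high end at 620, look for a lower empty slot at 630, move at 640 — might be sketched as follows. This is a hypothetical illustration, with `None` marking an empty location:

```python
def pack_one(memory):
    """Attempt one packing move; return True if a data item was moved."""
    # 620: select the highest-addressed data item
    src = next((i for i in range(len(memory) - 1, -1, -1)
                if memory[i] is not None), None)
    if src is None:
        return False                      # nothing is stored at all
    # 630: scan up from the low end for the first empty location
    dst = next((i for i, v in enumerate(memory) if v is None), None)
    if dst is None or dst > src:
        return False                      # the selected item is already packed
    # 640: move the selected data item into the packed location
    memory[dst] = memory[src]
    memory[src] = None
    return True
```

Repeating `pack_one` until it returns False leaves all data items packed into the lowest addresses.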
By selecting a data item starting from a highest address location in 620 and looking for a packed address location starting from a lowest address location in 630, the method can fill in empty locations in the low end of the memory array with data items from the high end.

The method may not find a packed address location for the selected data item. For example, if the selected data item happens to be written to the first memory location in the quick access memory at the boundary between the packed data and the empty memory locations, the selected data item may already be packed. As another example, if a previously packed data item is deleted from a memory location and the selected data item happens to be written to the same memory location, the selected data item may already be packed.

Where all of the data items are the same size, looking for a packed address location may be as simple as finding an empty address location. Where the data items can be different sizes, looking for a packed address location can also include comparing the size of an empty block of memory with the size of the selected data item. If an empty block of memory is smaller than the selected data item, some embodiments of the present invention may skip over the empty block and look for a larger block. Other embodiments of the present invention may partition the selected data item and fit different partitions into different empty blocks of memory. In which case, a relocation mask may track multiple memory locations for data items. Alternately, a relocation mask may track just a first partition of each data item and each partition may include a pointer to where the next partition is stored in memory. Other embodiments may use any of a wide variety of techniques to fit data items into memory locations and keep track of them.

Referring again to Figure 6, at 640 the method can move the selected data item into the packed address location, assuming a packed address location was found in 630.
If no packed address location was found, the method can leave the data item where it is. At 650, if all the data is packed, the method can end. If not, the method can continue by selecting another data item and trying to pack it. Recognizing when packing is complete may depend on how the data is being packed. For example, if data items are being packed from the low end of the memory array, the method can scan up from the low end to the first empty address location. Then, the method can continue to scan to see if any active memory locations higher than the first empty location contain a data item. If all the higher locations are empty, then all the data may be packed.

Figure 7 illustrates one embodiment of a method for tracking data items in a memory array. At 710, the method can recognize a changed data item. For example, the changed data may be a data item to be written to the memory array, a data item deleted from the memory array, or a data item relocated and packed within the memory array. At 720, the method can identify an address location associated with the changed data item and, at 730, the method can update a record for the changed data item in a relocation mask based on the identified address location and a location defined by an operating system. These last two functions can take a variety of different forms depending on the type of changed data item. Figures 8 through 10 illustrate a few examples of what these last two functions may entail.

Figure 8 illustrates one embodiment involving a new data item being written to a memory array. At 810, the method can locate an active memory element with an empty address location using a relocation mask. For example, the method may look first to a section of the memory array that was previously packed for any holes that may have been left by deleted data. Next, the method may look for an available location in quick access memory.
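That lookup order — holes left in the packed section first, then quick access memory — might be sketched as follows. This is a hypothetical illustration, with `None` marking an empty location and `boundary` the first index past the packed data:

```python
def find_slot(memory, boundary):
    """Return an index for a new data item, or None if every active location is full."""
    # First, look for a hole left by deleted data within the packed section.
    for i in range(boundary):
        if memory[i] is None:
            return i
    # Next, look for an available quick access location above the boundary.
    for i in range(boundary, len(memory)):
        if memory[i] is None:
            return i
    return None   # caller may need to reactivate an inactive memory element
```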
If no locations can be found in either of those sections of the memory array, the method may need to reactivate a memory element and select a memory location there.

Once an empty memory location has been located, the method can write the new data item to the empty memory location at 820. Then, at 830, the method can register an entry in a relocation mask for the data item. The entry may include, for instance, an address of the data item in physical memory as well as the location for the data item as defined by an operating system.

Figure 9 illustrates one embodiment involving a deleted data item. At 910, the method can locate an existing address location for the data item in a relocation mask based on a location defined by an operating system. For example, an operating system may indicate that a data item should be deleted. The operating system's page table may define a particular address location where the operating system thinks the data item is stored. The data item, however, may have been relocated within the physical memory array. The address provided by the operating system can be used in a relocation mask to find the actual address location in the physical memory array.

At 920, the method can delete the data item from the physical memory location, and, at 930, the method can delete the entry for the data item from the relocation mask.

Figure 10 illustrates one embodiment involving a relocated data item. At 1010, the method can recognize a new address location to which the data item has been relocated. At 1020, the method can apply the previous address location of the data item to a relocation mask to find an entry associated with the relocated data item. Then, at 1030, the method can re-register the entry in the relocation mask, matching the new address location for the data item with the address location defined by an operating system.

Figure 11 illustrates one embodiment of a method for setting power states of memory elements.
At 1110, the method can identify a packed data boundary separating the packed data from empty memory locations. For example, when data is packed to a low end of a memory array, the method can scan up from the low end and identify the boundary at the first empty memory location.

At 1120, the method can determine an amount of quick access memory. For example, any of a variety of statistical algorithms can be used to anticipate what the likely memory needs will be for a computing device. If the computing device is in a state of low activity, like a stand-by mode, then the method may determine that little or no quick access memory is needed. On the other hand, if the computing device is in a state of especially high activity, the method may determine that all available memory should be ready for quick access.

At 1130, the method can determine if either the packed data boundary or the amount of quick access memory has changed. For example, if the memory array undergoes an iteration of packing, the position of the boundary may change. Similarly, if the state of the computing device changes due to, for instance, an additional application being launched or a process completing, then the amount of quick access memory that is anticipated to be needed may change. If no change is detected at 1130, the method may loop through 1110 and 1120 many times, monitoring for changes.

When and if a change is detected at 1130, the method can set one or more empty memory elements to an active state at 1140 if any quick access memory is needed. If no quick access memory is needed, or if a partially packed memory element includes enough empty memory locations to provide the quick access memory, the method may not set any empty memory elements to an active state.

At 1150, the method can set the power state of any remaining, empty memory elements to a reduced power state.
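Taken together, steps 1110 through 1150 might be sketched as follows. This is a hypothetical illustration that keeps one empty element active per unit of anticipated quick access memory:

```python
def set_power_states(elements, quick_access_needed):
    """elements: list of per-element empty flags (True = empty).
    Returns a list of power states, one per element."""
    states = []
    active_empties = 0
    for is_empty in elements:
        if not is_empty:
            states.append("active")       # occupied elements stay active
        elif active_empties < quick_access_needed:
            states.append("active")       # 1140: keep as quick access memory
            active_empties += 1
        else:
            states.append("reduced")      # 1150: remaining empties power down
    return states
```

With the Figure 2 layout (elements A and B occupied, C and D empty) and one element's worth of anticipated quick access memory, this would leave C active and reduce only D.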
For example, the method may reduce the refresh rate, disable refreshes entirely, reduce the supply voltage, and/or disable the supply voltage to one or more empty memory elements.

Figures 2-11 illustrate a number of implementation-specific details. Other embodiments may not include all the illustrated elements, may arrange the elements differently, may combine one or more of the elements, may include additional elements, and the like. Furthermore, the various functions of the present invention can be implemented in any number of ways.

Figure 12 illustrates one embodiment of a generic hardware system that can bring together the functions of various embodiments of the present invention. In the illustrated embodiment, the hardware system includes processor 1210 coupled to high speed bus 1205, which is coupled to input/output (I/O) bus 1215 through bus bridge 1230. Temporary memory 1220 is coupled to bus 1205. Permanent memory 1240 is coupled to bus 1215. I/O device(s) 1250 is also coupled to bus 1215. I/O device(s) 1250 may include a display device, a keyboard, one or more external network interfaces, etc.

Certain embodiments may include additional components, may not require all of the above components, or may combine one or more components. For instance, temporary memory 1220 may be on-chip with processor 1210. Alternately, permanent memory 1240 may be eliminated and temporary memory 1220 may be replaced with an electrically erasable programmable read only memory (EEPROM), wherein software routines are executed in place from the EEPROM. Some implementations may employ a single bus, to which all of the components are coupled, while other implementations may include one or more additional buses and bus bridges to which various additional components can be coupled. Similarly, a variety of alternate internal networks could be used including, for instance, an internal network based on a high speed system bus with a memory controller hub and an I/O controller hub.
Additional components may include additional processors, multiple processor cores within processor 1210, a CD ROM drive, additional memories, and other peripheral components known in the art.

Various functions of the present invention, as described above, can be implemented using one or more of these hardware systems. In one embodiment, the functions may be implemented as instructions or routines that can be executed by one or more execution units, such as processor 1210, within the hardware system(s). As shown in Figure 13, these machine executable instructions 1310 can be stored using any machine readable storage medium 1320, including internal memory, such as memories 1220 and 1240 in Figure 12, as well as various external or remote memories, such as a hard drive, diskette, CD-ROM, magnetic tape, digital video or versatile disk (DVD), laser disk, Flash memory, a server on a network, etc. In one implementation, these software routines can be written in the C programming language. It is to be appreciated, however, that these routines may be implemented in any of a wide variety of programming languages.

In alternate embodiments, various functions of the present invention may be implemented in discrete hardware or firmware. For example, one or more application specific integrated circuits (ASICs) could be programmed with one or more of the above described functions. In another example, one or more functions of the present invention could be implemented in one or more ASICs on additional circuit boards and the circuit boards could be inserted into the computer(s) described above. In another example, one or more programmable gate arrays (PGAs) could be used to implement one or more functions of the present invention. In yet another example, a combination of hardware and software could be used to implement one or more functions of the present invention.

Thus, operating system-independent memory power management is described.
Whereas many alterations and modifications of the present invention will be comprehended by a person skilled in the art after having read the foregoing description, it is to be understood that the particular embodiments shown and described by way of illustration are in no way intended to be considered limiting. Therefore, references to details of particular embodiments are not intended to limit the scope of the claims.
A spring element used in a temporary package for testing semiconductors is provided. The spring element is compressed so as to press the semiconductor, either in the form of a bare semiconductor die or as part of a package, against an interconnect structure. The spring element is configured so that it provides sufficient pressure to keep the contacts on the semiconductor in electrical contact with the interconnect structure. Material is added and/or removed from the spring element so that it has the desired modulus of elasticity. The shape of the spring element may also be varied to change the modulus of elasticity, the spring constant, and the force transfer capabilities of the spring element. The spring element also includes conductive material to increase the thermal and electrical conductivity of the spring element.
What is claimed is:

1. An apparatus for attaching to a plurality of contacts of a semiconductor, said apparatus comprising: an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of said plurality of contacts of said semiconductor; and an attachment device arranged to press said semiconductor against said interconnect structure to provide an electrical connection between said plurality of conductors and said corresponding ones of said plurality of contacts, said attachment device comprising a spring element including a conductive member and a first elastic member comprised of a first elastomeric material having first force transfer characteristics, said first elastic member having a plurality of holes formed therein such that said spring element has overall force transfer characteristics different from said first force transfer characteristics.

2. The apparatus of claim 1, wherein said spring element further comprises a second elastic member comprised of a second elastomeric material having second force transfer characteristics, said second elastic member positioned in at least one of said plurality of holes formed in said first elastic member such that said overall force transfer characteristics are different from said first and second force transfer characteristics.

3. The apparatus of claim 1, wherein said spring element further comprises a plurality of second elastic members positioned in a plurality of said plurality of holes in said first elastic member.

4. The apparatus of claim 1, wherein said conductive member comprises a plurality of conductive particles.

5. The apparatus of claim 1, wherein said conductive member comprises a plurality of conductive particles interspersed within said elastomeric member.

6. The apparatus of claim 1, wherein said semiconductor is electrically biased through said spring element.

7.
An apparatus for attaching to a plurality of contacts of a semiconductor, said apparatus comprising: an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of said plurality of contacts of said semiconductor; and an attachment device arranged to press said semiconductor against said interconnect structure to provide an electrical connection between said plurality of conductors and said corresponding ones of said plurality of contacts, said attachment device comprising a spring element including an elastic member comprised of a conductive member and an elastomeric material having first force transfer characteristics, said elastic member having at least one hole formed therein such that said spring element has overall force transfer characteristics different from said first force transfer characteristics, said elastic member being shaped so as to engage an outer edge of said semiconductor such that a force applied by said attachment device as said semiconductor is pressed by said attachment device against said interconnect structure is substantially uniform around said semiconductor.

8. The apparatus of claim 7, wherein said conductive member comprises a plurality of conductive particles.

9. The apparatus of claim 7, wherein said conductive member comprises a plurality of conductive particles interspersed within said elastomeric member.

10. The apparatus of claim 7, wherein said semiconductor is electrically biased through said spring element.

11.
An apparatus for attaching to a plurality of contacts of a semiconductor, said apparatus comprising: an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of said plurality of contacts of said semiconductor; and an attachment device arranged to press said interconnect structure against said semiconductor to provide an electrical connection between said plurality of conductors and said corresponding ones of said plurality of contacts, said attachment device comprising a spring element including a first conductive member, a first elastic member and a second elastic member, said first elastic member comprising a first elastomeric material having first force transfer characteristics and said second elastic member comprising a second elastomeric material having second force transfer characteristics, said second elastic member being positioned within said first elastic member such that said spring element has overall force transfer characteristics different from said first and second force transfer characteristics.

12. The apparatus of claim 11, further comprising a plurality of said second elastic members formed within said first elastic member.

13. The apparatus of claim 11, wherein said conductive member comprises a plurality of conductive particles.

14. The apparatus of claim 11, wherein said conductive member comprises a plurality of conductive particles interspersed within said elastomeric member.

15. The apparatus of claim 11, wherein said semiconductor is electrically biased through said spring element.

16.
An apparatus for attaching to a plurality of contacts of a semiconductor, said apparatus comprising: an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of said plurality of contacts of said semiconductor; and an attachment device arranged to press said semiconductor against said interconnect structure to provide an electrical connection between said plurality of conductors and said corresponding ones of said plurality of contacts, said attachment device comprising a spring element including a conductive member and an elastic member comprised of an elastomeric material having first force transfer characteristics, said elastic member having at least one cavity formed therein such that said spring element has overall force transfer characteristics different from said first force transfer characteristics of said elastomeric material.

17. The apparatus of claim 16, wherein said elastic member has a plurality of cavities formed therein.

18. The apparatus of claim 16, wherein said conductive member comprises a plurality of conductive particles.

19. The apparatus of claim 16, wherein said conductive member comprises a plurality of conductive particles interspersed within said elastomeric member.

20. The apparatus of claim 16, wherein said semiconductor is electrically biased through said spring element.

21. An apparatus for attaching to a plurality of contacts of a semiconductor, said apparatus comprising: an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of said plurality of contacts of said semiconductor; and an attachment device arranged to press said interconnect structure against said semiconductor to provide an electrical connection between said plurality of conductors and said corresponding ones of said plurality of contacts, said attachment device comprising a spring element including a conductive member and an elastic member having a variable spring constant.

22.
The apparatus of claim 21, wherein said conductive member comprises a plurality of conductive particles.

23. The apparatus of claim 21, wherein said conductive member comprises a plurality of conductive particles interspersed within said elastomeric member.

24. The apparatus of claim 21, wherein said semiconductor is electrically biased through said spring element.

25. The apparatus of claim 21, wherein said spring element includes an elastic member having a cross-section defined by at least one peak, wherein said elastic member exhibits a variable spring constant that changes with a degree of compression of said at least one peak.

26. The apparatus of claim 21, wherein said elastic member has a triangular shaped cross-section.

27. The apparatus of claim 21, wherein said elastic member has a repeating triangular shaped cross-section.

28. The apparatus of claim 21, wherein said elastic member has a diamond shaped cross-section.

29. The apparatus of claim 21, wherein said elastic member has a repeating diamond shaped cross-section.
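Claims 25-27 recite an elastic member with a peaked (e.g. triangular) cross-section whose spring constant changes with the degree of compression of the peak. A toy model, offered only as an illustrative assumption and not taken from the claims, shows why such a shape stiffens as it compresses: the width of the peak in contact with the pressing surface grows with compression, so the load-bearing cross-section, and hence the incremental stiffness, grows too.

```python
# Toy model (an assumption, not from the specification) of the variable
# spring constant of claims 25-27: as a triangular peak of base b and
# height h is flattened by a compression x, the engaged width grows as
# w(x) = b*x/h, so the incremental stiffness grows with x and the restoring
# force grows roughly quadratically. stiffness_per_width is a hypothetical
# material parameter.

def engaged_width(x, base, height):
    """Width of the triangular peak in contact after compression x."""
    return base * min(x, height) / height

def restoring_force(x, base, height, stiffness_per_width, steps=1000):
    """Numerically integrate k(x') dx' with k proportional to engaged width."""
    dx = x / steps
    return sum(stiffness_per_width * engaged_width((i + 0.5) * dx, base, height) * dx
               for i in range(steps))
```

Under this model, doubling the compression roughly quadruples the restoring force, whereas a constant-stiffness member would only double it.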
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a division of Ser. No. 09/026,080, filed Feb. 19, 1998, now abandoned, which is a Continuation-in-Part of U.S. patent application Ser. No. 09/009,169, filed Jan. 20, 1998, now U.S. Pat. No. 6,456,100.

BACKGROUND OF THE INVENTION

The present invention relates in general to spring elements, and, more particularly, to a spring element for use in an apparatus for attaching to a plurality of contacts of a semiconductor.

Unpackaged or bare semiconductor dies are used to construct multi-chip modules (MCMs) and other electronic devices. Unpackaged dies must be tested and burned in during the manufacturing process to certify each die as a known good die. This has led to the development of temporary packages that hold a single bare die for testing and burn-in. The temporary packages provide the electrical interconnection between the test pads on the die and external test circuitry. Exemplary temporary packages are disclosed in U.S. Pat. Nos. 5,302,891, 5,408,190 and 5,495,179 to Wood et al., which are herein incorporated by reference.

Typically, this type of temporary package includes an interconnect having contact members that make a temporary electrical connection with the test pads on the die. The temporary package can also include an attachment device that presses the die against the interconnect. The attachment device may include a clamping device that attaches to a package base and a spring element that presses the die against the interconnect. The configuration of the spring element is dependent on a number of factors. The spring element must be able to withstand relatively high compressive forces and relatively high burn-in temperatures without experiencing compression set. Further, the dimensions of the spring element must be such that it is compatible with the temporary package.
Finally, the spring element must be able to withstand the amount of pressure required for pressing the die against the interconnect without causing an excessive amount of force to be transferred to the die and thus damaging it.

Spring elements used in the prior art are typically formed using rubber-like materials, such as silicone. Such spring elements are poor conductors of heat and electricity, which limits the applications in which they can be used. It would be desirable to have a spring element which is electrically conductive for backside biasing of the semiconductor being tested. It would also be desirable to have a spring element with improved thermal conduction properties for those applications in which increased heat dissipation is necessary.

Accordingly, there is a need for a spring element which is compatible with the temporary packages and environment used to test and burn-in semiconductors. There is also a need for a spring element which has improved thermal and electrical conduction properties. Preferably, such spring elements would be reusable and inexpensive to manufacture.

SUMMARY OF THE INVENTION

The present invention meets this need by providing a spring element having a modulus of elasticity which may be adjusted according to the required environment. Metallic particles or films may be added to the spring element to increase its thermal and electrical conduction properties. The spring element may be wrapped in a metallized woven fabric and mechanically clamped to the cover of the semiconductor testing device, thereby alleviating the need for a load distributing pressure plate. Material may be removed from or added to the spring element to change the modulus of elasticity as needed.
The shape of the spring element may also be varied to change the modulus of elasticity, the spring constant, and the force transfer capabilities of the spring element.

According to a first aspect of the present invention, a spring element comprises a first elastic member and a conductive member. The first elastic member is comprised of a first elastomeric material having a first modulus of elasticity. A portion of the first elastomeric material is removed from the first elastic member such that the spring element has an overall modulus of elasticity different from the first modulus of elasticity.

The portion of the first elastomeric material removed from the first elastic member may form a hole in the first elastic member. Preferably, the first elastic member is o-ring shaped. The first elastic member may also comprise a plurality of holes. The spring element may further comprise a second elastic member comprised of a second elastomeric material having a second modulus of elasticity, with the second elastic member being positioned in at least one of the plurality of holes formed in the first elastic member such that the overall modulus of elasticity is different from the first and second moduli of elasticity. The spring element may further comprise a plurality of the second elastic members, with the plurality of the second elastic members being positioned in a plurality of the plurality of holes in the first elastic member. The portion of the first elastomeric material removed from the first elastic member may form a cavity in the first elastic member. Preferably, the first elastic member includes a plurality of cavities formed therein.

The conductive member may comprise a plurality of conductive particles. Preferably, the plurality of conductive particles are interspersed within the first elastic member. Alternatively, the conductive member may comprise a layer of conductive material formed over the first elastic member or a plurality of conductive threads.
The plurality of conductive threads may comprise a plurality of non-conductive threads having a conductive coating. Preferably, the plurality of conductive threads form a covering around the first elastic member. The conductive member may be comprised of conductive material selected from the group consisting of gold, aluminum, nickel, silver, stainless steel, and alloys thereof. The conductive member may also be comprised of carbon.

According to another aspect of the present invention, a spring element comprises a first elastic member, a second elastic member and a conductive member. The first elastic member is comprised of a first elastomeric material having a first modulus of elasticity and the second elastic member is comprised of a second elastomeric material having a second modulus of elasticity. The second elastic member is positioned within the first elastic member such that the spring element has an overall modulus of elasticity different from the first and second moduli of elasticity.

The spring element may further comprise a plurality of the second elastic members positioned within the first elastic member. The conductive member may comprise a plurality of conductive particles, a layer of conductive material formed over the first elastic member, or a plurality of conductive threads.

According to a further aspect of the present invention, a spring element comprises a plurality of interwoven threads and a conductive member. The conductive member may comprise a plurality of conductive particles, a layer of conductive material formed over the first elastic member, or a plurality of conductive threads. Preferably, the plurality of conductive threads are interwoven with the plurality of interwoven threads.

According to yet another aspect of the present invention, a spring element comprises a conductive member and an elastic member having a variable spring constant.
The conductive member may comprise a plurality of conductive particles, a layer of conductive material formed over the first elastic member, or a plurality of conductive threads.

According to another aspect of the present invention, an apparatus for attaching to a plurality of contacts of a semiconductor comprises an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of the plurality of contacts of the semiconductor and an attachment device pressing the interconnect structure against the semiconductor to provide an electrical connection between the plurality of conductors and the corresponding ones of the plurality of contacts. The attachment device comprises a spring element including an elastomeric member and a conductive member.

The conductive member may comprise a plurality of conductive particles. Preferably, the plurality of conductive particles are interspersed within the first elastic member. Alternatively, the conductive member may comprise a layer of conductive material formed over the first elastic member or a plurality of conductive threads. The plurality of conductive threads may comprise a plurality of non-conductive threads having a conductive coating. Preferably, the plurality of conductive threads form a covering around the first elastic member. Preferably, the conductive member is comprised of conductive material selected from the group consisting of gold, aluminum, nickel, silver, stainless steel, and alloys thereof. The conductive member may also be comprised of carbon.

The semiconductor may be electrically biased through the spring element. The semiconductor may comprise a semiconductor die. The semiconductor may comprise a semiconductor die formed within a semiconductor package.
The semiconductor package may comprise a package selected from the group consisting of a chip-scale package, a ball grid array, a chip-on-board, a direct chip attach, and a flip-chip.

According to yet another aspect of the present invention, an apparatus for attaching to a plurality of contacts of a semiconductor comprises an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of the plurality of contacts of the semiconductor and an attachment device pressing the interconnect structure against the semiconductor to provide an electrical connection between the plurality of conductors and the corresponding ones of the plurality of contacts. The attachment device comprises a cover and a spring element mechanically coupled to the cover. The spring element comprises an elastomeric member and a plurality of conductive threads forming a covering over the spring element.

The cover may comprise a first clamping member configured so that a first end portion of the spring element is force fit to the cover. The cover may further comprise a second clamping member configured so that a second end portion of the spring element is force fit to the cover.

According to a further aspect of the present invention, an apparatus for attaching to a plurality of contacts of a semiconductor comprises an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of the plurality of contacts of the semiconductor and an attachment device pressing the interconnect structure against the semiconductor to provide an electrical connection between the plurality of conductors and the corresponding ones of the plurality of contacts. The attachment device comprises a spring element including a conductive member and a first elastic member comprised of a first elastomeric material having a first modulus of elasticity.
The first elastic member includes a plurality of holes formed therein such that the spring element has an overall modulus of elasticity different from the first modulus of elasticity.

According to a still further aspect of the present invention, an apparatus for attaching to a plurality of contacts of a semiconductor comprises an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of the plurality of contacts of the semiconductor and an attachment device pressing the interconnect structure against the semiconductor to provide an electrical connection between the plurality of conductors and the corresponding ones of the plurality of contacts. The attachment device comprises a spring element including an elastic member comprised of a conductive member and an elastomeric material having a modulus of elasticity. The elastic member includes a hole formed therein such that the spring element has an overall modulus of elasticity different from the modulus of elasticity of the elastomeric material. The elastic member is shaped so as to engage an outer edge of the semiconductor such that a force applied by the attachment device as the interconnect structure is pressed against the semiconductor is substantially uniform around the semiconductor.

According to yet a still further aspect of the present invention, an apparatus for attaching to a plurality of contacts of a semiconductor comprises an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of the plurality of contacts of the semiconductor and an attachment device pressing the interconnect structure against the semiconductor to provide an electrical connection between the plurality of conductors and the corresponding ones of the plurality of contacts. The attachment device comprises a spring element including a first elastic member, a second elastic member and a conductive member.
The first elastic member comprises a first elastomeric material having a first modulus of elasticity and the second elastic member comprises a second elastomeric material having a second modulus of elasticity. The second elastic member is positioned within the first elastic member such that the spring element has an overall modulus of elasticity different from the first and second moduli of elasticity.

According to another aspect of the present invention, an apparatus for attaching to a plurality of contacts of a semiconductor comprises an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of the plurality of contacts on the semiconductor and an attachment device pressing the interconnect structure against the semiconductor to provide an electrical connection between the plurality of conductors and the corresponding ones of the plurality of contacts. The attachment device comprises a spring element comprised of a plurality of interwoven threads and a conductive member.

According to yet another aspect of the present invention, an apparatus for attaching to a plurality of contacts of a semiconductor comprises an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of the plurality of contacts of the semiconductor and an attachment device pressing the interconnect structure against the semiconductor to provide an electrical connection between the plurality of conductors and the corresponding ones of the plurality of contacts. The attachment device comprises a spring element including a conductive member and an elastic member comprised of an elastomeric material having a modulus of elasticity.
The elastic member includes at least one cavity formed therein such that the spring element has an overall modulus of elasticity different from the modulus of elasticity of the elastomeric material.

According to a further aspect of the present invention, an apparatus for attaching to a plurality of contacts of a semiconductor comprises an interconnect structure comprising a plurality of conductors patterned to match corresponding ones of the plurality of contacts of the semiconductor and an attachment device pressing the interconnect structure against the semiconductor to provide an electrical connection between the plurality of conductors and the corresponding ones of the plurality of contacts. The attachment device comprises a spring element including a conductive member and an elastic member having a variable spring constant.

Accordingly, it is an object of the present invention to provide a spring element which is compatible with the temporary packages and environment used to test and burn-in semiconductors. It is another object of the present invention to provide a spring element which has improved thermal and electrical conduction properties. It is another object of the present invention to provide a spring element which is reusable and inexpensive to manufacture. Other features and advantages of the invention will be apparent from the following description, the accompanying drawings, and the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an exploded view of a temporary package for testing semiconductors;

FIG. 2 is a cross-sectional view of the assembled temporary package shown in FIG. 1;

FIG. 3 is a plan view of an interconnect structure for testing semiconductor dies used in the temporary package of FIG. 1 according to a first aspect of the present invention;

FIG. 4 is a schematic plan view of a semiconductor die to be tested in the temporary package of FIG. 1 according to the first aspect of the present invention;

FIG.
5 is a schematic plan view of a semiconductor package to be tested in the temporary package of FIG. 1 according to a second aspect of the present invention;

FIG. 6 is a plan view of an interconnect structure for testing semiconductor packages used in the temporary package of FIG. 1 according to the second aspect of the present invention;

FIG. 7 is a perspective view of a spring element according to a first embodiment of the present invention;

FIG. 8 is a perspective view of a spring element according to a second embodiment of the present invention;

FIG. 9 is a perspective view of a spring element according to a third embodiment of the present invention;

FIG. 10 is a perspective view of a spring element according to a fourth embodiment of the present invention;

FIG. 11 is a perspective view of a spring element according to a fifth embodiment of the present invention;

FIG. 12 is a perspective view of a spring element according to a sixth embodiment of the present invention;

FIG. 13 is a perspective view of a spring element according to a seventh embodiment of the present invention;

FIGS. 14-18 are perspective views of the spring element of FIG. 13 according to various aspects of the present invention;

FIG. 19 is an exploded view of a temporary package for testing semiconductors using a spring element having a conductive material; and

FIG. 20 is a side view of the cover and spring element of FIG. 19 coupled together.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring now to FIGS. 1 and 2, a typical temporary package 10 used for testing a semiconductor 12 is shown. The temporary package 10 includes a package base 14, an interconnect structure 16, and an attachment device 18. The interconnect structure 16 establishes electrical communication between the package base 14 and the semiconductor 12. The attachment device 18 secures the semiconductor 12 to the package base 14 and presses the semiconductor 12 against the interconnect structure 16.
The attachment device 18 includes a pressure plate 20, a spring element 22, a cover 24 and a pair of clips 26, 28.

The interconnect structure 16 is positioned within a recess 30 formed within the package base 14. The semiconductor 12 is positioned over the interconnect structure 16 and held within another recess 32 formed within the package base 14. The spring element 22 is secured to the cover 24 using an appropriate adhesive. However, it will be appreciated by those skilled in the art that the spring element 22 may be used without being secured to the cover 24. The pressure plate 20 overlies the semiconductor 12 and is pressed against the semiconductor 12 by the spring element 22 and the cover 24. Accordingly, the semiconductor 12 is pressed against the interconnect structure 16, thereby establishing an electrical connection between the semiconductor 12, the interconnect structure 16 and the package base 14.

The cover 24 is secured to the package base 14 by the clips 26 and 28. The clips 26, 28 engage a top portion of the cover 24 and are secured to the package base 14 through corresponding openings 34, 36 in the base 14. It will be appreciated by those skilled in the art that other types of latching mechanisms may be used to secure the cover 24 to the package base 14. The cover 24, the spring element 22, the pressure plate 20 and the package base 14 each include a central opening, designated 24A, 22A, 20A and 14A, respectively. The openings 24A, 22A, 20A and 14A are used during assembly of the package 10 to permit the semiconductor 12 to be held by a vacuum tool (not shown) during optical alignment of the semiconductor 12 and the interconnect structure 16. The vacuum tool may also be used to disassemble the package 10 as required.

The apparatus 10 may be used to test semiconductors 12 in a variety of forms. According to a first aspect of the present invention, the apparatus 10 is used to test bare semiconductor dies 12'; see FIG. 4.
The interconnect structure 16 is arranged so as to interface with such semiconductor dies 12'. Referring to FIG. 3, the interconnect structure 16 includes a plurality of conductors 38. Each of the plurality of conductors 38 includes a contact member 40, a connection line 42 and a bonding site 44. The contact members 40 are formed in a pattern which corresponds to a plurality of contacts or bond pads 46 on the semiconductor die 12'; see also FIG. 4. The contact members 40 are adapted to contact and establish an electrical connection with the bond pads 46 on the semiconductor die 12'. For example, the contact members 40 may include a raised portion (not shown) which contacts the bond pads 46 as the semiconductor die 12' is pressed against the interconnect structure 16. The connection lines 42 terminate at the bonding sites 44 for connection to the package base 14. The bonding sites 44 are connected to respective conductive traces 48 on the package base 14 using bond wires 50. The interconnect structure 16 may include a number of test structures (not shown) for evaluating various electrical characteristics of the interconnect structure 16. Once assembled, the semiconductor die 12' may be tested and burned in as desired.

In the illustrated embodiment, the interconnect structure 16 is formed of a silicon substrate using conventional semiconductor technology. Similarly, the plurality of conductors 38 are formed of an appropriate conductive material using conventional semiconductor technology. The interconnect structure 16 may be formed according to U.S. Pat. Nos. 5,326,428; 5,419,807 and 5,483,741, which are herein incorporated by reference. In the illustrated embodiment, the semiconductor die 12' is formed of a silicon substrate with a number of additional semiconductor layers forming the desired semiconductor device using conventional semiconductor technology.
It will be appreciated by those skilled in the art that the semiconductor die 12' may be formed of other semiconductor materials, such as gallium arsenide.

According to a second aspect of the present invention, the apparatus 10 is used to test semiconductor packages 12''; see FIG. 5. The semiconductor package 12'' includes at least one semiconductor die 12' and an additional structure 52. The structure 52 basically reroutes the bond pads 46 from the edge of the semiconductor die 12' towards the center of the semiconductor die 12'. This rerouting reduces the precision required for aligning the bond pads 46 with the contact members 40, as there is a greater area in which to position the bond pads 46. The structure 52 includes a plurality of conductive traces 54 electrically coupled to respective bond pads 46. The traces 54 are routed toward the center of the semiconductor die 12' in any desired pattern. The end of each trace 54 includes a bonding member 56, such as a solder ball. The bonding member 56 is typically larger than the corresponding bond pad 46 such that the precision required in aligning the contact members 40 with the bonding member 56 is reduced. The semiconductor package 12'' may comprise a chip-scale package (CSP), ball grid array (BGA), chip-on-board (COB), direct chip attach (DCA), flip-chip and other similar packages. As shown in FIG. 6, the interconnect structure 16 is arranged and configured to interface with the semiconductor package 12'' as is known in the art. It should be apparent from the above description that the semiconductor 12 may comprise bare semiconductor dies and semiconductor dies arranged in packages as is known in the art.

The spring element 22 is composed of an elastomeric material. In the illustrated embodiment, the elastomeric material comprises silicone, as it is compatible with the high temperatures associated with burn-in.
However, silicone and the silicon used to form the semiconductor 12 tend to bond together due to molecular surface attraction and the compressive forces encountered as the semiconductor 12 is pressed against the interconnect structure 16. Such a bond could damage the underlying structures of the semiconductor 12 as well as the semiconductor 12 itself as the semiconductor 12 and the spring element 22 are separated. The pressure plate 20 acts as an interface between the semiconductor 12 and the spring element 22 to prevent such a bond from forming. The pressure plate 20 is thus composed of a suitable material which is compatible with the spring element 22 and the semiconductor 12 so as to prevent a bond from forming between any of the aforementioned structures. It will be appreciated by those skilled in the art that the spring element 22 may be composed of other elastomeric materials, such as appropriate urethanes and polyesters. Further, the pressure plate 20 may be omitted if the material used to form the spring element 22 does not bond to the semiconductor 12 when subjected to high pressure and temperature. The pressure plate 20 also distributes the force from the spring element 22 in a uniform manner.

Typically, the semiconductor 12 and the temporary package are relatively small, thereby limiting the area or thickness of the spring element 22. The thickness of the spring element 22 may range from approximately 15 mils (0.381 mm) to approximately 125 mils (3.175 mm). However, it will be appreciated by those skilled in the art that the spring element 22 may be any desired thickness depending on the particular package 10 and semiconductor 12. The spring element 22 absorbs some of the force or pressure applied to it as it is compressed by the cover 24. The spring element 22 is sized and configured to transfer a desired amount of pressure to the semiconductor 12.
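The sizing trade-off just described can be made concrete with an idealized linear model (an illustration, not taken from the patent): treating the spring element as a flat elastomer pad of modulus of elasticity E, loaded area A and uncompressed thickness t, the force transferred for a given deflection is approximately

```latex
% Idealized linear model of a compressed elastomer pad (illustrative only):
% E = modulus of elasticity, A = loaded area, t = uncompressed thickness,
% \delta = deflection (amount of compression).
k = \frac{E\,A}{t}, \qquad F = k\,\delta = \frac{E\,A}{t}\,\delta .
```

Under this idealization, the transferred force can be tuned by changing the loaded area, the pad thickness, or the modulus itself, which is the general approach the description takes in varying the configuration of the spring element 22.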
A sufficient amount of pressure needs to be applied to the semiconductor 12 so that it properly engages the interconnect structure 16. However, an excessive amount of pressure could damage the semiconductor 12 and the interconnect structure 16. As the dimensions of the spring element 22 are limited by the size of the semiconductor 12 and the package 10, the configuration of the spring element 22 may be changed so that it exhibits the desired pressure absorption and force transfer characteristics.

The force applied by the spring element 22 may be changed by changing the area of the spring element 22 to be compressed. For example, a pressure plate 20 which is larger than the outer dimensions of the semiconductor 12 may be used with a lower psi spring element 22. The larger pressure plate 20 limits the overall compression height of the spring element 22 while applying the appropriate amount of force. Reducing the amount that the spring element 22 is compressed lessens the compression set of the spring element 22.

One feature of the spring element 22 which may be changed is its modulus of elasticity. Lowering the modulus of elasticity of the spring element 22 would enable it to absorb more force or pressure so that the amount of pressure applied to the semiconductor 12 is within acceptable levels. Another way of describing this function is forming low psi (lbs. per square inch) materials from high psi materials. Conversely, the modulus of elasticity may be increased so as to lessen the amount of force or pressure absorbed by the spring element 22 and thus increase the amount of force or pressure applied to the semiconductor 12 for a given deflection amount.

Referring now to FIG. 7, the spring element 22 according to a first embodiment of the present invention is shown. The spring element 22 comprises a first elastic member 100 comprised of a first elastomeric material having a first modulus of elasticity.
In the illustrated embodiment, the first elastomeric material comprises silicone. The silicone may be substantially solid or foam-like by having gas bubbles blown through it during fabrication using conventional methods. It should be apparent that the first modulus of elasticity is dependent, in part, on the configuration of the silicone as being foam-like or substantially solid. Foam-like material is more easily compressed than substantially solid material as the gas bubbles in the foam-like material are more easily compressible. A plurality of openings 102 are formed in the first elastic member 100 in addition to the opening 22A described above. The plurality of openings 102 may extend partially or completely through the first elastic member 100. The plurality of openings 102 are formed by wet drilling the first elastic member 100. Wet drilling is particularly advantageous as it will not leave residual oil or particles from the silicone on the first elastic member 100. The plurality of openings 102 may also be formed using other appropriate methods, such as by molding, regular drilling, laser drilling or by punching out the desired openings. An overall modulus of elasticity of the spring element 22 is thus dependent on the size and total number of openings 102 through the first elastic member 100. The overall modulus of elasticity of the spring element 22 is lower than the first modulus of elasticity of the first elastic member 100 in direct relation to the quantity of first elastomeric material removed from the first elastic member 100. The spring element 22 is thus more compressible.

The overall modulus of elasticity of the spring element 22 may be further changed by adding one or more second elastic members 104 to the first elastic member 100. The second elastic members 104 are comprised of a second elastomeric material having a second modulus of elasticity different from the first modulus of elasticity.
The second elastic members 104 may be positioned in one or more of the openings 102 as desired. In the illustrated embodiment, the second elastic members 104 also comprise silicone, which may be substantially solid or foam-like. The overall modulus of elasticity of the spring element 22 with the second elastic members 104 in the openings 102 will be greater than the overall modulus of elasticity of the spring element 22 with empty openings 102. Further, the overall modulus of elasticity of the spring element 22 may be greater than the first modulus of elasticity if the second elastomeric material is stiffer or more dense than the first elastomeric material.

Referring now to FIG. 8, the spring element 22 according to a second embodiment of the present invention is shown, with like reference numerals corresponding to like elements. In this embodiment, one or more of the second elastic members 104 are positioned within the first elastic member 100. The second elastic members 104 are formed with the first elastic member 100 as the first elastic member 100 is fabricated. As with the first embodiment, the overall modulus of elasticity is dependent on the number and size of the second elastic members 104. The second elastic members 104 may have any desired shape. In the illustrated embodiment, the second elastic members 104 are generally spherical or oblong. The second elastic members 104 may be foam-like or substantially solid depending on the desired properties of the spring element 22.

Referring now to FIG. 9, the spring element 22 according to a third embodiment of the present invention is shown. The spring element 22 comprises an elastic member 106 comprised of an elastomeric material having a modulus of elasticity. The elastic member 106 is shaped so that it engages an outer edge of the semiconductor 12 as it presses the semiconductor 12 against the interconnect structure 16.
The spring element 22 of this embodiment includes a relatively large hole 108 through the elastomeric material such that the overall modulus of elasticity of the spring element 22 is different from the modulus of elasticity of the elastic member 106. As the spring element 22 engages the outer edge of the semiconductor 12, the force or pressure from the compressed spring element 22 is substantially uniform around the semiconductor 12. By engaging only the outer edge of the semiconductor 12, the applied force or pressure from the spring element 22 is substantially uniform compared to a sheet, in which more force or pressure is applied to the center than the edges due to the deflection properties of a sheet versus an o-ring. In the illustrated embodiment, the elastic member 106 is o-ring shaped.

Referring now to FIG. 10, the spring element 22 according to a fourth embodiment of the present invention is shown. The spring element 22 comprises a plurality of interwoven threads 110. The amount by which the spring element 22 of the fourth embodiment may be compressed is dependent, in part, on the size of the threads 110 and the degree to which they are woven together. The threads 110 are comprised of an elastomeric material, which is silicone in the illustrated embodiment.

Referring now to FIG. 11, the spring element 22 according to a fifth embodiment of the present invention is shown. The spring element 22 comprises an elastic member 112 comprised of an elastomeric material having a modulus of elasticity. One or more cavities or dimples 114 are formed in the elastic member 112. The overall modulus of elasticity of the spring element 22 is thus dependent on the size and number of cavities 114. The cavities 114 may be formed by molding them into the elastic member 112 or by cutting them out of the elastic member 112. The cavities 114 may comprise any desired shape.

Referring now to FIG. 12, the spring element 22 according to a sixth embodiment of the present invention is shown.
The spring element 22 comprises an elastic member 116 having a variable spring constant. The elastic member 116 has a repeating diamond-shaped cross-section with a set of first peaks 116A and a set of second peaks 116B. The spring constant of the elastic member 116 changes based on the level of compression, increasing in direct proportion to the level of compression because a greater amount of material is compressed. As there is less material near the peaks 116A, 116B, the amount of material compressed is initially small such that the spring constant is low. However, as compression increases, the amount of material compressed also increases such that the spring constant is higher. The elastic member 116 may have different shapes provided that the spring constant changes with the degree of compression. The elastic member 116 may have a triangular cross-section or a repeating triangular-shaped cross-section. The elastic member 116 may be formed by molding or extruding an appropriate elastomeric material. The elastomeric material may be substantially solid or foam-like.

Referring now to FIG. 13, a spring element 22 according to a seventh embodiment of the present invention is illustrated. The spring element 22 comprises an elastomeric material 118 having any of the above configurations and a conductive member 120. The conductive member 120 is configured so as to make the spring element 22 electrically conductive and/or to improve its thermal conductivity. An electrically conductive spring element 22 enables the semiconductor 12 to be backside biased through the spring element 22 as required for the particular test being performed. A spring element with improved thermal conductivity may be used in an application where heat dissipation is required.

As shown in FIG. 13, the conductive member 120 may take the form of a plurality of conductive particles 122 interspersed within the spring element 22.
The concentration of conductive particles 122 is chosen so as to optimize the desired electrical and thermal conduction properties without adversely affecting the elasticity of the spring element 22. The conductive particles 122 may be mixed with the elastomeric material 118 as the spring element 22 is being formed. Another method of adding the conductive particles 122 to the spring element 22 is to inject them into the elastomeric material 118 after the spring element 22 is formed using known methods.

As shown in FIG. 14, the conductive particles 122 may also be applied to one or more surfaces of the spring element 22 using an appropriate adhesive. The concentration of the conductive particles 122 may be such that a layer of conductive material is formed on one or more surfaces of the spring element 22. In the alternative, a coating of conductive material 124 may be formed over one or more surfaces of the spring element 22 to form a layer of conductive material as shown in FIG. 15. Conductive material, in liquid form, may be applied to one or more surfaces of the spring element 22, as desired, using methods known in the art. The conductive material may also be applied by sputtering.

Referring now to FIGS. 16-18, the conductive member 120 may comprise a plurality of conductive threads 126. The conductive threads 126 may be set within the elastomeric material 118 as the spring element 22 is formed, or the conductive threads 126 may be applied to one or more surfaces of the spring element 22 using an appropriate adhesive as shown in FIG. 16. Alternatively, the conductive threads 126 may be interwoven so as to form a fabric 128 of conductive material which is wrapped around the elastomeric material 118 as shown in FIG. 17. The fabric 128 may be arranged so as to encase or cover the elastomeric material 118. Such a covering adds structural strength and protects the spring element 22 as well as being electrically and/or thermally conductive. Referring now to FIG.
18, the spring element 22 may comprise the plurality of interwoven threads 110 and the plurality of conductive threads 126 interwoven together. The ratio between threads 110 and conductive threads 126 may be adjusted so that the spring element 22 exhibits the desired elastic, electric and/or thermal properties. The conductive threads 126 may be formed of generally solid filaments of conductive material. Alternatively, the conductive threads 126 may be formed from filaments of non-conductive materials which are coated with conductive material.

The conductive material forming the conductive member 120 may be any desired electrically or thermally conductive material appropriate for the particular application. The conductive material may be comprised of any appropriate metal, such as gold, aluminum, nickel, silver, stainless steel, and alloys thereof. The conductive material may also be comprised of carbon in diamond or graphite crystalline form. Diamond is particularly advantageous as it has very high thermal conductivity, while graphite is electrically conductive.

Referring now to FIG. 19, another temporary package 10' used for testing a semiconductor 12 is shown, with like reference numerals corresponding to like elements. The spring element 22 is covered with the fabric 128 of conductive material. As the fabric 128 increases the strength and structural integrity of the spring element 22, the force applied by the cover 24 to the spring element 22 is better distributed to the semiconductor 12. Accordingly, the pressure plate is not needed as its function is performed by the spring element 22 through the fabric 128. However, the pressure plate may still be used as desired.

Further, the added strength provided by the fabric 128 allows the spring element 22 to be mechanically coupled to the cover 24 as shown in FIG. 20. The cover 24 includes a first clamp 24B and a second clamp 24C for latching or clamping the spring element 22 to the cover 24.
A first end portion 22A of the spring element 22 is compressed and clamped to the cover 24 by the first clamp 24B while a second end portion 22B of the spring element 22 is compressed and clamped to the cover 24 by the second clamp 24C. It will be appreciated by those skilled in the art that all of the end portions of the spring element 22 may be clamped to the cover 24.

In the illustrated embodiment, the spring element 22 is force fit to the cover 24 as the first and second clamps 24B and 24C comprise static latching shelves, with the first and second end portions 22A and 22B being slid in place between the first and second clamps 24B and 24C. The spring element 22 may be easily removed from the cover 24 by applying sufficient force to overcome the force being applied by the clamps 24B and 24C. A new spring element 22 may then be slid and latched in place. Such a clamping device is cleaner and easier to use than adhesives. However, the spring element 22 may be adhered to the cover 24 as desired. It will be appreciated by those skilled in the art that other mechanical latching devices may be used to clamp the spring element 22 to the cover 24.

It will be appreciated by those skilled in the art that the spring element 22 may have any combination of the above embodiments. The final configuration of the spring element 22 will be dependent on the desired physical properties of the spring element 22 as well as the dimensional limitations for each particular package 10 and semiconductor 12. It will be further appreciated by those skilled in the art that the spring element 22 may be used with other temporary packages used to test semiconductors.

Having described the invention in detail and by reference to preferred embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the invention defined in the appended claims.
A microcontroller includes a central processing unit (CPU); a plurality of peripheral units; and a peripheral trigger generator comprising a user programmable state machine, wherein the peripheral trigger generator is configured to receive a plurality of input signals and is programmable to automate timing functions depending on at least one of said input signals and generate at least one output signal.
CLAIMS

WHAT IS CLAIMED IS:

1. A microcontroller comprising: a central processing unit (CPU); a plurality of peripheral units; and a peripheral trigger generator comprising a user programmable state machine, wherein the peripheral trigger generator is configured to receive a plurality of input signals and is programmable to automate timing functions depending on at least one of said input signals and generate at least one output signal.

2. The microcontroller according to claim 1, wherein the peripheral trigger generator comprises a programmable step queue comprising a plurality of registers storing sequential programming steps.

3. The microcontroller according to claim 2, wherein the peripheral trigger generator comprises control registers coupled with a control logic and a command decoder coupled with said step queue.

4. The microcontroller according to claim 1, wherein the at least one output signal is a trigger signal that controls one of said peripheral units independently from said CPU.

5. The microcontroller according to claim 4, wherein said one peripheral unit is an analog-to-digital converter.

6. An integrated circuit including a peripheral trigger generator comprising a user programmable state machine, wherein the peripheral trigger generator is configured to receive a plurality of input signals and is programmable to automate timing functions depending on at least one of said input signals and generate at least one output signal.

7. The integrated circuit according to claim 6, wherein the peripheral trigger generator comprises a programmable step queue comprising a plurality of registers storing sequential programming steps.

8. The integrated circuit according to claim 7, wherein the peripheral trigger generator comprises control registers coupled with a control logic and a command decoder coupled with said step queue.

9. The integrated circuit according to claim 6, wherein the at least one output signal is a trigger signal that controls one of said peripheral units independently from said CPU.

10. The integrated circuit according to claim 9, wherein said one peripheral unit is an analog-to-digital converter.

11. A microcontroller, comprising: a central processing unit; a plurality of peripheral devices; and a peripheral trigger generator configured to generate a plurality of trigger and interrupt signals and coordinate timing functions for the plurality of peripheral devices independent of the central processing unit.

12. The microcontroller in accordance with claim 11, the peripheral trigger generator including a programmable state machine for executing peripheral trigger generator commands.

13. The microcontroller in accordance with claim 12, the peripheral trigger generator including one or more step queues for storing peripheral trigger generator commands.

14. The microcontroller in accordance with claim 13, wherein the peripheral trigger generator comprises a plurality of control registers coupled with a control logic and a command decoder coupled with said one or more step queues.

15. The microcontroller according to claim 14, wherein said one peripheral unit is an analog-to-digital converter.
PERIPHERAL TRIGGER GENERATOR

CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application Serial No. 61/534,619, titled "Peripheral Trigger Generator," filed September 14, 2011, which is hereby incorporated by reference in its entirety as if fully set forth herein.

TECHNICAL FIELD

The present disclosure relates to a peripheral trigger generator, in particular, for use in a microcontroller.

BACKGROUND

Microcontrollers are used in a variety of control environments. It is often desirable in such environments to accurately generate complex signals, such as triggers for peripheral devices, that vary in time and frequency responsive to internal and external events. Typically, the microcontroller's processing core itself has been used to provide control over generating such signals. However, processor driven timing solutions are subject to processor latencies which cannot necessarily be predicted. This can result in inaccuracies and timing inconsistencies when time-critical events, requiring generation of responsive triggers, occur. Furthermore, to the extent that the processor core can be used to control such timing, the amount of processor overhead may be significant. As such, there is a need for improved systems and methods for generating signals responsive to time-driven events. There is a further need for improved systems and methods for generating trigger signals to coordinate peripheral actions.

SUMMARY

According to various embodiments, complex and accurate timing sequences can be generated with a peripheral trigger generator, which is adaptable to internal and external events without incurring the unpredictability and latencies of processor driven solutions. A peripheral trigger generator according to various embodiments provides a unique peripheral function that enables users to implement coordinated timing functions not possible with conventional microcontrollers.
A microcontroller according to embodiments includes a central processing unit (CPU); a plurality of peripheral units; and a peripheral trigger generator comprising a user programmable state machine, wherein the peripheral trigger generator is configured to receive a plurality of input signals and is programmable to automate timing functions depending on at least one of said input signals and generate at least one output signal. In some embodiments, the peripheral trigger generator includes a programmable step queue comprising a plurality of registers storing sequential programming steps. In some embodiments, the peripheral trigger generator comprises control registers coupled with a control logic and a command decoder coupled with said step queue. In some embodiments, the at least one output signal is a trigger signal that controls one of said peripheral units independently from said CPU. In some embodiments, the one peripheral unit is an analog-to-digital converter.

A microcontroller, according to some embodiments, includes a central processing unit; a plurality of peripheral devices; and a peripheral trigger generator configured to generate a plurality of trigger and interrupt signals and coordinate timing functions for the plurality of peripheral devices independent of the central processing unit. The peripheral trigger generator may include a programmable state machine for executing peripheral trigger generator commands. The peripheral trigger generator may further include one or more step queues for storing peripheral trigger generator commands. The peripheral trigger generator may include a plurality of control registers coupled with a control logic and a command decoder coupled with said one or more step queues.

BRIEF DESCRIPTION OF THE DRAWINGS

The drawings accompanying and forming part of this specification are included to depict certain aspects of the disclosure.
It should be noted that the features illustrated in the drawings are not necessarily drawn to scale. A more complete understanding of the disclosure and the advantages thereof may be acquired by referring to the following description, taken in conjunction with the accompanying drawings in which like reference numbers indicate like features and wherein:

FIG. 1 is a block diagram of a microcontroller in accordance with embodiments of the invention.

FIG. 2 is a block diagram of a peripheral trigger generator according to an embodiment of the invention.

FIG. 3 is a block diagram of a peripheral trigger generator according to an embodiment of the invention.

FIG. 4 illustrates exemplary control and status registers for a PTG according to an embodiment of the invention.

FIG. 5 illustrates exemplary STEP queues according to embodiments of the invention.

FIG. 6A and FIG. 6B illustrate an example application using the PTG.

FIG. 7A and FIG. 7B illustrate an example application using the PTG.

FIG. 8A - FIG. 8D illustrate exemplary states of a PTG state machine.

DETAILED DESCRIPTION

The disclosure and various features and advantageous details thereof are explained more fully with reference to the exemplary, and therefore non-limiting, embodiments illustrated in the accompanying drawings and detailed in the following description. Descriptions of known programming techniques, computer software, hardware, operating platforms and protocols may be omitted so as not to unnecessarily obscure the disclosure in detail. It should be understood, however, that the detailed description and the specific examples, while indicating the preferred embodiments, are given by way of illustration only and not by way of limitation. Various substitutions, modifications, additions and/or rearrangements within the spirit and/or scope of the underlying inventive concept will become apparent to those skilled in the art from this disclosure.
As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having," or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, product, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, product, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). Additionally, any examples or illustrations given herein are not to be regarded in any way as restrictions on, limits to, or express definitions of, any term or terms with which they are utilized. Instead, these examples or illustrations are to be regarded as being described with respect to one particular embodiment and as illustrative only. Those of ordinary skill in the art will appreciate that any term or terms with which these examples or illustrations are utilized encompass other embodiments as well as implementations and adaptations thereof which may or may not be given therewith or elsewhere in the specification, and all such embodiments are intended to be included within the scope of that term or terms. Language designating such non-limiting examples and illustrations includes, but is not limited to: "for example," "for instance," "e.g.," "in one embodiment," and the like. According to various embodiments, systems and methods can be provided to generate accurate and complex sequences of signals within a microcontroller to trigger, for example, an ADC (Analog-to-Digital Converter) module to sample and convert analog signals in an application circuit.
Using typical software methods is generally too imprecise and requires too much processor overhead. A peripheral trigger generator (PTG) according to various embodiments allows, without CPU intervention, events that occur in a peripheral to (1) conditionally generate trigger(s) in another peripheral that vary in time and frequency; and (2) reconfigure the operation of another peripheral (e.g., ATD input channel select). In some embodiments, the PTG (Peripheral Trigger Generator) is a user programmed state machine designed to "process" time driven events and output trigger signals that coordinate various peripheral actions. In other words, the PTG generates complex sequences of trigger signals in order to coordinate the action of other peripherals. While most microcontrollers process "data," the PTG calculates timing. As will be discussed in greater detail below, the PTG is primarily a "timing coordinator," rather than a timing module. Advantageously, the PTG can reduce processor workload and simplify software design by off-loading time critical tasks, such as triggering ADC sampling and conversions with precise timing, and automating complex applications involving external events and timing such as industrial automation. As will be explained in greater detail below, in some embodiments, the PTG can support up to 16 independent hardware trigger inputs and one software trigger input and generate up to thirty-two output trigger signals, in either individual or broadcast mode. In addition, in some embodiments, the PTG can generate up to sixteen unique interrupt signals. The Peripheral Trigger Generator (PTG) according to various embodiments is user programmable via a PTG assembly language. In some embodiments, the PTG operates independently of the processor. The PTG can monitor selected peripheral signaling and generate signaling to other peripherals and/or the processor. The PTG can provide timing accuracy not possible if implemented in software.
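As a toy model of the trigger fan-out just described — up to thirty-two output triggers driven either individually or in broadcast mode — consider the following sketch. The function name and the representation of the outputs as bits of a 32-bit word are illustrative assumptions, not the actual hardware interface; the broadcast-enable mask is loosely modeled on the PTG broadcast trigger enable (PTGBTE) register.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of PTG trigger fan-out (illustrative only). In individual
 * mode a step pulses exactly one of the 32 trigger outputs; in broadcast
 * mode it pulses every output enabled in a broadcast mask (loosely
 * modeled on the PTG broadcast trigger enable, PTGBTE). */
static uint32_t ptg_trigger_outputs(unsigned output_num, int broadcast,
                                    uint32_t broadcast_enable)
{
    if (broadcast)
        return broadcast_enable;                 /* pulse all enabled outputs */
    return UINT32_C(1) << (output_num & 31u);    /* pulse a single output */
}
```

In individual mode only the addressed output is asserted; in broadcast mode the enable mask selects the set of outputs pulsed simultaneously.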
The PTG may operate faster than the CPU. Consequently, the PTG can monitor a number of inputs and generate complex timing sequences with a time accuracy not possible via software. More particularly, turning now to FIG. 1, a diagram of an exemplary processor 10 employing a peripheral trigger generator in accordance with embodiments is shown. The processor 10 may be implemented as a microprocessor or microcontroller, or any suitable processing device. The processor 10 includes one or more central processing units 12 coupled via a bus 14 to one or more peripheral devices 18, 20. In addition, as will be explained in greater detail below, the processor 10 includes a peripheral trigger generator 16 in accordance with embodiments for generating complex timing signals for both on-chip and off-chip peripherals. One or more control registers may be provided to control operation of the PTG 16. In some embodiments, the peripheral devices can include ADCs, Input Capture, Output Compare, and Timers. Turning now to FIG. 2, a diagram of an exemplary PTG is shown and generally identified by the reference numeral 100. In the example illustrated, the PTG 100 includes a STEP queue 110 coupled to a read bus 112 and a write bus 114. The STEP queue 110 is a small memory containing the instructions needed to implement the desired user functionality. A multiplexer 116 selects input signals that may be used to start or modify the program behavior in the STEP queue 110. The PTG 100 may receive inputs from external pins, analog comparators, pulse width modulator (PWM) timebase comparators, Output Compare events, Input Capture events, and ADC (analog-to-digital conversion) complete signals. The PTG 100 further includes one or more control registers 104, control logic 102, a queue pointer (QPTR) 106, watchdog timer 108, and command decoder 118.
The command decoder 118 converts executed STEP commands into actions (signals) that can be connected to other modules (not shown) such as ADCs, Input Capture, Output Compare, Timers, or external device pins. According to a particular embodiment, the PTG 100 may comprise the following outputs: ADC trigger inputs; PWM sync inputs; Input Capture sync; Output Compare clock input; Output Compare sync input; and analog comparator mask signal inputs. Although any number of control and status registers may be used in conjunction with the PTG, according to some embodiments, the control and status registers 104 are: PTG Control/Status register (PTGCST); PTG control register (PTGCON); PTG broadcast trigger enable (PTGBTE); PTG hold register (PTGHOLD); PTG GP timer 0 register (PTGT0LIM); PTG GP timer 1 register (PTGT1LIM); PTG step delay register (PTGSDLIM); PTG Loop Counter 0 (PTGC0LIM); PTG Loop Counter 1 (PTGC1LIM); PTG adjust register (PTGADJ); PTG literal register (PTGL0); and PTG queue pointer (PTGQPTR). These are illustrated in greater detail with reference to FIG. 3. In addition, as will be explained in greater detail below, in some embodiments, the PTG 100 includes one or more general purpose timers 124, one or more loop counters 126, and one or more delay timers 128. In operation, according to some embodiments, the user writes 8-bit commands called "Steps" into the PTG queue registers 110. Each 8-bit Step is made up of a four-bit command code and a four-bit parameter field. FIG. 4 illustrates the structure and encoding of a Step command. The commands perform operations such as wait for an input trigger signal, generate an output trigger signal, and wait for the timer. More particularly, the commands define a sequence of events that generate trigger output signals 122 to peripherals such as the ADC, Output Compare, Input Capture, and the Timer macros. The STEP commands may also be used to generate interrupt requests to the processor.
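The 8-bit Step layout described above (a 4-bit command code plus a 4-bit parameter field) can be modeled with a short sketch. Placing CMD[3:0] in the upper nibble is an assumption for illustration, and the helper names are hypothetical:

```python
def encode_step(cmd: int, option: int) -> int:
    """Pack CMD[3:0] and OPTION[3:0] into one 8-bit Step command.

    Assumes CMD occupies the upper nibble (illustrative, not a spec).
    """
    if not (0 <= cmd <= 0xF and 0 <= option <= 0xF):
        raise ValueError("CMD and OPTION are 4-bit fields")
    return (cmd << 4) | option

def decode_step(step: int) -> tuple:
    """Recover (CMD[3:0], OPTION[3:0]) from an 8-bit Step command."""
    return (step >> 4) & 0xF, step & 0xF
```

The two-nibble split is what makes "hand assembly" of Step programs practical: a user can compose any command byte from a command table and a parameter value.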
STEP commands in the STEP queue 110 execute sequentially unless stopped by, e.g., a reset or by the Watchdog Timer 108. In addition, the STEP commands can be made to wait on a condition, such as an input trigger edge, a software trigger, or a timer match, before continuing. The STEP queue pointer register 106 is a special function register and an internal pointer. The pointer addresses the currently active step in the step queue 110. Each command byte is read, decoded, and executed sequentially. While most instructions execute with a predefined cycle count, the watchdog timer 108 is enabled during input-trigger-related step commands (all other commands execute and retire in 2 cycles). The WDT 108 is a free-running, 9-bit counter that is reset when each step command retires (completes). During each PTG cycle, the WDT compares its value with a user-selected timeout value, and will generate a WDT interrupt (ptg_wdto_intr) and halt step command execution should they ever match. The WDT 108 is intended to prevent PTG lockup should an expected input trigger event never arrive. The PTG module 100 can generate trigger, interrupt, and strobed data outputs by execution of specific Step commands. As noted above, the PTG module can generate up to 32 unique trigger output signals, in either an individual or broadcast mode. The PTG module can generate an individual output trigger on any one of the 32 trigger outputs. The individual trigger outputs are typically used to trigger individual ADC input conversion operations, but can be assigned to any function, including general purpose I/O ports. When the PTG module 100 is used with a compatible peripheral, such as the ADC module, the individual trigger output signals of the PTG 100 are individually assigned to specific analog input conversion controllers within the ADC module (not shown).
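The watchdog behavior described above (a 9-bit counter that free-runs on the PTG clock, clears on each command retirement, and halts execution on a timeout match) can be sketched behaviorally; the class and attribute names are hypothetical, and the timeout is modeled as a flag rather than the ptg_wdto_intr signal:

```python
class PTGWatchdog:
    """Behavioral sketch of the 9-bit PTG watchdog timer (names hypothetical)."""

    def __init__(self, timeout: int):
        self.timeout = timeout & 0x1FF   # user-selected timeout value
        self.count = 0
        self.timed_out = False

    def clock(self) -> None:
        """One PTG clock while a step command has not yet retired."""
        self.count = (self.count + 1) & 0x1FF    # 9-bit counter wraps
        if self.count == self.timeout:
            self.timed_out = True                # interrupt + halt execution

    def retire(self) -> None:
        """A step command completed: the counter is reset."""
        self.count = 0
```

As long as each waiting command retires before the counter reaches the timeout value, the watchdog never fires; a trigger input that never arrives lets the counter reach the limit, which is exactly the lockup case the WDT guards against.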
The broadcast trigger output feature enables the user to simultaneously generate a large number of (individual) trigger outputs with a single Step command. In some embodiments, two 16-bit Loop Counters 126 are provided that may be used by the sequencer as a block loop counter or delay generator. All internal counters are cleared when the device is in the reset state or when the PTG module 100 is disabled. Step commands exist that can load, modify, or initialize the Loop Counter limit values. Each Loop Counter includes an incrementing counter (PTGCn) and an SFR limit register (PTGCnLIM). The SFR value may be changed by a CPU write (when the module is disabled) or by the PTG sequencer (when the module is enabled). The stored value in the SFR that corresponds to each Loop Counter is referred to as the counter limit value. The jump conditional command uses one of the Loop Counters to keep track of the number of times the command is executed, and may therefore be used to create code block loops. These are useful in applications where a sequence of peripheral events needs to be repeated several times. The jump command allows this to be achieved without requiring a large step queue to be implemented on the device. Each time the jump command is executed, the corresponding internal Loop Counter is compared to its limit value. If the counter has not reached the limit value, the target jump queue location is loaded into the step Queue Pointer (PTGQPTR) 106, and the counter is incremented by 1. The next command will be fetched from the new queue location. If the counter has reached the limit value, the sequencer will proceed to the next command (i.e., increment the queue pointer) as usual. In preparation for the next jump command loop, the corresponding Loop Counter will also be cleared at this time. The provision for two separate Loop Counters and associated jump (PTGJMPCn) instructions allows nested loops to be supported (one level deep).
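The PTGJMPCn semantics just described (branch while the Loop Counter is below its limit and increment; fall through and clear once the limit is reached) can be sketched as follows. The function and variable names are hypothetical:

```python
def ptg_jmpc(qptr: int, counter: int, limit: int, target: int):
    """One execution of a PTGJMPCn-style jump conditional.

    Returns (next queue pointer, updated Loop Counter value).
    """
    if counter < limit:                 # limit not reached: take the jump...
        return target, counter + 1      # ...and increment the counter
    return qptr + 1, 0                  # limit reached: fall through, clear

# A limit of 1 yields two passes through the code block (counter 0, then 1),
# matching the "PTGC1LIM = 1 (total of 2 outer loop iterations)" usage in
# the application examples. The jump command here sits at queue slot 5 and
# targets slot 0.
iterations, counter, qptr = 0, 0, 5
while True:
    iterations += 1                     # one pass through the code block
    qptr, counter = ptg_jmpc(5, counter, 1, 0)
    if qptr != 0:                       # fell through past the jump
        break
```

Note that a limit value of N produces N + 1 passes, since the fall-through occurs only when the counter has already reached the limit.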
There are no restrictions with regard to which PTGJMPCn instruction resides in the inner or outer loop. STEP commands are illustrated in FIG. 5. In some embodiments, each command is encoded into two four-bit fields that make "hand assembly" of commands by a user a relatively simple task. In some embodiments, each 8-bit step command consists of a 4-bit command field (CMD[3:0]) and a 4-bit parameter field (OPTION[3:0]). In some embodiments, all commands execute in a single cycle, except for flow change commands and commands that are waiting for an external input. The sequencer is simply pipelined such that while a command is executing, the next command is being read from the step queue and decoded. By default, each STEP command will execute in one PTG clock period. There are several techniques to slow the execution of the step commands:
• Wait for a Trigger Input
• Wait for a GP Timer (PTGTnLIM)
• Insert a delay loop using PTGJMPCn and PTGCn
• Enable and (automatically) insert a Step Delay after execution of each command
In some embodiments, the PTG 100 can support up to 16 independent trigger inputs. The user may specify a step command that waits for a positive or negative edge, or a high or low level, of the selected input signal to occur. The operating mode is selected by the PTGITM[1:0] control field in the PTGCST register. The PTGWHI command looks for a positive edge or high state to occur on the selected trigger input. The PTGWLO command looks for a negative edge or low state to occur on the selected trigger input. The PTG command sequencer will repeat the trigger input command (i.e., effectively wait) until the selected signal becomes valid before continuing step command execution. The minimum execution time of a "Wait for Trigger" command is one PTG clock. There is no limit to how long the PTG will wait for a trigger input (other than that enforced by the watchdog timer 108).
In some embodiments, there are four input trigger command operating modes, selected by the PTGITM[1:0] control field in the PTGCST register. Note that if the Step Delay is disabled, modes 0 and 1 are equivalent in operation, and modes 2 and 3 are equivalent in operation. Mode 0 is continuous edge detect with Step Delay at exit. In this mode, the selected trigger input is continuously tested starting immediately when the PTGWHI or PTGWLO command is executed. When the trigger edge is detected, command execution completes. If the Step Delay counter 128 is enabled, the Step Delay will be inserted (once) after the valid edge is detected and the command execution has completed. If the Step Delay counter is not enabled, the command will complete after the valid edge is detected, and execution of the subsequent command will commence immediately. Mode 1 is continuous edge detect with no Step Delay at exit. In this mode, the selected trigger input is continuously tested starting immediately when the PTGWHI or PTGWLO command is executed. When the trigger edge is detected, command execution completes. Irrespective of whether the Step Delay counter 128 is enabled or not, the Step Delay will not be inserted after command execution has completed. Mode 2 is sampled level detect with Step Delay at exit. In this mode, the selected trigger input is sample tested for a valid level. Starting immediately when the PTGWHI or PTGWLO command is executed, the trigger input is tested (once per PTG clock). If found not to be true and the Step Delay is enabled, the command waits for the Step Delay to expire before testing the trigger input again. When the trigger is found to be true, command execution completes and the Step Delay is inserted once more. If found not to be true and the Step Delay is disabled, the command immediately tests the trigger input again during the next PTG clock cycle.
When the trigger is found to be true, command execution completes and execution of the subsequent command will commence immediately. Mode 3 is sampled level detect without Step Delay at exit. In this mode, the selected trigger input is sample tested for a valid level. Starting immediately when the PTGWHI or PTGWLO command is executed, the trigger input is tested (once per PTG clock). If found not to be true and the Step Delay is enabled, the command waits for the Step Delay to expire before testing the trigger input again. When the trigger is found to be true, command execution completes and execution of the subsequent command will commence immediately. The Step Delay is not inserted. If found not to be true and the Step Delay is disabled, the command immediately tests the trigger input again during the next PTG clock cycle. When the trigger is found to be true, command execution completes and execution of the subsequent command will commence immediately. In some embodiments, the user may specify a step command to wait for a software-generated trigger. The software-generated trigger is generated by setting a bit in the PTGCST register. The PTGCTRL SWTRGE command is sensitive only to the 0-to-1 transition of the PTGSWT bit. This transition must occur during command execution; otherwise the command will continue to wait (with PTGSWT in either state). The PTGSWT bit is automatically cleared by hardware upon completion of the PTGCTRL SWTRGE command, initializing the bit for the next software trigger command iteration. The PTGCTRL SWTRGL command is sensitive to the level of the PTGSWT bit. The command will wait until it observes PTGSWT = 1, at which time it will complete. It will complete immediately should PTGSWT = 1 upon entry to the command. If desired, the PTGSWT bit may be cleared by the user upon completion of the PTGCTRL SWTRGL command.
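The four PTGITM modes described above differ along only two axes: detection style (edge vs. sampled level) and whether a Step Delay is inserted at command exit. The table below is an illustrative summary, not a register map, and the names are hypothetical:

```python
# PTGITM[1:0] -> (detection style, Step Delay inserted at command exit?)
PTG_INPUT_TRIGGER_MODES = {
    0: ("continuous edge detect", True),
    1: ("continuous edge detect", False),
    2: ("sampled level detect", True),
    3: ("sampled level detect", False),
}

def effective_mode(mode: int, step_delay_enabled: bool) -> tuple:
    """Resolve a mode's observable behavior.

    If the Step Delay is disabled, the delay-at-exit distinction vanishes,
    which is why modes 0/1 and 2/3 are then equivalent in operation.
    """
    detect, delay_at_exit = PTG_INPUT_TRIGGER_MODES[mode]
    return detect, delay_at_exit and step_delay_enabled
```

This makes the equivalence noted in the text explicit: with the Step Delay disabled, modes 0 and 1 resolve to identical behavior, as do modes 2 and 3.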
The use of the PTGSWT bit in conjunction with a PTG step command that generates interrupt requests to the processor (PTGIRQ) allows the user to coordinate activity between the PTG module 100 and the application software. In some embodiments, there are two general purpose timers 124 (PTGT1, PTGT0) that may be used by the sequencer to wait for a specified period of time. All timers are cleared when the device is in the reset state or when the PTG module is disabled. Step commands exist that can load, modify, or initialize the GP Timers. Each GP Timer 124 consists of an incrementing timer (PTGTn) and an SFR limit register (PTGTnLIM). The SFR value may be changed by a CPU write (when the module is disabled) or by the PTG sequencer (when the module is enabled). Data read from the SFR will depend upon the state of the Internal Visibility (PTGIVIS) bit. When operating, the timers increment on the rising edge of the PTG clock (which is defined in the PTGCST register). The user can specify a wait operation using a GP timer by executing the appropriate PTGCTRL PTGTn command (wait for selected GP timer[n]). The stored value in the SFR that corresponds to each GP Timer 124 is referred to as the timer limit value. The wait step command is stalled in this state until such time that the timer reaches its limit value, at which point the command will complete and the next command will start. The timer is also cleared at this time in preparation for its next use. The Step Delay Timer (SDLY) 128 is a convenient method to make each step command take a specified amount of time. Often, the user will specify a step delay equal to the duration of a peripheral function such as the ADC conversion time. The step delay enables the user to generate trigger output signals at a controlled rate so as not to overload the target peripheral. The PTGSDLIM register defines the additional duration of each step command in units of PTG clocks. The Step Delay Timer is disabled by default.
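The GP timer wait behavior above (the PTGTn timer counts PTG clocks until it reaches PTGTnLIM, at which point the stalled wait command completes and the timer clears for its next use) can be sketched as follows; the class name is hypothetical:

```python
class PTGTimer:
    """Sketch of a GP timer: counts PTG clocks up to a limit, then clears."""

    def __init__(self, limit: int):
        self.limit = limit    # plays the role of the PTGTnLIM limit value
        self.count = 0        # plays the role of the incrementing PTGTn timer

    def clock(self) -> bool:
        """One rising edge of the PTG clock.

        Returns True when the limit is reached; the stalled wait command
        would then complete, and the timer is cleared in preparation for
        its next use.
        """
        self.count += 1
        if self.count >= self.limit:
            self.count = 0
            return True
        return False
```

A limit of 70, for instance, models the 5 µs wait of the first application example at a 14 MHz PTG clock: the wait command stalls for 70 clocks and then releases.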
The user can enable and disable the Step Delay Timer via the PTGCTRL SDON or PTGCTRL SDOFF commands that may be placed into the step queue. When operating, the Step Delay Timer will increment at the PTG clock rate defined in the PTGCST register. The stored value in the PTGSDLIM SFR is referred to as the timer limit value. The Step Delay is inserted after each command is executed such that all step commands (using the Step Delay) are stalled until the PTGSD timer reaches its limit value, at which point the command will complete and the next command will start. The timer is also cleared during execution of each command, such that it is ready for the next command. As noted above, the PTG module 100 can generate trigger, interrupt, and strobed data outputs through the execution of specific step commands. In some embodiments, the PTG 100 can generate a total of (up to) 32 unique output trigger signals as Individual or Broadcast outputs. The module can generate an individual trigger on any one of 32 trigger outputs using the PTGTRIG command. The individual output triggers are typically used to trigger individual ADC input conversion operations, but may be assigned (in the top-level device DOS) to any function, including GP I/O ports. When the PTG module is used with a compatible peripheral, the individual trigger output signals of the PTG 100 are individually assigned to specific analog input conversion controllers within the ADC module. The broadcast output trigger capability is specified by the PTGBTE register. Each bit in the PTGBTE register corresponds to an associated individual trigger output on the low-order half of the trigger bus (ptg_trig_out[(PTG_NUM_TRIG_OUT-1):0]). If a bit is set in the PTGBTE register and a broadcast trigger step command (PTGCTRL BTRIG) is executed, the corresponding individual trigger output is asserted. The trigger broadcast capability enables the user to simultaneously generate large numbers of trigger outputs with a single step command.
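The broadcast mapping just described (each set bit in PTGBTE asserts the corresponding individual trigger output when PTGCTRL BTRIG executes) is a simple bit fan-out; the function name and 16-output default are illustrative:

```python
def broadcast_triggers(ptgbte: int, num_out: int = 16) -> list:
    """PTGCTRL BTRIG sketch: assert every individual trigger output whose
    corresponding PTGBTE bit is set (True = asserted)."""
    return [bool((ptgbte >> i) & 1) for i in range(num_out)]
```

With PTGBTE = 0x00F0, as in the second application example below, a single BTRIG command asserts trigger outputs #4 through #7 simultaneously.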
The PTG module 100 can generate a total of up to 16 unique interrupt request signals. The interrupt request signals are useful for interacting with the application software to create more complex functions. The module can generate an individual IRQ pulse on the IRQ bus using the PTGIRQ step command. The PTG 100 supports a strobed data port that accepts data from several sources from within the module. A typical implementation would connect the strobe bus to an ADC channel select input port, connecting as many strobe bus bits as there are channels. The PTG command sequence could then directly select which ADC channel to convert. The PTGSTRB command zero-extends the LS 5 bits of the command to 16 bits, then outputs the 16-bit value onto the ptg_strb_dout[15:0] data bus together with a strobe signal. The literal data is embedded within the command, so each PTGSTRB command instance may contain a different literal value. The PTGCTRL STRBL0 command will write the contents of the PTGL0 register onto the ptg_strb_dout[15:0] data bus together with a strobe signal. The PTGL0 register may be modified using the PTGADD and PTGCOPY commands. The PTGCTRL STRBC0 command will write the contents of the PTGC0 loop counter register onto the ptg_strb_dout[15:0] data bus together with a strobe signal. The PTGCTRL STRBC1 command will write the contents of the PTGC1 loop counter register onto the ptg_strb_dout[15:0] data bus together with a strobe signal. All trigger, IRQ, and Data Strobe outputs are internally asserted by the PTG state machine 102 when the corresponding step command starts (i.e., before any additional time specified by the Step Delay Timer) on the rising edge of the PTG execution clock. When operating in pulsed mode (PTGTOGL = 0), the width of the trigger output signals is determined by the PTGPWD[3:0] bit field in the PTGCON register, and may be any value between 1 and 16 PTG clock cycles. The default value is 1 PTG clock cycle.
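The PTGSTRB zero-extension described above is a one-line masking operation: the low 5 bits of the 8-bit command byte become the 16-bit strobe value, with the upper 11 bits driven to zero. The function name is illustrative:

```python
def ptg_strb(command: int) -> int:
    """PTGSTRB sketch: zero-extend the low 5 bits of the 8-bit command
    byte to the 16-bit ptg_strb_dout value (upper 11 bits are zero)."""
    return command & 0x1F
```

Five literal bits are enough to address up to 32 strobe values, e.g. ADC channel numbers, directly from the command byte without touching any register.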
When globally controlled by the PTGCTRL BTRIG broadcast trigger command, the TRIG output pulse width is determined by the PTGPWD[3:0] bit field in the PTGCON register, and may be any value between 1 and 16 PTG clock cycles. The default value is 1 PTG clock cycle. The strobe data outputs are asserted by the PTG state machine at the beginning of the first PTG execution clock of the corresponding data strobe step command, before any additional time specified by the Step Delay Timer. The strobe clock signal (ptg_strb) is initiated by the state machine at the same time. Operation of embodiments is shown by way of example. In particular, FIG. 6 illustrates timing for interleaving samples over multiple cycles. FIG. 6A shows an application where the customer needs to accurately measure the power in a system where the current load is highly dependent on temperature, voltage, and end consumer application. The current waveforms vary widely per customer usage, but over a few PWM cycles, the waveform is relatively stable. The goal is to take many current and/or voltage readings over several PWM cycles in an interleaved manner. The data is stored in the device system memory during acquisition and is later post-processed (integrated) to yield an accurate power value. This example shows a situation where it would not be practical or possible for software to accurately schedule the ADC samples. Exemplary STEP programming for the timing sequence of FIG. 6A is shown in FIG. 6B. In the program illustrated, the following assumptions are made: 1. Trigger input #1 is connected to the PWM signal. The rising edge of the PWM signal starts the sequence. 2. Output trigger #3 is connected to the ADC. This signal commands the ADC to begin a sample and conversion process. 3. Interrupt #1 is used to signal the processor that a sub-sequence has started (provides status). 4. Interrupt #4 is used to signal the processor that the complete sequence has completed. 5.
The ADC clock is selected as the PTG clock source. 6. The ADC clock is 14 MHz. 7. The initial trigger delay is 5 μs. 8. The 2nd trigger delay is 6 μs. 9. In each PWM cycle, the ADC will be triggered 25 times. 10. The basic sequence is run twice. Initialize the following control registers:
PTGT0LIM = 70 (decimal) (5 μs x 14 clks/μs)
PTGT1LIM = 11 (decimal) ([1 μs x 14 clks/μs] - 3 step clocks)
PTGC0LIM = 24 (decimal) (total of 25 inner loop iterations)
PTGC1LIM = 1 (total of 2 outer loop iterations)
PTGHOLD = 70 (decimal) (5 μs x 14 clks/μs)
PTGADJ = 14 (decimal) (1 μs x 14 clks/μs)
PTGSDLIM = 0 (no step delay)
PTGBTE = 0x0000 (no broadcast triggers)
PTGQPTR = 0 (start of step queue)
PTGCST = 0x8200 (after PTGQPTR is initialized)
Another application example (for sampling at multiple rates) is shown in FIG. 7A. In this application, the goal is to sample one ADC input at a fast rate (1x rate), a second analog input at a slower rate (one-half rate), and analog inputs #3 - #7 at a one-eighth rate. The example is a motor control application using an SCR (Silicon Controlled Rectifier) which triggers at a specified time after the AC line zero crossing. While this example uses simple binary sampling ratios, the PTG can generate a very wide range of sample ratios to meet the requirements of an application. Exemplary STEP programming for the timing sequence of FIG. 7A is shown in FIG. 7B. In the program illustrated, the following assumptions are made: 1. Trigger input #0 is connected to the zero crossing detect. The rising edge of the zero crossing detect signal starts the sequence. 2. The trigger delay from trigger in #0 to the generation of trigger #1 output is 2 ms. 3. Trigger output #1 enables the SCR in the application circuit. 4. Trigger output #2 is connected to the ADC to trigger sampling of the current measurement at 1 ms intervals. 5. Trigger output #3 is connected to the ADC to trigger sampling of the supply voltage measurement at 2 ms intervals. 6.
Trigger outputs #4, #5, #6, and #7 are connected to the ADC to sample other data values once per cycle. 7. The ADC clock is selected as the PTG clock source. 8. The ADC clock is 14 MHz. Initialize the following control registers:
PTGT0LIM = 28000 (decimal) (2 ms x 14 clks/μs)
PTGT1LIM = 14000 (decimal) (1 ms x 14 clks/μs)
PTGC0LIM = 24 (decimal) (total of 25 inner loop iterations)
PTGC1LIM = 1 (total of 2 outer loop iterations)
PTGHOLD = 0 (not used)
PTGADJ = 0 (not used)
PTGSDLIM = 0 (no step delay)
PTGBTE = 0x00F0 (enable broadcast triggers 4-7)
PTGQPTR = 0 (start of step queue)
PTGCST = 0x8200 (after PTGQPTR is initialized)
Because each step command takes at least two clocks, for more accurate timing the PTGTDLY register should be programmed with a value that compensates for the delay of the wait for trigger command, the generate triggers #4-7 command, and the wait for trigger delay command. Therefore, the PTGTDLY initialization value really should be 28,000 - 6 = 27,994. Likewise, the PTGTMR register value should also be the slightly smaller value of 14,000 - 4 = 13,996. The PTG finite state machine (FSM) based sequencer implemented in the control logic 102 is shown in FIGS. 8A-8D. States shown are defined by bits or settings in the PTGCON control register. The sequencer is clocked by the PTG clock as defined by the PTGCLK[2:0] clock source selection and PTGDIV[4:0] clock divider control bits. The sequencer advances one state on the positive edge of each clock. The sequencer will enter state S0 (HALT) under any of the following conditions:
1. PTGEN = 0
2. WDT event (via state Sw)
3. PTGSTRT = 0
4. Operating in debug mode && PTGSSEN = 1 && exiting the last state of a command
The sequencer will remain in S0 while PTGSTRT = 0. The sequencer is forced into state Sr when reset_n = 0. If the module is disabled by the user (PTGEN = 0) but not reset (reset_n = 1), the sequencer is also forced into Sr, but only after the current command has completed.
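The compensation arithmetic above generalizes to a simple conversion: a desired delay in microseconds times the PTG clock rate in clocks per microsecond, minus the clocks already consumed by the surrounding step commands. The helper below is a sketch; the function name is illustrative:

```python
PTG_CLK_MHZ = 14   # the ADC clock (14 MHz) selected as the PTG clock source

def timer_limit(delay_us: float, overhead_clocks: int = 0) -> int:
    """Convert a desired delay into a timer limit value in PTG clocks,
    compensating for step-command overhead already spent in the sequence."""
    return int(delay_us * PTG_CLK_MHZ) - overhead_clocks
```

For the example above, a 2 ms delay with 6 clocks of command overhead yields 27,994, and a 1 ms delay with 4 clocks of overhead yields 13,996, matching the corrected initialization values.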
An exception to this rule applies to states that are conditionally waiting for an event. These states are exited immediately should PTGEN be cleared. That is, the following commands do not complete and exit immediately should the module be disabled by the user:
• PTGWLO and PTGWHI
• PTGCTRL SWTRGL
• PTGCTRL SWTRGE
• PTGCTRL PTGT0 and PTGCTRL PTGT1
This same set of commands is also exited immediately when waiting for input and the user aborts the operation by clearing PTGSTRT. This applies irrespective of device or module operating mode. Although the foregoing specification describes specific embodiments, numerous changes in the details of the embodiments disclosed herein and additional embodiments will be apparent to, and may be made by, persons of ordinary skill in the art having reference to this description. In this context, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of this disclosure. Accordingly, the scope of the present disclosure should be determined by the following claims and their legal equivalents.
In some embodiments, the invention involves a framework for using virtualization technology to efficiently support a domain-specific run-time environment. In at least one embodiment, a framework is utilized to take advantage of virtualization technology (VT) to partition performance critical and non-performance critical tasks of the same domain-specific application. An embodiment of the invention utilizes a general-purpose operating system to execute non-performance critical aspects of a domain, and uses a privileged VT-root mode to execute performance critical aspects of the domain. Another embodiment uses one or more guest VMs to execute the performance critical aspects of the domain-specific run-time environment. Other embodiments are described and claimed.
WHAT IS CLAIMED IS: 1. A system for accelerating a domain-specific run time environment using virtualization technology, comprising: a platform having virtualization capabilities; at least one processor coupled to the platform, the at least one processor to run a general-purpose operating system (GPOS) in a first virtual machine (VM) on the platform; and a domain specific run-time environment (DSRTE) partitioned into at least two portions, wherein a first portion comprises non-performance critical processes to run under the GPOS in the first VM, and wherein at least one additional portion comprises at least one performance critical process to run on the platform outside of the first VM running the GPOS. 2. The system as recited in claim 1, further comprising: a privileged root domain comprising a virtual machine monitor (VMM) to control each VM on the platform, wherein the at least one performance critical process is to run in a second virtual machine, wherein the second VM is a non-privileged VM. 3. The system as recited in claim 2, wherein the first VM and the second VM communicate via at least one communication method selected from the group of communication methods consisting of mailboxes, shared memory, and network packets. 4. The system as recited in claims 1 or 2, further comprising: at least one additional virtual machine to run one or more processes, wherein each of the one or more processes comprises one of a performance critical process and a non-performance critical process. 5. The system as recited in claim 1, further comprising: a privileged root domain, wherein the at least one performance critical process is to run in the privileged root domain. 6. The system as recited in claim 5, wherein the privileged root domain is to be entered in response to a VM-EXIT event, and is to return control to a VM in response to a VM-ENTER event, and wherein initiating a performance critical task is to generate a VM-EXIT from the GPOS in the first VM. 7.
The system as recited in claim 5, wherein the privileged root domain comprises a domain specific run-time environment customized to execute the at least one performance critical process efficiently, wherein the GPOS comprises a GPOS that is not customized to execute the at least one performance critical process. 8. A computer implemented method for accelerating a domain-specific run time environment using virtualization technology, comprising: partitioning a domain specific run-time environment (DSRTE) into at least two partitions, a first partition comprising non-performance critical tasks to run under a general-purpose operating system (GPOS) in a virtual machine (VM) and a second partition comprising performance critical tasks, the DSRTE to reside on a platform having virtualization capabilities; and executing the performance critical tasks in the second partition, wherein the first partition is used to expand a set of services provided to the DSRTE. 9. The method as recited in claim 8, wherein the second partition comprises a primary and privileged execution environment of the DSRTE. 10. The method as recited in claim 8, wherein the second partition comprises a second non-privileged virtual machine (VM). 11. The method as recited in claims 8, 9 or 10, further comprising: entering the first partition when the performance critical tasks need a service of the GPOS; and exiting to the second partition to continue processing performance critical tasks. 12. The method as recited in claim 8, further comprising: generating an Exit to a privileged root domain from the first partition, in response to an event for executing a performance critical task; executing the performance critical task in the second partition; and returning control to the first partition in response to an Enter event. 13.
The method as recited in claim 12, wherein the second partition comprises two or more partitions, each of the two or more partitions to run a performance critical operation corresponding to a specific input/output (I/O) device. 14. The method as recited in claims 8, 9 or 10, wherein the performance critical task comprises a network communication task. 15. The method as recited in claims 8, 9 or 10, wherein the non-performance critical tasks include a user interface, and the performance critical tasks include network packet communication. 16. The method as recited in claims 8, 9 or 10, wherein the platform comprises a set-top box environment, and the performance critical tasks include coding and decoding of audio-visual streams. 17. A machine readable medium having instructions that when executed cause the machine to: partition a domain specific run-time environment (DSRTE) into at least two partitions, a first partition comprising non-performance critical tasks to run under a general-purpose operating system (GPOS) in a virtual machine (VM) and a second partition comprising performance critical tasks, the DSRTE to reside on a platform having virtualization capabilities; and execute the performance critical tasks in the second partition, wherein the first partition is used to expand a set of services provided to the DSRTE. 18. The medium as recited in claim 17, wherein the second partition comprises a primary and privileged execution environment of the DSRTE. 19. The medium as recited in claim 17, further comprising instructions to: enter the first partition when the performance critical tasks need a service of the GPOS; and exit to the second partition to continue processing performance critical tasks. 20.
The medium as recited in claim 17, further comprising instructions to: generate an Exit to a privileged root domain from the first partition, in response to an event for executing a performance critical task; execute the performance critical task in the second partition; and return control to the first partition in response to an Enter event.
FRAMEWORK FOR DOMAIN-SPECIFIC RUN-TIME ENVIRONMENT ACCELERATION USING VIRTUALIZATION TECHNOLOGY

FIELD OF THE INVENTION

An embodiment of the present invention relates generally to computing environments using virtualization technology, and more specifically, to a framework for using virtualization technology to efficiently support a domain-specific run-time environment.

BACKGROUND INFORMATION

Various mechanisms exist for implementing virtual machines in a single platform. A class of software known as virtual machine monitors (VMMs) enables a single platform/processor to simultaneously support multiple guest operating systems. Intel(R) Corporation's Virtualization Technology (VT) enables the efficient execution of VMMs on Intel(R) Architecture (IA) processors (and eventually platforms). [0003] In VT environments, guest operating systems (OSs) are each provided a "virtual machine" (VM) view of the processor and platform, and each guest OS is typically unaware that it is not controlling all of the processor or platform resources. The motivations for utilizing VMMs have included consolidation of physical hardware (e.g., one hardware platform consolidates the software previously executed on multiple physical platforms) and resource partitioning for any combination of manageability, security, and quality reasons (e.g., a platform hosting multiple guests can use a VMM to provide isolation and better service to those hosted applications which pay higher fees). [0004] Intel(R) Corporation's Virtualization Technology (VT) environments enable creation of a new "higher" (more-privileged) privilege level, called "root mode", which enables the VMM software to control processor and platform resources while presenting a view of the hardware resources to existing guest operating systems such that each guest OS believes it is in control. [0005] Currently, VT is used to create VMM software that schedules and isolates the execution of multiple guest operating systems.
The computational model is that both performance-critical and non-performance-critical code for a domain or application is run in the same guest operating system (VT non-root mode), and the software in VT root mode is only there to ensure isolation and fairness between the guest operating systems. [0006] As industry practitioners have noticed, there have been performance issues with using general-purpose platforms as embedded, or domain-specific, devices such as networking devices. Types of devices may include intrusion detection or XML acceleration appliances, but the issues may apply to other domains, as well. The problems relate to applications that need access to services from a general-purpose operating system (GPOS), for instance Linux(R), Windows(R), BSD(R), or BSD-variants like FreeBSD(R), NetBSD(R), or OpenBSD. Performance of such domain-specific applications running under the general-purpose OSs tends to be poor. For network devices in particular, problems include too many interrupts or a large number of buffer copies. To counteract this, vendors have made significant modifications to the general-purpose OS to accommodate the networking applications. In other words, vendors have gotten around the problem by implementing customized domain-specific run-time environments (DSRTEs) tightly integrated with the GPOS. These platforms are very difficult to maintain. When an update to the general-purpose OS is made, it often has a "domino" effect requiring changes to the DSRTE. Some changes to the GPOS may be modifications to kernel modules, similar to a dynamic link library (dll) for the kernel, but others are direct changes to the scheduler or network stack.
Non-dll modifications, or direct changes to the GPOS, are extremely difficult to maintain when the GPOS is updated or modified.

BRIEF DESCRIPTION OF THE DRAWINGS

The features and advantages of the present invention will become apparent from the following detailed description of the present invention in which:
[0008] Figure 1 is a block diagram of an exemplary platform on which embodiments of the present invention may be implemented;
Figure 2 is a block diagram illustrating a traditional hypervisor virtual machine monitor (VMM) architecture platform;
Figures 3A-B are block diagrams illustrating options for domain-specific run-time environment architectures;
[0011] Figure 4 is a block diagram illustrating a framework for executing a DSRTE that has both control of platform resources (processors, memory, I/O) and co-exists with an unmodified general-purpose OS, according to embodiments of the invention;
[0012] Figure 5 is a flow diagram illustrating a method for implementing an efficient domain-specific run-time environment (DSRTE) using virtualization, according to an embodiment of the invention;
Figure 6 is a block diagram illustrating a framework for executing a DSRTE that uses multiple guest VMs to partition tasks of one application, according to an embodiment of the invention; and
Figure 7 is a block diagram of memory mapping as may be used by an embodiment of the invention.

DETAILED DESCRIPTION

[0015] An embodiment of the present invention is a system and method relating to domain-specific run-time environments. In at least one embodiment, the present invention is intended to utilize a framework for a different usage of virtualization technology (VT) than is used in existing systems.
Instead of supporting multiple guest operating systems, embodiments of the present invention describe a framework for using VT to efficiently support a domain-specific run-time environment (DSRTE), as is often found in embedded systems for specific domains like networking devices, while maintaining transparency to both the application and the existing general-purpose operating system. Embodiments of a run-time environment allow performance-critical portions of applications executing in the DSRTE to run in the privileged VT-root mode of an Intel(R) Architecture (IA) processor or in a separate VM that has special privileges appropriate for the particular domain. The application and operating system (OS) are unaware of this change; hence, when OS services are required the services are still available, but the run-time environment may now control the processor and platform resources in a manner tuned to its particular domain. This is difficult, or sometimes impossible, with a general-purpose operating system (GPOS). Embodiments of the present invention expand the reach of virtualization technology into domains not currently well served by the general-purpose nature of Intel(R) Architecture (IA) and the general-purpose operating systems which currently run on IA. Other platform architectures may benefit, as well. [0016] Reference in the specification to "one embodiment" or "an embodiment" of the present invention means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "in one embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention.
However, it will be apparent to one of ordinary skill in the art that embodiments of the present invention may be practiced without the specific details presented herein. Furthermore, well-known features may be omitted or simplified in order not to obscure the present invention. Various examples may be given throughout this description. These are merely descriptions of specific embodiments of the invention. The scope of the invention is not limited to the examples given. [0018] Figure 1 is a block diagram of an exemplary platform on which embodiments of the present invention may be implemented. Processor 110 communicates with a memory controller hub (MCH) 114, also known as a North bridge, via the front side bus 101. The MCH 114 communicates with system memory 112 via a memory bus 103. The MCH 114 may also communicate with an advanced graphics port (AGP) 116 via a graphics bus 105. The MCH 114 communicates with an I/O controller hub (ICH) 120, also known as a South bridge, via a peripheral component interconnect (PCI) bus 107. The ICH 120 may be coupled to one or more components such as PCI hard drives (not shown), legacy components such as IDE 122, USB 124, LAN 126 and Audio 128, and a Super I/O (SIO) controller 156 via a low pin count (LPC) bus 156. [0019] Processor 110 may be any type of processor capable of executing software, such as a microprocessor, digital signal processor, microcontroller, or the like. Though Figure 1 shows only one such processor 110, there may be one or more processors in platform hardware 100 and one or more of the processors may include multiple threads, multiple cores, or the like. [0020] Memory 112 may be a hard disk, a floppy disk, random access memory (RAM), read only memory (ROM), flash memory, or any other type of medium readable by processor 110. Memory 112 may store instructions for performing the execution of method embodiments of the present invention.
[0021] Non-volatile memory, such as Flash memory 152, may be coupled to the I/O controller via a low pin count (LPC) bus 109. The BIOS firmware 154 typically resides in the Flash memory 152, and at boot up the processor executes instructions from the Flash, or firmware. [0022] In some embodiments, platform 100 is a server enabling server management tasks. This platform embodiment may have a baseboard management controller (BMC) 150 coupled to the ICH 120 via the LPC 109.
Figure 2 is a block diagram illustrating a traditional hypervisor VMM architecture platform 200. A number of guest VMs 201, 203, 205, and 207 may be running on the platform 200 at the same time. A virtual machine monitor (VMM) 210 controls the guest VMs' access to the hardware 220 via the processor/platform virtualization layer 211. A number of virtual device models 213 and 215 may exist within the VMM 210. The VMM 210 may operate at the highest privilege level. The VMM 210 controls access to the file system, memory and all devices, as discussed further below. The VMM 210 typically has a device driver 219 for each hardware device on the platform.
The VMM 210 and guest VMs 201, 203, 205 and 207 execute on platform hardware 220. The platform hardware 220 may include a processor 222, memory 224 and one or more I/O devices 226 and 228. The platform hardware 220 may be a personal computer (PC), mainframe, handheld device, portable computer, set top box, or any other computing system.
Processor 222 may be any type of processor capable of executing software, such as a microprocessor, digital signal processor, microcontroller, or the like. Though Figure 2 shows only one such processor 222, there may be one or more processors in platform hardware 220 and one or more of the processors may include multiple threads, multiple cores, or the like.
Memory 224 may be a hard disk, a floppy disk, random access memory (RAM), read only memory (ROM), flash memory, or any other type of medium readable by processor 222.
Memory 224 may store instructions for performing the execution of method embodiments of the present invention. [0027] The one or more I/O devices 226 and 228 may be, for example, network interface cards, communication ports, video controllers, disk controllers on system buses (e.g., Peripheral Component Interconnect (PCI), Industry Standard Architecture (ISA), Advanced Graphics Port (AGP)), devices integrated into the chipset logic or processor (e.g., real-time clocks, programmable timers, performance counters), or any other device on the platform hardware 220. The one or more I/O devices 226 and 228 may be accessed through I/O instructions, memory mapped I/O accesses, or through any other means known in the art. [0028] Figures 3A-B are block diagrams illustrating options for domain-specific run-time environment architectures. Domain-specific applications (DS-App) 301 and run-time environments 303 require some services from an operating system (OS) 305. For best performance, the domain-specific application needs control of other aspects of the platform 100. For example, embedded packet processing systems may benefit from custom packet-aware schedulers, memory managers, and network interface card (NIC) servicing (I/O servicing). Occasionally, access to a file system may also be necessary. [0029] A first option for a domain-specific run-time environment architecture is shown in Figure 3A. The domain-specific run-time environment (DSRTE) 303, on a platform 100, executes on top of an existing OS 305 and its applications. This method often performs poorly, as the assumptions of a GPOS are not typically appropriate for an environment such as a network packet processing environment. For example, general-purpose operating systems use interrupts to communicate between network devices and the CPU, and paging is used to manage virtual memory.
These techniques provide little benefit and may exhibit significant performance degradation for packet processing applications.
Another option for a DSRTE architecture is shown in Figure 3B. Here the DSRTE 313 is tightly coupled with the OS 315. While this provides the domain-specific performance and control aspects to the DSRTE and its applications, it has the undesirable cost of extra testing and maintenance of the modified OS. This method also prevents the use of a closed-source GPOS, which may not offer enough control to achieve this tight coupling.
In the following discussion, a network packet processing domain is used to illustrate embodiments of the invention. It will be apparent to one of ordinary skill in the art that any domain-specific run-time which is currently limited by the general-purpose nature of existing operating systems could fit within the framework of the invention.
In one embodiment, the platform may be split into two domains. In one domain resides the unmodified GPOS with code/modules needing the services of the GPOS, the code/modules being selected by the vendor. These modules are typically non-performance critical processes, for instance, occasional access to a hard drive or USB port, or access to a GUI. The other domain may contain the performance critical processes, which may "run on the bare metal." In other words, there are few layers between the processes and the hardware itself. For purposes of this discussion, one domain is referred to as VT-root mode and the other is non-VT-root mode. Performance critical tasks will run in VT-root mode. In alternative embodiments, performance critical tasks may run in one non-VT-root VM and non-performance critical tasks may run in another non-VT-root VM with a VMM in VT-root mode controlling the two VMs.
[0033] Referring now to Figure 4, there is shown a block diagram illustrating a framework for executing a DSRTE that has both control of platform resources (processors, memory, I/O) and co-exists with an unmodified general-purpose OS (GPOS), according to embodiments of the invention. The OS 405 may run in a guest virtual machine (VM) 201 and be unaware that the DSRTE 403 running in guest VM 203 is actually in control of the platform resources such as scheduling, memory management, and I/O, and yet the domain-specific run-time can defer to the guest OS (201) when its applications 401 require non-performance critical services. In embodiments of the invention, the DSRTE 403 still presents a view that the application 401 is executing on top of a DSRTE 403 which is in turn on top of an OS 405. However, in embodiments, the domain-specific applications (DS-App) 401 execute the non-performance critical tasks. The performance critical tasks of the DS-App are executed in the VT-root DSRTE 407, as shown by ovals 409. The DS-Apps 409 in VT-root mode 407 communicate with the DS-Apps 401 running in the GPOS portion of the DSRTE 403. [0034] However, the DSRTE 403 may run performance-critical portions of its applications in VT-root mode and begin to execute these applications 401 in a manner optimized for the domain of the application. When the application 401 requests OS services not supported by the VT-root portion 407 of the DSRTE 403, the guest OS 201/203 is scheduled and allowed to service the request. In one embodiment, VT-root mode 407 is the primary execution environment of the application and a single guest OS is used to expand the set of services provided to the application. This invention is transparent to the existing applications and the OS.
Figure 5 is a flow diagram illustrating a method for implementing an efficient domain-specific run-time environment (DSRTE) using virtualization, according to an embodiment of the invention.
In an embodiment of the invention, at initialization, the DSRTE may enable VT-root execution and assume the traditional place of a virtual machine monitor (VMM), in block 501. Unmodified application binaries may be supplied to the DSRTE and loaded into memory by the VT-root DSRTE component, in block 503. The application binaries may be allowed to execute in VT-root mode, in block 505. Then, an unmodified GPOS may be loaded and run in non-VT-root mode, in block 507, allowing the non-performance critical components of the application to run. The performance critical parts of the application may then run in VT-root mode at the same time as the non-performance critical parts of the application that are running under the GPOS in non-VT-root mode, as in block 509. Based on the description herein, it will be apparent to one of skill in the art that this may be achieved in a number of ways. In one embodiment, the different parts of the application run simultaneously via timesharing. In another embodiment, the different parts of the application are run on different processor contexts, threads, or cores. [0036] In one implementation for a wireless router, for instance, processes running on the router may run in two domains. A GUI for controlling the router settings may be run in the non-VT-root (non-performance critical) domain. Network packet communication and processing may run in the VT-root (performance critical) domain. It will be apparent to one of skill in the art that various techniques may be used for communication between the two domains. For instance, in virtualization technology, mailboxes, shared memory, or a software mechanism to send network packets between the two domains may be utilized in different implementations. Some techniques for passing information between VMs may be found in conjunction with Xen, an enterprise-grade open source virtualization application, on the public Internet at URL www.xensource.com.
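The two-domain split in the wireless-router example amounts to a simple dispatch rule: each task is classified by the vendor, and the classification determines which domain executes it. The sketch below models that rule in C; the task struct, field names, and `assign_domain` function are invented for illustration and are not part of VT or any real DSRTE interface.

```c
/* Which execution domain a task belongs to, per the router example. */
enum domain { NON_VT_ROOT, VT_ROOT };

/* Hypothetical task descriptor; the performance_critical flag stands in
 * for the vendor's partitioning decision described in the text. */
struct task {
    const char *name;
    int performance_critical;
};

/* Partitioning rule: performance critical tasks (e.g., packet
 * processing) run in VT-root mode; everything else (e.g., the router's
 * settings GUI) runs under the GPOS in non-VT-root mode. */
enum domain assign_domain(const struct task *t)
{
    return t->performance_critical ? VT_ROOT : NON_VT_ROOT;
}
```

The rule is deliberately static: the partitioning is decided when the DSRTE is built, not renegotiated at run time, which matches the initialization flow of blocks 501-509 above.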
Other information related to virtualization technology may be found at www.intel.com/cd/ids/developer/asmo-na/eng/dc/enterprise/technologies/197668.htm.
When an operation or event that has special significance occurs while executing a VM in non-VT-root mode, the processor may jump to VT-root mode. In VT terminology, this may be referred to as a VM-EXIT (exit from a VM). When the root domain finishes processing the special case, control may be transferred back to the previously executing VM with a VM-ENTER.
For instance, in an embodiment for network packet switching, the large majority of packets may be handled by the DSRTE, whether it be in a VT-root domain or a specialized VM. Thus, communication with the GPOS VM will be minimal, and performance critical operations remain in the DSRTE.
In some embodiments, the VMM is minimized to enable the performance critical tasks to access I/O drivers/devices directly, or with little overhead. As discussed above, in one embodiment the DSRTE is part of the VMM, and in another embodiment the DSRTE is part of a guest OS/guest VM. Referring to Figure 6, a VMM 611 operates in VT-root mode on the platform 100. A GPOS executes in a guest VM 605. One or more DS-Apps 601 execute within the GPOS guest VM in a DSRTE 603. However, performance critical aspects of the DS-Apps 609 are actually performed by a second guest VM 607. Control may pass to the second guest VM 607 via a VM-EXIT through the VMM 611, caused by the DS-App 601 running in the first guest VM 605. In this embodiment, the VMM 611 determines that control should be transferred to the performance critical DS-App portion in the second guest VM. Also, the portions of the application that are running on different partitions may communicate with each other via shared memory or message passing.
In another embodiment, at least one additional guest VM may run on the platform that is unrelated to the DSRTE.
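The VM-EXIT / VM-ENTER routing of Figure 6 can be modeled in miniature as a dispatch decision inside the VMM. In the toy C sketch below, the event codes, the stats struct, and the use of the figure's reference numerals 605 (GPOS guest) and 607 (performance critical guest) as return values are all illustrative assumptions; real VT exit reasons and VMCS handling are architecture-defined and far richer.

```c
/* Illustrative event classes that cause a VM-EXIT from the GPOS guest. */
enum exit_event { EV_PERF_CRITICAL, EV_OTHER };

struct vmm_stats {
    int exits;               /* total VM-EXITs handled by the VMM */
    int routed_to_dsrte_vm;  /* exits routed to the second guest VM */
};

/* VMM exit handler: decide which guest VM to VM-ENTER next.
 * Returns 607 (the performance critical guest) for performance
 * critical events; otherwise resumes the GPOS guest, 605. */
int handle_vm_exit(enum exit_event e, struct vmm_stats *s)
{
    s->exits++;
    if (e == EV_PERF_CRITICAL) {
        s->routed_to_dsrte_vm++;
        return 607;
    }
    return 605;
}
```

Because most packets are handled entirely inside the DSRTE guest, a handler like this should fire rarely for the performance critical path, which is what keeps the GPOS VM out of the fast path.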
In this case the additional guest VM may perform either performance critical or non-performance critical tasks unrelated to the DSRTE. In yet another embodiment, the DSRTE may require several different performance critical tasks. Each of these tasks may run in its own guest VM, in, for instance, the non-VT-root embodiment (Fig. 6). For instance, in an embedded device with many I/O ports, each port may be configured to perform a different function. Each port's activities may be deemed to be performance critical, and each port's performance critical DSRTE may operate in its own guest VM. [0041] While the above description has been illustrated with a network communication example, embodiments of the present invention may be adapted to be used with a variety of applications. Applications that exhibit a noticeable difference between performance critical and non-performance critical aspects of the application may be good candidates for using this method. For instance, in a set-top box environment, the coding and decoding of audio-visual streams may be performed in the performance critical DSRTE and the user interface or download of program guides or update of software may be performed in the GPOS VM.
In another embodiment, the I/O devices are polled for activity rather than relying on interrupts. This may increase performance. Page faults are expensive, so virtual memory facilities may be disabled for the performance critical code. The performance critical DSRTE may perform better when accessing devices directly. Therefore, in embodiments where the performance critical DSRTE runs in a guest VM rather than in VT-root mode, the guest VM will need to know the memory address offset to the devices to properly access them directly. The PCI devices are memory mapped into physical space. Communication between the two domains may be necessary to effect this requirement.
Referring to Figure 7, there is shown a block diagram illustrating a memory map.
When the performance critical DSRTE portion is run in non-root mode, the guest operating system (guest VM) believes it controls a chunk of physical memory starting at address 0x00000000, when in fact it only has a portion of the system's actual physical memory, starting at some offset decided by the VMM running in VT-root mode. VMMs that use VT have mechanisms that ensure the page tables set up by the guest VM point to the correct regions of physical memory instead of the physical memory as seen by the guest. For example, if the guest OS sets up the page table to map virtual address 0x40000000 to physical address 0x10000000 (using the mappings above), the VMM will correct this in the page table so that virtual address 0x40000000 actually maps to physical address 0x30000000. This is transparent to the guest OS. This technique handles any memory reads/writes performed by the guest OS. But I/O devices such as the Network Interface Card (NIC) also read and write physical memory using Direct Memory Access (DMA). Without any virtualization, the OS would notify the NIC of the physical address of where it should put an incoming packet. With virtualization, this is complicated by the fact that the guest operating system does not typically know the actual physical offset of its view of physical memory. So the guest OS does not know where to tell the NIC to DMA an incoming packet. In the environment described above, however, the performance critical DSRTE has knowledge of being run in a virtualized environment. Thus, the DSRTE may use this knowledge to its advantage. With the addition of a VMCALL to the VMM to ask for its physical memory offset, the guest OS may compute and notify the NIC of the correct physical address of where to DMA an incoming packet.
The techniques described herein are not limited to any particular hardware or software configuration; they may find applicability in any computing, consumer electronics, or processing environment.
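The address fix-up described above can be sketched as follows. The VMCALL stub, the function names, and the 0x20000000 offset are assumptions for illustration; the offset is chosen so that guest-physical 0x10000000 translates to host-physical 0x30000000, matching the example in the text. A real guest would trap into VT-root mode for this query rather than call a local function.

```c
#include <stdint.h>

/* Hypothetical stand-in for the VMCALL that asks the VMM where the
 * guest's "physical" memory actually begins. In a real system this
 * would transfer control to the VMM; here it simply returns the
 * offset from the text's example. */
static uint64_t vmcall_get_phys_offset(void)
{
    return 0x20000000ULL;
}

/* Translate a guest-physical buffer address into the host-physical
 * address that must be programmed into the NIC for DMA. */
uint64_t dma_addr_for_nic(uint64_t guest_phys)
{
    return guest_phys + vmcall_get_phys_offset();
}
```

The guest performs this translation once per buffer before handing the address to the NIC; page-table fix-ups by the VMM never see the DMA path, which is why the explicit VMCALL is needed.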
The techniques may be implemented in hardware, software, or a combination of the two.
For simulations, program code may represent hardware using a hardware description language or another functional description language which essentially provides a model of how designed hardware is expected to perform. Program code may be assembly or machine language, or data that may be compiled and/or interpreted. Furthermore, it is common in the art to speak of software, in one form or another, as taking an action or causing a result. Such expressions are merely a shorthand way of stating execution of program code by a processing system which causes a processor to perform an action or produce a result.
Each program may be implemented in a high level procedural or object-oriented programming language to communicate with a processing system. However, programs may be implemented in assembly or machine language, if desired. In any case, the language may be compiled or interpreted. [0047] Program instructions may be used to cause a general-purpose or special-purpose processing system that is programmed with the instructions to perform the operations described herein. Alternatively, the operations may be performed by specific hardware components that contain hardwired logic for performing the operations, or by any combination of programmed computer components and custom hardware components. The methods described herein may be provided as a computer program product that may include a machine accessible medium having stored thereon instructions that may be used to program a processing system or other electronic device to perform the methods.
[0048] Program code, or instructions, may be stored in, for example, volatile and/or non-volatile memory, such as storage devices and/or an associated machine readable or machine accessible medium including solid-state memory, hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks, digital versatile discs (DVDs), etc., as well as more exotic media such as machine-accessible biological state preserving storage. A machine readable medium may include any mechanism for storing, transmitting, or receiving information in a form readable by a machine, and the medium may include a tangible medium through which electrical, optical, acoustical or other forms of propagated signals or carrier waves encoding the program code may pass, such as antennas, optical fibers, communications interfaces, etc. Program code may be transmitted in the form of packets, serial data, parallel data, propagated signals, etc., and may be used in a compressed or encrypted format. [0049] Program code may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, set top boxes, cellular telephones and pagers, consumer electronics devices (including DVD players, personal video recorders, personal video players, satellite receivers, stereo receivers, cable TV receivers), and other electronic devices, each including a processor, volatile and/or non-volatile memory readable by the processor, at least one input device and/or one or more output devices. Program code may be applied to the data entered using the input device to perform the described embodiments and to generate output information. The output information may be applied to one or more output devices.
One of ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multiprocessor or multiple-core processor systems, minicomputers, mainframe computers, as well as pervasive or miniature computers or processors that may be embedded into virtually any device. Embodiments of the disclosed subject matter can also be practiced in distributed computing environments where tasks or portions thereof may be performed by remote processing devices that are linked through a communications network. [0050] Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally and/or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter. Program code may be used by or in conjunction with embedded controllers. [0051] While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains are deemed to lie within the spirit and scope of the invention.
This disclosure describes techniques for using a non-volatile-memory device such as flash memory to store memory data during hibernation or suspend. By so doing, hard drives and/or data are safer, and less power may be used.
CLAIMS What is claimed is: 1. A method comprising: monitoring operating status of a computing device, the operating status including a suspend mode; directing a non-volatile-memory controller to copy memory data from a volatile system-memory into a non-volatile-memory device in response to detecting a notification of the suspend mode; and directing the non-volatile-memory controller to copy the memory data from the non-volatile-memory device into the volatile system-memory in response to receiving a request to resume from the suspend mode. 2. The method as recited in claim 1, further comprising: receiving, from the non-volatile-memory controller, a notification that the memory data has been copied to the non-volatile-memory device; responsive to receiving the notification, shutting down power to the volatile system-memory; and responsive to receiving a request to wake from the suspend mode, powering up the volatile system-memory. 3. The method as recited in claim 1, further comprising receiving, from the non-volatile-memory controller, a notification that the memory data has been copied from the non-volatile-memory device into the volatile system-memory, the notification indicating that the memory data is available for an operating system of the computing device to use to resume from the suspend mode. 4. The method as recited in claim 1, wherein the notification of the suspend mode is received from an operating system of the computing device and comprises: a request to enter the suspend mode; or a notification that the computing device is going to suspend. 5. The method as recited in claim 1, wherein the volatile system-memory includes static random access memory (SRAM). 6. The method as recited in claim 1, wherein the non-volatile-memory device includes flash memory. 7. 
The method as recited in claim 1 , wherein: directing the non-volatile-memory controller to copy the memory data from the volatile system-memory into the non-volatile-memory device includes communicating a source address corresponding to a location within the volatile system-memory to the non-volatile-memory controller; and directing the non-volatile-memory controller to copy the memory data from the non-volatile-memory device into the volatile system-memory includes communicating a destination address corresponding to a location within the volatile system-memory to the non-volatile-memory controller. 8. A method comprising: monitoring operating status of a computing device, the operating status including a hibernation mode; receiving a notification of the hibernation mode from an operating system of the computing device; and in response to the notification, copying memory data from a volatile system-memory to a non-volatile-memory device as a hibernation file, wherein the non-volatile-memory device is not a memory device from which the operating system is booted. 9. The method as recited in claim 8, wherein receiving the notification of the hibernation mode includes intercepting a command from the operating system to a memory-dump driver, the command comprising instructions to save the memory data from the volatile system-memory to a hard-drive. 10. The method as recited in claim 8, wherein copying the memory data from the volatile system-memory to the non-volatile-memory device as the hibernation file includes intercepting a data write command from a memory-dump driver to a hard drive and redirecting the data write command to the non-volatile-memory device. 11. The method as recited in claim 8, wherein receiving the notification includes receiving a command from the operating system, the command comprising instructions to save the memory data from the volatile system-memory to the non-volatile-memory device as the hibernation file. 12. 
The method as recited in claim 8, wherein the non-volatile-memory device includes flash memory. 13. The method as recited in claim 8, further comprising requesting a hard drive to spin-down, the requesting performed prior to copying the memory data from the volatile system-memory to the non-volatile-memory device. 14. The method as recited in claim 8, wherein copying the memory data from the volatile system-memory to the non-volatile-memory device as the hibernation file enables the operating system to resume from the hibernation mode using the memory data that was saved in the hibernation file. 15. The method as recited in claim 14, wherein a Basic Input Output System (BIOS) is modified to redirect a call from an OS boot loader and the modified BIOS causes the hibernation file to be read from the non-volatile-memory device instead of a hard disk. 16. The method as recited in claim 15, wherein the call that is redirected is a BIOS interrupt 13hex call. 17. A computing device comprising: a non-volatile-memory device; a volatile system-memory; a processor; a hibernation-file handler configured to save a hibernation file to the non-volatile-memory device instead of a hard drive or solid-state-disk from which an operating system is booted; and a BIOS-side hibernation handler configured to redirect a BIOS interrupt 13hex call and cause the hibernation file to be read from the non-volatile-memory device. 18. The computing device as recited in claim 17, further comprising: a non-volatile-memory controller; and a suspend handler configured to direct the non-volatile-memory controller to: copy memory data from the volatile system-memory into the non-volatile-memory device in response to a notification of a suspend mode; and copy the memory data from the non-volatile-memory device into the volatile system-memory in response to a notification of a resume from the suspend mode. 19. 
The computing device as recited in claim 18, further comprising: an on-chip accelerator located within the non-volatile-memory controller, the on-chip accelerator configured to: compress the memory data when copying the memory data into the non-volatile-memory device; and decompress the memory data when copying the memory data into the volatile system-memory. 20. The computing device as recited in claim 17, wherein the hibernation-file handler is a Small-Computer-System-Interface (SCSI) miniport-driver configured to: intercept a data write command from a memory-dump driver to the hard drive; and redirect the data write command to the non-volatile-memory device.
METHOD AND SYSTEM FOR HIBERNATION OR SUSPEND USING A NON-VOLATILE-MEMORY DEVICE RELATED APPLICATIONS [0001] This application claims priority to U.S. Provisional Patent Application Serial No. 61/142,502 filed January 5th, 2009, the disclosure of which is incorporated by reference herein in its entirety. This application also claims priority to U.S. Provisional Patent Application Serial No. 61/142,699 filed January 6th, 2009, the disclosure of which is incorporated by reference herein in its entirety. This application also claims priority to U.S. Provisional Patent Application Serial No. 61/143,548 filed January 9th, 2009, the disclosure of which is incorporated by reference herein in its entirety. BACKGROUND [0002] Modern computing devices employ power-saving modes when not in use. Hibernation is a mode in which an operating system (OS) saves memory data from volatile system-memory to an OS boot partition of a hard drive in the form of a hibernation file, after which the computing device is shut down. When turned back on, the basic input output system (BIOS) of the computing device posts and then loads an OS boot loader. The OS boot loader copies the memory data within the hibernation file back into the volatile system-memory. The OS boot loader then resumes operation of the operating system where the operating system was paused instead of booting the operating system as normal. This allows for currently running applications to retain their data even if the data was not saved before hibernation. [0003] Suspend is a mode in which the operating system shuts down power to most of the devices within the computing device but not to the volatile system-memory such that the memory data is preserved. To resume full use, the operating system powers on the devices and resumes operation using the memory data that was preserved. Suspend uses significantly more power than hibernation but is much faster. 
[0004] While these modes serve their respective purposes, they have undesired limitations. Hibernation mode can be slow to begin and can lose data. Hibernation mode can be slow to begin because it is limited by the speed at which the hard drive can save the memory data to the hibernation file. A computer's data is often stored on a spinning-media hard-disk drive, which is spinning while the hibernation mode is beginning; this presents a data-safety issue. Any substantial movement of the computing device is potentially hazardous while the drive is still spinning. A user who selects to hibernate his laptop, closes the lid, and goes on his way may damage the hard drive and the data it contains. Furthermore, during the time that the hard drive is spinning, both it and the computing device are using power. This is undesirable if the hibernation occurred due to a critical battery alarm because the computing device may run out of power before the hibernation is complete. Even if the device's battery does not fail, using additional power contradicts the point of a power-saving mode. [0005] Suspend mode also has undesired limitations. While significantly faster than hibernation, it uses more power because the volatile system-memory remains powered. Furthermore, if the computing device's power source is lost while suspended, the memory data may not be recovered and any information not saved to the hard disk will likely be lost. This can easily occur, such as when the user unplugs the computing device or when a power source fails. [0006] The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure. 
SUMMARY [0007] This summary is provided to introduce subject matter that is further described below in the Detailed Description and Drawings. Accordingly, this Summary should not be considered to describe essential features nor used to limit the scope of the claimed subject matter. [0008] In one embodiment, a method is described that comprises monitoring operating status of a computing device, the operating status including a suspend mode, directing a non-volatile-memory controller to copy memory data from a volatile system-memory into a non-volatile-memory device in response to detecting a notification of the suspend mode, and directing the non-volatile-memory controller to copy the memory data from the non-volatile-memory device into the volatile system-memory in response to receiving a request to resume from the suspend mode. This embodiment may include receiving, from the non-volatile-memory controller, a notification that the memory data has been copied to the non-volatile-memory device, responsive to receiving the notification, shutting down power to the volatile system-memory, and responsive to receiving a request to wake from the suspend mode, powering up the volatile system-memory. This embodiment may include receiving, from the non-volatile-memory controller, a notification that the memory data has been copied from the non-volatile-memory device into the volatile system-memory, the notification indicating that the memory data is available for an operating system to use to resume from the suspend mode. 
[0009] In another embodiment, a method is described that comprises monitoring operating status of a computing device, the operating status including a hibernation mode, receiving a notification of the hibernation mode from an operating system of the computing device, and in response to the notification, copying memory data from a volatile system-memory to a non-volatile-memory device as a hibernation file, wherein the non-volatile-memory device is not a memory device from which the operating system is booted. This embodiment may include requesting a hard drive to spin-down, the requesting performed prior to copying the memory data from the volatile system-memory to the non-volatile-memory device. [0010] In still another embodiment, a system is described that comprises a non-volatile-memory device, a volatile system-memory, a processor, a hibernation-file handler configured to save a hibernation file to the non-volatile-memory device instead of a hard drive or solid-state-disk from which an operating system is booted, and a BIOS-side hibernation handler configured to redirect a BIOS interrupt 13hex call and cause the hibernation file to be read from the non-volatile-memory device. This embodiment may include a non-volatile-memory controller and a suspend handler configured to direct the non-volatile-memory controller to copy memory data from the volatile system-memory into the non-volatile-memory device in response to a notification of a suspend mode and copy the memory data from the non-volatile-memory device into the volatile system-memory in response to a notification of a resume from the suspend mode. This embodiment may additionally include an on-chip accelerator located within the non-volatile-memory controller, the on-chip accelerator configured to compress the memory data when copying the memory data into the non-volatile-memory device and decompress the memory data when copying the memory data into the volatile system-memory. 
BRIEF DESCRIPTION OF THE DRAWINGS [0011] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different instances in the description and the figures indicates similar or identical items. [0012] Fig. 1 illustrates an example operating environment configured to enable hibernation or suspend using a non-volatile-memory device. [0013] Fig. 2 illustrates a method for suspending or resuming a computing device using a non-volatile-memory device. [0014] Fig. 3 illustrates a method for hibernating a computing device using a non-volatile-memory device. [0015] Fig. 4 illustrates a method for resuming a computing device from hibernation using a non-volatile-memory device. DETAILED DESCRIPTION [0016] As noted in the Background above, conventional methods of implementing hibernation and suspend modes have undesired limitations. Hibernation mode is slow and may damage hard drives. Suspend mode uses more power and memory data may not be preserved during a power failure. The present disclosure describes techniques for using a non-volatile-memory device such as flash memory to store the memory data during hibernation or suspend. By so doing, hard drives and/or data are safer, and less power may be used. [0017] In the discussion that follows, an example operating environment is described. Example methods are also described that may be employed in the example operating environment as well as other environments. In the discussion below, reference will be made to the environment by way of example only and, therefore, implementations described below are not limited to the example environment. [0018] Example Operating Environment Fig. 1 illustrates an example operating environment 100 having a computing device 102. 
Computing device 102 includes one or more processors 104, one or more computer-readable media 106, volatile system-memory 108, and non-volatile-memory device 110. Computer-readable media 106 may include various kinds of media, such as volatile (e.g., Static Random Access Memory, or SRAM) and non-volatile memory (e.g., flash memory, BIOS chip, solid state disk, spinning-media hard-disk drive, or CD/DVD). Computer-readable media 106 may include volatile system-memory 108, non-volatile-memory device 110, and/or any other computer-readable media. Volatile system-memory 108 loses data when power is removed. [0019] Non-volatile-memory device 110 retains data when power is removed. Non-volatile-memory device 110 may include non-volatile memory such as flash memory or solid state disks. Non-volatile-memory device 110 can have a storage capacity as small as the storage capacity of volatile system-memory 108 or even smaller if compression is used. In computing devices that implement hibernation, non-volatile-memory device 110 does not include an OS bootable spinning-media hard-disk drive or an OS bootable solid state disk. OS bootable memory devices are those that include a bootable partition for operating system 112 and OS boot-loader 114. In computing devices that implement both hibernation and suspend, there may be a separate non-volatile-memory device for each of hibernation and suspend. 
For example, non-volatile-memory device 110 may be a flash memory device that is communicatively attached to the other components of computing device 102 through a PCIe connection. [0021] Computer-readable media 106 is shown including operating system (OS) 112, OS boot-loader 114, hibernation-file handler 116, memory-dump driver 118, and BIOS 120. Operating system 112 is configured to operate computing device 102. OS boot-loader 114 is typically installed when installing operating system 112. During a normal boot of computing device 102, BIOS 120 loads OS boot-loader 114, which then loads operating system 112. [0022] Hibernation-file handler 116 can be software and/or hardware that intercepts memory data 122 to be written to a hard drive and redirects it to non-volatile-memory device 110. In some cases, hibernation-file handler 116 is a SCSI miniport-driver that intercepts memory data 122 from memory-dump driver 118 when driver 118 attempts to write memory data 122 to the hard drive. In other cases, hibernation-file handler 116 is software that intercepts a command from operating system 112 to memory-dump driver 118 to request storage of a hibernation file on a spinning-media hard-disk drive. Hibernation-file handler 116 intercepts this command and saves the hibernation file to memory device 110 instead of the hibernation file being stored on the spinning-media hard-disk drive. Alternatively, memory-dump driver 118 is configured to store the hibernation file on memory device 110. In such a case, environment 100 may not include a separate hibernation-file handler 116. In systems that do not implement a hibernation feature, environment 100 may not include hibernation-file handler 116 and memory-dump driver 118. [0023] BIOS 120 is shown including suspend handler 124 and BIOS-side hibernation handler 126, though both may not be present if only one of hibernation or suspend is implemented. 
Suspend handler 124 comprises computer instructions configured to request that non-volatile-memory controller 128 copy memory data 122 in and out of volatile system-memory 108 and non-volatile-memory device 110. Suspend handler 124 may also provide non-volatile-memory controller 128 with the address or addresses of one or more locations within volatile system-memory 108. [0024] BIOS-side hibernation handler 126 comprises computer instructions configured to redirect requests from OS boot-loader 114. The requests may include BIOS INT 13hex requests, which are redirected to read from non-volatile-memory device 110 instead of a hard disk intended by the requests. This redirection causes the hibernation file to be loaded from non-volatile-memory device 110 on system boot up. Operating system 112 then uses the loaded memory data 122 to resume from hibernation. [0025] Volatile system-memory 108 is shown including memory data 122, which is preserved in non-volatile-memory device 110 during hibernation or suspend. Non-volatile-memory device 110 includes non-volatile-memory controller 128. Non-volatile-memory controller 128 is configured to copy memory data 122 into and out of non-volatile-memory device 110. Memory data 122 may be compressed prior to being saved to non-volatile-memory device 110 and decompressed prior to being saved to volatile system-memory 108. This compression is performed or aided by software within suspend handler 124 or by an on-chip accelerator, or a combination of both. The on-chip accelerator may be located within non-volatile-memory controller 128. Note that one or more of the entities shown in Fig. 1 may be further divided, combined, and so on. Thus, environment 100 illustrates some of many possible environments capable of employing the described techniques. [0026] Example Methods The present disclosure describes techniques for suspending or hibernating a computing device using a non-volatile-memory device to preserve the memory data. 
This allows for additional power savings as well as enhanced data safety. These techniques are described using three different methods, though they may act independently or in combination. Aspects of these methods may be implemented in hardware, firmware, software, or a combination thereof. The methods are shown as a set of acts that specify operations performed by one or more entities and are not necessarily limited to the order shown. [0027] Fig. 2 illustrates a method 200 for suspending or resuming a computing device using a non-volatile-memory device. At 202, an operating status of a computing device is monitored. The operating status includes a suspend mode. At 204, a notification of the suspend mode is received from an operating system (OS) of the computing device. For example, suspend handler 124 (Fig. 1) within BIOS 120 receives the notification of suspend from operating system 112. [0028] At 206, a non-volatile-memory controller is directed to copy memory data from volatile system-memory into a non-volatile-memory device. For example, suspend handler 124 issues a request to have non-volatile-memory controller 128 copy memory data 122 from volatile system-memory 108 into non-volatile-memory device 110. The request may include one or more addresses of corresponding locations within volatile system-memory 108 to copy. In this case, controller 128 copies memory data 122 that is addressed. In alternative cases, if no addresses are provided, controller 128 copies all of memory data 122. [0029] Storing memory data 122 in non-volatile-memory device 110 provides extra data security because non-volatile-memory device 110 retains its data during a power failure. A power failure during a conventional suspend can result in a hard shut-down of computing device 102, which results in memory data 122 being lost. 
Powering on from this state requires a normal system boot, which takes considerably longer than resuming from a suspended state. [0030] At 208, a notification indicating that the memory data has been copied is received from the non-volatile-memory controller. For example, suspend handler 124 receives a notification from controller 128 that memory data 122 has been copied into non-volatile-memory device 110. [0031] At 210, power to the volatile system-memory is shut down in response to receiving the notification at 208. Here, suspend handler 124 or other components within BIOS 120 shut down power to volatile system-memory 108. This saves additional power while computing device 102 is in suspend mode. [0032] At 212, the volatile system-memory is powered on in response to receiving a request to resume from the suspend mode. This act is performed only if the power-down at 210 was performed. The request to resume from suspend may come from various sources, such as a keyboard key press, a mouse movement or click, a system wake-up event, or a wake-on-LAN request. For example, suspend handler 124 receives a request to resume and powers on volatile system-memory 108. [0033] At 214, the non-volatile-memory controller is directed to copy the memory data from the non-volatile-memory device into the volatile system-memory. The directing at 214 is similar to the directing at 206 except that the memory data is requested to be copied into volatile system-memory instead of out of volatile system-memory. At 214, the contents of the volatile system-memory are restored to their pre-suspend state. This will allow the operating system to continue operation from where it left off before the suspend paused its operation. Continuing the example, suspend handler 124 requests that non-volatile-memory controller 128 copy memory data 122 from non-volatile-memory device 110 into volatile system-memory 108. 
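The suspend flow of steps 206 through 214 can be sketched in simplified form. The following Python model is illustrative only; names such as `SuspendHandler` and `NvmController` are hypothetical and do not correspond to any actual BIOS or controller interface described in this disclosure:

```python
# Hypothetical simulation of the suspend/resume flow of method 200.
class NvmController:
    """Models non-volatile-memory controller 128: its storage survives power loss."""
    def __init__(self):
        self.storage = {}

    def copy_to_nvm(self, ram, addresses=None):
        # Copy only addressed locations, or all of memory if none given (see [0028]).
        keys = addresses if addresses is not None else list(ram)
        for k in keys:
            self.storage[k] = ram[k]
        return True  # notification that the copy completed (step 208)

    def copy_to_ram(self, ram):
        ram.update(self.storage)
        return True  # notification consumed at step 216

class SuspendHandler:
    """Models suspend handler 124 within the BIOS."""
    def __init__(self, controller):
        self.controller = controller
        self.ram_powered = True

    def on_suspend(self, ram):
        if self.controller.copy_to_nvm(ram):  # step 206
            self.ram_powered = False          # step 210: cut power to RAM
            ram.clear()                       # volatile contents are lost

    def on_resume(self, ram):
        self.ram_powered = True               # step 212: power RAM back on
        self.controller.copy_to_ram(ram)      # step 214: restore pre-suspend state

ram = {0x1000: b"app state"}
handler = SuspendHandler(NvmController())
handler.on_suspend(ram)   # memory preserved in NVM, RAM unpowered
handler.on_resume(ram)    # contents restored to their pre-suspend state
assert ram[0x1000] == b"app state"
```

Because the modeled controller's storage survives power loss, a power failure during this suspend leaves the memory image recoverable, unlike the conventional suspend described in [0029].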
[0034] The techniques may optionally compress the memory data prior to storing it on the non-volatile-memory device and decompress it prior to restoring it to the volatile system-memory, such as through use of an on-chip accelerator added to non-volatile-memory controller 128 or use of suspend handler 124. Compressing the memory data permits use of fewer memory resources to store the contents of the volatile system-memory. If the compression and decompression is sufficiently fast, it may speed up the storage and retrieval of the memory data because less time is spent saving and retrieving the memory data on the non-volatile-memory device. [0035] At 216, a notification indicating that the memory data has been copied is received. After such notification, the conventional method of resuming from suspend can, but is not required to, be implemented as if a non-volatile-memory device was not used to save the memory data. For example, suspend handler 124 receives a notification from controller 128 indicating that memory data 122 is restored to volatile system-memory 108. Suspend handler 124 or other components within BIOS 120 may continue to act to resume computing device 102 using conventional methods. Operating system 112 is able to power back on any devices that were turned off during the suspend and use memory data 122 to continue operation as though no suspend occurred. [0036] Fig. 3 illustrates a method 300 for hibernating a computing device using a non-volatile-memory device to store a hibernation file. Method 300 may be implemented on a computing device that also implements method 200. [0037] At 302, an operating status of a computing device is monitored. The operating status includes a hibernation mode. At 304, a notification of the hibernation mode is received from an operating system (OS) of the computing device. At 306, memory data from volatile system-memory is copied to a non-volatile-memory device in the form of a hibernation file. 
During conventional hibernation, the operating system loads a memory-dump driver, which dumps memory data to a hibernation file on an OS boot partition located on a hard drive. In this method, the memory-dump driver is modified and/or a hibernation-file handler is used. The hibernation-file handler may comprise computer software added to a computing device, such as computing device 102 of Fig. 1. [0038] By way of example, hibernation-file handler 116 receives a notification of hibernation. Hibernation-file handler 116 intercepts each write command from memory-dump driver 118 and redirects each write command to non-volatile-memory device 110. This causes the hibernation file to be stored on non-volatile-memory device 110 instead of a hard drive intended by the write command(s). [0039] In another example, hibernation-file handler 116 intercepts a command from operating system 112 to memory-dump driver 118. The command is originally intended to instruct memory-dump driver 118 to save memory data 122 to a hard drive. Hibernation-file handler 116 intercepts this command and dumps memory data 122 to a hibernation file. The hibernation file is saved to non-volatile-memory device 110 instead of the hard drive intended by the command. [0040] In another example, memory-dump driver 118 is modified to copy memory data 122 to non-volatile-memory device 110 instead of the OS boot partition of the hard drive. In this case, the method forgoes the use of a separate hibernation-file handler 116. [0041] At 308, the operating system is informed that the hibernation file has been saved. This allows the operating system to continue shutting down power to various components of the computing device. In the example in which hibernation-file handler 116 intercepts each write command from memory-dump driver 118, handler 116 will notify driver 118, which in turn will notify operating system 112. 
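The write-command interception described in [0038] can be modeled in a few lines. This Python sketch is a hypothetical illustration rather than actual driver code; the class and method names are invented for clarity:

```python
# Illustrative sketch of [0038]: a hibernation-file handler redirects each
# write command issued by the memory-dump driver away from the hard drive
# and onto the non-volatile-memory device.
class Device:
    """A minimal block device: a mapping of logical block address to data."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

class HibernationFileHandler:
    """Sits between the memory-dump driver and the hard drive."""
    def __init__(self, nvm_device):
        self.nvm = nvm_device

    def intercept_write(self, lba, data):
        # Redirect the write to the NVM device instead of the hard drive.
        self.nvm.write(lba, data)
        return True  # acknowledged; the driver then notifies the OS (step 308)

hard_drive = Device("hdd")
nvm = Device("nvm")
handler = HibernationFileHandler(nvm)

# The memory-dump driver attempts to write the hibernation file block by block.
for lba, chunk in enumerate([b"page0", b"page1"]):
    handler.intercept_write(lba, chunk)

assert hard_drive.blocks == {}  # the hard drive never sees the data
assert nvm.blocks == {0: b"page0", 1: b"page1"}
```

Once every intercepted write has been acknowledged, the hard drive is no longer needed and, as noted in [0043], can safely be spun down.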
In the example in which hibernation-file handler 116 intercepts a command from operating system 112 to memory-dump driver 118, handler 116 sends operating system 112 a notification when done copying memory data 122. In the example in which memory-dump driver 118 has been modified, driver 118 will notify operating system 112. The notification at 308 allows operating system 112 to continue shutting down the system in hibernation mode. [0042] Storing the hibernation file on a non-volatile-memory device that is separate from an OS boot disk allows for faster hibernation and resumption in cases where the non-volatile-memory device is faster than the OS boot disk. The OS boot disk is a disk that has a boot partition for the OS. In an example in which the OS boot disk is a solid-state-disk, this method can still be beneficial as faster memory can be used to store the hibernation file. The size requirements for the OS boot disk are quite large, and thus it can be cost prohibitive to use large amounts of the fastest memory. Furthermore, use of a solid-state-disk as the OS boot disk is often cost prohibitive because the cost per capacity is greater than with spinning-media hard-disk drives. The non-volatile-memory device of method 300 can be small enough to only store a hibernation file. In such a case, it is often economical to use the fastest memory available. [0043] Additionally, the techniques may request one or more spinning-media hard-disk drives in the computing device to spin-down. This allows for additional power savings and data safety. As a separate non-volatile-memory device is used to store the hibernation file, spinning-media hard-disk drives are no longer needed as soon as the hibernation process begins and can safely be spun down. This act is performed prior to 306 if maximum power savings and data safety are desired. [0044] Fig. 
4 illustrates a method 400 for resuming a computing device from hibernation, which may be implemented on a computing device that also implements method(s) 200 and/or 300. [0045] At 402, a request to read a hibernation file from a boot disk is received. In some conventional approaches, when hibernation is to be terminated, the OS boot loader uses BIOS interrupt 13hex to read the hibernation file from the OS boot disk. In this method, a BIOS-side hibernation handler can instead receive the request. For example, BIOS-side hibernation handler 126 of Fig. 1 receives the request to read all or part of the hibernation file. The request is received from OS boot-loader 114. [0046] At 404, the request to read the hibernation file from the OS boot disk is redirected and the hibernation file is read from a non-volatile-memory device. Continuing the example, BIOS-side hibernation handler 126 reads the hibernation file from non-volatile-memory device 110 instead of from the OS boot disk as requested. If the hibernation file is not found, BIOS-side hibernation handler 126 may then read from the OS boot disk. Once memory data 122 within the hibernation file is loaded into volatile system-memory 108, operating system 112 resumes operation using memory data 122. [0047] Note that OS boot-loader 114 may be modified to read the hibernation file from non-volatile-memory device 110 and, as such, no redirection is used. In this case, BIOS-side hibernation handler 126 is part of OS boot-loader 114, such that the modified OS boot-loader requests the hibernation file from the non-volatile-memory device and loads the memory data that is within the hibernation file into volatile system-memory. [0048] One or more of the techniques described above can be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. 
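As one hypothetical illustration of such a program, the read redirection of method 400, including the fallback to the OS boot disk described at [0046], might be modeled as follows; the class and method names are invented and stand in for BIOS-level code:

```python
# Hypothetical sketch of method 400: a BIOS-side handler services the boot
# loader's disk-read request from the NVM device first, falling back to the
# OS boot disk if the requested block of the hibernation file is not found.
class BiosHibernationHandler:
    def __init__(self, nvm_blocks, boot_disk_blocks):
        self.nvm = nvm_blocks              # hibernation file, if present
        self.boot_disk = boot_disk_blocks  # OS boot disk contents

    def int13_read(self, lba):
        # Step 404: try the non-volatile-memory device before the boot disk.
        if lba in self.nvm:
            return self.nvm[lba]
        return self.boot_disk.get(lba)     # fallback per [0046]

handler = BiosHibernationHandler(
    nvm_blocks={0: b"hibernation header", 1: b"memory image"},
    boot_disk_blocks={0: b"stale copy on boot disk", 7: b"boot sector"},
)
assert handler.int13_read(0) == b"hibernation header"  # redirected to NVM
assert handler.int13_read(7) == b"boot sector"          # fallback to boot disk
```

In the variant of [0047], the same lookup order would live inside the modified OS boot-loader rather than behind an intercepted interrupt.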
Generally, the techniques can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment containing both hardware and software components. In one implementation, the methods are implemented in software, which includes but is not limited to firmware, resident software, microcode, etc. Furthermore, the methods can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. [0049] For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk - read only memory (CD-ROM), compact disk - read/write (CD-R/W) and DVD. [0050] Although the subject matter has been described in language specific to structural features and/or methodological techniques and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features, techniques, or acts described above, including orders in which they are performed.
An electrode arrangement has a matrix with rows and columns of capacitive touch sensors arranged in a single layer, wherein each touch sensor has a first electrode and an associated second electrode, wherein the first electrodes in each row of the matrix are connected and the second electrodes in each column of the matrix are connected, and wherein the electrode arrangement further has a capacitive coupling operable to feed an alternating transmission signal only to the top and bottom row of connected first electrodes and to the most left and most right column of connected second electrodes.
CLAIMS 1. An electrode arrangement comprising: a matrix with rows and columns of capacitive touch sensors arranged in a single layer, wherein each touch sensor comprises a first electrode and an associated second electrode, wherein the first electrodes in each row of the matrix are connected and the second electrodes in each column of the matrix are connected, and wherein the electrode arrangement further comprises a capacitive coupling operable to feed an alternating transmission signal only to the top and bottom row of connected first electrodes and to the most left and most right column of connected second electrodes. 2. The electrode arrangement according to claim 1, wherein the capacitive coupling comprises first, second, third, and fourth capacitors, wherein a first terminal of the first capacitor is connected to the top row electrodes, a first terminal of the second capacitor is connected to the bottom row electrodes, a first terminal of the third capacitor is connected to the most left column electrodes, and a first terminal of the fourth capacitor is connected to the most right column electrodes, and wherein second terminals of the first, second, third, and fourth capacitors are connected together and receive the alternating transmission signal. 3. The electrode arrangement according to claim 1, further comprising a contact area comprising a plurality of feeding lines configured to provide electrical connection to the rows and columns. 4. The electrode arrangement according to claim 1, further comprising a substrate on a top side of which said first and second electrodes are arranged. 5. The electrode arrangement according to claim 4, wherein the substrate is a flexible substrate. 6. 
The electrode arrangement according to claim 4, further comprising a switching circuitry which in a first operating mode couples the rows and columns with a touch detection device and in a second operating mode couples the top row, bottom row, most left column, and most right column, respectively with respective inputs of a non-touching gesture detection device.7. The electrode arrangement according to claim 6, wherein the first and second electrodes operate as projective capacitive touch sensors in the first operating mode. 8. The electrode arrangement according to claim 7, wherein four electrodes are formed by the top row, bottom row, most left column, and most right column receive a continuous alternating transmission signal through the capacitive coupling during the second operating mode and are evaluated by determining a loading of each of the four electrodes. 9. The electrode arrangement according to claim 6, wherein in the second operating mode unused electrodes are switched together to receive the alternating transmission signal.10. The electrode arrangement according to claim 1, wherein the first and second electrodes are each comb shaped and arranged in interdigital fashion.11. A sensor arrangement comprising an electrode arrangement according to claim 1 , wherein the electrode arrangement further is arranged on top of a substrate and comprises a connection area comprising a plurality of feeding lines configured to connect said rows and column electrodes with a connector.12. The sensor arrangement according to claim 1 1, further comprising a controller connected with the feeding lines, wherein the controller is configured to operate in first mode or in a second mode, wherein the first mode uses electrode formed by the top row, bottom row, most right column, and most left column for a touch- less gesture detection and the second mode uses the first and second electrodes as projective capacitive touch sensors for a touch based detection mode.13. 
The sensor arrangement according to claim 11, wherein the capacitive coupling comprises first, second, third, and fourth capacitors, wherein a first terminal of the first capacitor is connected to the top row electrodes, a first terminal of the second capacitor is connected to the bottom row electrodes, a first terminal of the third capacitor is connected to the most left column electrodes, and a first terminal of the fourth capacitor is connected to the most right column electrodes, and wherein second terminals of the first, second, third, and fourth capacitors are connected together and receive the alternating transmission signal.14. The sensor arrangement according to claim 11 , further comprising a contact area comprising a plurality of feeding lines configured to provide electrical connection to the rows and columns.15. The sensor arrangement according to claim 11, further comprising a substrate on a top side of which said first and second electrodes are arranged.16. The sensor arrangement according to claim 15, wherein the substrate is a flexible substrate.17. The sensor arrangement according to claim 15, further comprising a switching circuitry which in a first operating mode couples the rows and columns with a touch detection device and in a second operating mode couples the top row, bottom row, most left column, and most right column, respectively with respective inputs of a non-touching gesture detection device. 18. The sensor arrangement according to claim 17, wherein the first and second electrodes operate as projective capacitive touch sensors in the first operating mode.19. The sensor arrangement according to claim 18, wherein four electrodes are formed by the top row, bottom row, most left column, and most right column receive a continuous alternating transmission signal through the capacitive coupling during the second operating mode and are evaluated by determining a loading of each of the four electrodes.20. 
The sensor arrangement according to claim 17, wherein in the second operating mode unused electrodes are switched together to receive the alternating transmission signal.21. The sensor arrangement according to claim 11, wherein the first and second electrodes are each comb shaped and arranged in interdigital fashion.22. A method for operating a sensor arrangement comprising a matrix with rows and columns of capacitive touch sensors arranged in a single layer, wherein each touch sensor comprises a first electrode and an associated second electrode, wherein the first electrodes in each row of the matrix are connected and the second electrodes in each column of the matrix are connected, the method comprising:in a first operating mode, during a measurement cycle feeding a continuous alternating transmission signal through a capacitive coupling only to gesture detection electrodes formed by top and bottom row of connected first electrodes and most left and most right column of connected second electrodes, and evaluating a loading of said gesture detection electrodes by processing signals from the gesture detection electrodes to determine a three-dimensional location of an object entering an electric field created by the gesture detection electrodes; in a second operating mode, turning off said alternating transmission signal and measuring a capacitance of each capacitive touch sensor to determine whether a capacitive touch sensor has been touched.23. The method according to claim 22, wherein in said first mode the alternating transmission signal is also fed capacitively to each otherwise unused first and second electrode of the matrix.
ELECTRODE ARRANGEMENT FOR GESTURE DETECTION AND TRACKING

RELATED PATENT APPLICATION

This application claims priority to commonly owned U.S. Provisional Patent Application No. 62/039,734 filed August 20, 2014, which is hereby incorporated by reference herein for all purposes.

TECHNICAL FIELD

The present disclosure relates to capacitive sensing systems and methods of operating such, in particular to an electrode arrangement for a capacitive sensing system using electric field effects.

BACKGROUND

The "GestIC®" integrated circuit, also known as MGC3130, manufactured by the assignee of this application, is a highly sensitive capacitive sensing technology that can be used for three-dimensional touch-less gesture detection and tracking using a quasi-static alternating electric near field, for example around 100-200 kHz. Such a system usually uses a transmitting electrode receiving an alternating signal, such as a sinusoidal or square wave signal, to generate the electric field. A plurality of receiving electrodes are arranged, for example, above the transmitting electrode in a frame-like fashion, and from the received signals a three-dimensional position of an object can be reconstructed within an integrated circuit device through signal processing.

Human interface devices (HID) that use such an integrated circuit device require sensor electrodes that are often formed in layers of conductive material, e.g. stripes of copper on a printed circuit board (PCB) layer. These electrodes are electrically connected to a detection unit in the integrated circuit. For a detection system, a conventional electrode arrangement can be formed on a multi-layer printed circuit board, wherein the bottom layer, either in its entirety or over a significant portion, is used as a transmitter, and smaller receiving electrodes and compensation electrodes can be formed on the top layer.
More than two layers can be provided to build an electrode, which also may increase the manufacturing cost of such electrode arrangements. The gesture detection unit's measurement value depends, among others, on the position of a target object (finger/hand) in the sensor electrode's vicinity, which influences the capacitive coupling between electrode and target, yielding a target measurement signal that depends on the distortion of the alternating electric field. The gestures are performed above a detection area without touching any area of the respective device. In addition, touch detection may also be required for performing/initiating certain functions of the device.

Flatness of the industrial design and manufacturing costs are driving projective capacitive touch displays in consumer and other industries. Today, an increasing number of touch panels in consumer display applications are single-layer electrode designs, which are easier to manufacture, achieve higher yields, and are thinner and of significantly lower cost. Furthermore, single-layer designs may offer better optical characteristics (higher transparency). Today's two-layer GestIC® electrode design is a barrier to accessing such early mass volume markets with 3D hand position tracking and gesture recognition.

SUMMARY

Hence, there is a need for a less expensive electrode arrangement.
According to an embodiment, an electrode arrangement may comprise a matrix with rows and columns of capacitive touch sensors arranged in a single layer, wherein each touch sensor comprises a first electrode and an associated second electrode, wherein the first electrodes in each row of the matrix are connected and the second electrodes in each column of the matrix are connected, and wherein the electrode arrangement further comprises a capacitive coupling operable to feed an alternating transmission signal only to the top and bottom row of connected first electrodes and to the most left and most right column of connected second electrodes.According to a further embodiment, the capacitive coupling may comprise first, second, third, and fourth capacitors, wherein a first terminal of the first capacitor is connected to the top row electrodes, a first terminal of the second capacitor is connected to the bottom row electrodes, a first terminal of the third capacitor is connected to the most left column electrodes, and a first terminal of the fourth capacitor is connected to the most right column electrodes, and wherein second terminals of the first, second, third, and fourth capacitors are connected together and receive the alternating transmission signal. According to a further embodiment, the electrode arrangement may further comprise a contact area comprising a plurality of feeding lines configured to provide electrical connection to the rows and columns. According to a further embodiment, the electrode arrangement may further comprise a substrate on a top side of which the first and second electrodes are arranged. According to a further embodiment, the substrate can be a flexible substrate. 
According to a further embodiment, the electrode arrangement may further comprise a switching circuitry which in a first operating mode couples the rows and columns with a touch detection device and in a second operating mode couples the top row, bottom row, most left column, and most right column, respectively, with respective inputs of a non-touching gesture detection device. According to a further embodiment, the first and second electrodes may operate as projective capacitive touch sensors in the first operating mode. According to a further embodiment, four electrodes formed by the top row, bottom row, most left column, and most right column may receive a continuous alternating transmission signal through the capacitive coupling during the second operating mode and be evaluated by determining a loading of each of the four electrodes. According to a further embodiment, in the second operating mode unused electrodes may be switched together to receive the alternating transmission signal.
According to a further embodiment, the first and second electrodes are each comb shaped and arranged in interdigital fashion.

According to another embodiment, a sensor arrangement may comprise an electrode arrangement as described above, wherein the electrode arrangement is further arranged on top of a substrate and comprises a connection area comprising a plurality of feeding lines configured to connect the row and column electrodes with a connector.

According to a further embodiment, the sensor arrangement may further comprise a controller connected with the feeding lines, wherein the controller is configured to operate in a first mode or in a second mode, wherein the first mode uses electrodes formed by the top row, bottom row, most right column, and most left column for a touch-less gesture detection and the second mode uses the first and second electrodes as projective capacitive touch sensors for a touch based detection mode.

According to yet another embodiment, a method for operating a sensor arrangement comprising a matrix with rows and columns of capacitive touch sensors arranged in a single layer, wherein each touch sensor comprises a first electrode and an associated second electrode, wherein the first electrodes in each row of the matrix are connected and the second electrodes in each column of the matrix are connected, may comprise the steps of: in a first operating mode, during a measurement cycle, feeding a continuous alternating transmission signal through a capacitive coupling only to gesture detection electrodes formed by the top and bottom row of connected first electrodes and the most left and most right column of connected second electrodes, and evaluating a loading of the gesture detection electrodes by processing signals from the gesture detection electrodes to determine a three-dimensional location of an object entering an electric field created by the gesture detection electrodes; and in a second operating mode, turning off the alternating transmission signal and measuring a capacitance of each capacitive touch sensor to determine whether a capacitive touch sensor has been touched. According to a further embodiment of the above method, in the first mode the alternating transmission signal is also fed capacitively to each otherwise unused first and second electrode of the matrix.

BRIEF DESCRIPTION OF THE DRAWINGS

Figs. 1 and 2 show conventional sensor arrangements for capacitive three-dimensional gesture detection.

Fig. 3 shows a simplified equivalent circuit of a sensor arrangement according to Fig. 1 or 2.

Fig. 4 shows a first embodiment of a one-layer sensor arrangement with a grid shaped center electrode.

Fig. 5 shows a second embodiment of a one-layer sensor arrangement with a transmission center electrode.

Fig. 6 shows a third embodiment of a one-layer sensor arrangement with a plurality of projected capacitive touch sensor electrodes.

Fig. 7 shows a fourth embodiment of a one-layer sensor arrangement with a plurality of projected capacitive touch sensor electrodes.

Fig. 8 shows a first embodiment of a sensor circuit using a one-layer sensor arrangement with a plurality of projected capacitive touch sensor electrodes.

Fig. 9 shows a second embodiment of a sensor circuit using a one-layer sensor arrangement with a plurality of projected capacitive touch sensor electrodes.

Fig. 10 shows a third embodiment of a sensor circuit using a one-layer sensor arrangement with a plurality of projected capacitive touch sensor electrodes.
DETAILED DESCRIPTION

According to various embodiments, a sensor arrangement, in particular a sensor arrangement for a non-touching three-dimensional gesture detection system using effects of a quasi-static alternating electric near field, can be designed that provides for lower material and manufacturing costs, thinner sensor designs, and a better optical performance of transparent designs.

As mentioned above, a three-dimensional capacitive non-touching detection system generates a quasi-static electric field, wherein disturbances in that field caused by an object entering it are evaluated. The evaluation makes it possible to determine a three-dimensional location of the object, such as a finger of a user, and to track its position to further determine whether a gesture from a predefined pool of gestures has been performed. Such a system can also operate as a three-dimensional touchless mouse or control any kind of suitable operations. Such a system usually uses a transmitting electrode receiving an alternating signal, such as a sinusoidal or square wave signal, for example having a frequency of 100-200 kHz, to generate the quasi-static alternating electric field. Contrary to, for example, mutual or self capacitance measurements, the transmitting electrode is supplied permanently with the generator signal, and disturbances in the generated field are measured while the field is permanently upheld during a measurement. The system does not evaluate single pulses, voltages generated by single or multiple pulses, or associated charge changes of the sensor electrodes, as is common in capacitance measurement systems, for example a capacitive voltage divider or a charge time measurement unit used for mutual or self capacitance measurements.
In some embodiments, a plurality of receiving electrodes are arranged, for example in a frame-like fashion, to evaluate the quasi-static electric field generated by the transmitting electrode, and from the received signals a three-dimensional position of an object can be reconstructed within an integrated circuit device through signal processing. In other embodiments, the same electrodes are used for transmitting and receiving, and while still the same electric field is generated, the evaluation measures a load on each transmitter/receiver electrode caused by a disturbance in the electric field.

The various embodiments disclosed provide solutions to eliminate one of the two electrode layers in an electrode design, such as for example an electrode arrangement for Microchip's GestIC® 3D hand tracking and gesture recognition technology. However, the disclosed design may be useful for other types of sensor devices and is not limited to the GestIC® 3D hand tracking and gesture recognition technology. According to various embodiments, techniques will be described of how TX and RX electrodes can be integrated in only one single electrode layer. The described techniques apply to any electrode system using similar electrode designs as proposed for the GestIC® system but are not limited to such a system. Furthermore, solutions are disclosed for how single layer electrodes can be integrated into one-layer projected capacitive (pCAP) touch matrix designs.

Figs. 1 and 2 show a conventional two-layer electrode arrangement. The design shown in Fig. 1 includes a center receiving electrode RXcenter, whereas the embodiment shown in Fig. 2 uses a frame design with an open center area. In both designs, there are four receiving electrodes RXNorth, RXEast, RXSouth, and RXWest at different top layer locations which provide spatial information about an object, e.g., a hand, that performs a gesture in an area above the electrode arrangement.
These receiving electrodes (RX) receive an alternating electric field generated by an underlying transmission electrode TXbottom. Non-conductive carrier material 110, 120 (e.g. of plastic, PCB material, glass, etc.) isolates the RX electrodes from the transmission electrode(s) (TX). TX electrode TXbottom in the bottom layer both excites the E-field and shields the RX electrodes from backside noise. The electric field can for example be generated by a 100 kHz square-wave signal fed to the TX electrode TXbottom. A respective electric field is then projected by the transmission electrode TXbottom in an area, for example, approximately up to 10-15 cm, above the carrier material 110, 120. A user performing a gesture, e.g. with his/her hand, within this area disturbs the electric field, and these disturbances can be detected by the four RX electrodes RXNorth, RXEast, RXSouth, and RXWest. From the received signals, a three-dimensional position can be processed. The signal deviation, the first and second derivatives, as well as the calculated, linearized distance to each electrode can be used to perform a gesture comparison/evaluation.

Fig. 3 shows a simplified equivalent circuit. CRXTX represents the capacitance between an RX and the TX electrode and can be around 10-30 pF. CRXG represents the capacitance from an RX electrode to ground and can be around 10-30 pF. CH represents the capacitance between a user's hand and an RX electrode and can be around 1 fF - 1 pF. CBuf represents a high impedance input capacitance of an RX input buffer coupled with the electrode and can be around 6 pF.

A non-touching near field detection system, such as the one used in GestIC® technology, measures the RX input amplitude change caused by the influence of the user's hand on the electrical field excited via the TX electrode.
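The simplified equivalent circuit of Fig. 3 can be read as a capacitive voltage divider: the RX amplitude is set by CRXTX against the total capacitance from the RX electrode to ground (CRXG, CBuf, and the hand coupling CH). The following is only a first-order sketch of that model, assuming the user's hand acts as a grounded plate; the drive voltage and all component values are illustrative, not taken from the patent:

```python
# First-order capacitive-divider model of the Fig. 3 equivalent circuit.
# Assumption: the user's hand behaves as a grounded plate, so CH simply
# adds to the capacitance to ground. All numeric values are illustrative.

def rx_amplitude(v_tx, c_rxtx, c_rxg, c_buf, c_hand=0.0):
    """RX electrode amplitude for a TX drive of amplitude v_tx (volts)."""
    c_ground = c_rxg + c_buf + c_hand   # total capacitance to ground
    return v_tx * c_rxtx / (c_rxtx + c_ground)

pF = 1e-12

# Typical ranges from the description: CRXTX and CRXG 10-30 pF, CBuf ~6 pF,
# CH up to ~1 pF when the hand is close.
no_hand = rx_amplitude(3.3, 20 * pF, 20 * pF, 6 * pF)
hand    = rx_amplitude(3.3, 20 * pF, 20 * pF, 6 * pF, c_hand=0.5 * pF)

# The signal deviation (amplitude drop) is what the detection unit evaluates.
deviation = no_hand - hand
print(f"no hand: {no_hand:.4f} V, hand: {hand:.4f} V, deviation: {deviation*1e3:.2f} mV")
```

The sketch also illustrates the design target stated below: the deviation is largest when CRXTX and the total ground capacitance are small and of similar size.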
The design target is to maximize the signal deviation of the received signal. In two-layer electrode designs the stacked electrode setup provides both good shielding of the RX electrodes against subjacent noise sources, such as electronic circuits and liquid crystal displays, and against ground in general.

In an optimum electrode design the CRXTX and CRXG capacitances are small and of similar size. This scenario is described, for example, in "GestIC® Design Guide, Electrodes and System Design MGC3130", available from Microchip Technology Inc. and hereby incorporated by reference, wherein the lower limit of CRXG is the input capacitance of the detection circuit (e.g. 4-5 pF). In the two-layer design the RX - TX electrode distance and a low permittivity of the insulating carrier material allow a small CRXTX, wherein the shielding TX layer assures small CRXG values representing the RX electrode capacitance to ground.

In the single layer design according to various embodiments, where TX and RX electrodes are per definition in the same layer, sufficient E-field propagation in the z-dimension must be ensured. TX electrodes for these types of detection circuits can, according to various embodiments, be: a) separate TX structures in the same layer as the RX electrodes; b) the RX electrodes themselves; c) the electrode structure of a capacitive or resistive touch panel in the same layer.

In single layer designs, the routing of feeding lines is particularly important since interlayer through hole connections aren't possible by definition. Optimum designs do not have any feeding line intersections at all. The proposed various embodiments show examples of how to realize such designs. Bridges can be allowed in certain electrode technologies, e.g. ITO on foil or glass, printed foils, etc. However, such technologies are expensive. Bridges can be realized on the flex cable connecting the electrode board. Furthermore, bridges can be realized on the PCB and the chip connected to the electrodes.

The design of Fig. 4 shows a solution of integrating the TX electrode into the RX electrode layer. The TX electrode 410 flows ring-like around the RX electrodes 420 from both sides to lower ground influences. One interruption 430 of the TX ring 410 per RX electrode 420 allows the connection of the respective RX electrode feeding line 440. Only one RX feeding line 440 per electrode is required as shown in Fig. 4.

The TX rings 410 around each RX electrode 420 shield against ground from outside device parts, e.g. a metal housing, and thus maintain sensitivity. Compared to a conventional design, for example a GestIC® design as shown in Figs. 1 and 2, the TX electrode ring 410 provides no shielding from ground underneath. To maintain about similar CRXTX and CRXG values as mentioned above, the TX ring 410 must be closer to the RX electrodes 420 for smaller ground distances underneath the RX electrode 420. Ground can be, e.g., a display below a transparent one-layer electrode structure.

The individual frame electrode TX rings 410 also form the TX structure for an optional RX center electrode 450 as for example used in a GestIC® design. In case no RX center electrode is required, e.g. for center touch detection, the center area can advantageously be filled by the TX electrode 510 as shown in Fig. 5. The E-field distribution and the sensitivity of the system increase. In the proposed design of Fig. 5 only one TX feeding line 520 is required. The center electrode 510 is directly connected to the ring structure 410 surrounding electrode 420 and/or any other accessible ring structure as shown in Fig. 5.

According to some embodiments, a complete one-layer projective capacitive touch matrix can be integrated in the center area of such a frame electrode structure as shown in Fig. 6. All electrode feeding lines from the near field receiving electrodes 420 and the interior pCAP electrodes 610 are routed through one corner 620 without any intersection.
This design saves costs since it requires only one connector from the electrodes to the electronic circuit board and doesn't require bridges on the one-layer electrode board or glass 605. Any necessary connection can be formed on the controller PCB or within the connector as indicated in Fig. 6. The required bridges for the pCAP matrix to form electrode columns and rows are made either on the flex connector, the electronic circuit board (PCB), or the touch controller chip according to the state of the art. In Fig. 6 connections are shown by dots. All other crossings require bridges.

In case of time-multiplexed operation between pCAP and GestIC®, to avoid interference between both measurements, the complete touch matrix may be driven with the GTX signal during GestIC® operation (GTX is in the following the GestIC® TX transmission signal). Thus the touch electrodes 610 are switched together to form a single transmission electrode connected to the ring structure 410. This switching is performed external to the board 605, e.g., by respective switching circuitry. This has the advantage of a defined and strong E-field during GestIC® operation and fastest handover between pCAP and GestIC®. No remaining charges on the pCAP electrodes 610 may influence the very sensitive GestIC® measurement. Typically an analog multiplexer, which can be internal on the controller chip, may be used to allow this operation mode. E.g., the GestIC® chip or any other suitable touchless detection device may be designed to perform this function, or it may be implemented externally using, for example, analog multiplexer chips.

Fig. 7 shows another example of a pCAP one-layer touch matrix design in combination with the one-layer electrode arrangement. The pCAP TX and RX electrodes 710 are realized as comb structures here. Fig. 7 further shows that the substrate 605 can be extended or designed to provide for a connector section 720 that allows for connection of the individual feeding lines that connect to the transmitting electrode, the receiving electrodes, and the plurality of pCAP electrodes 710.

The substrate 605 in any embodiment can be a rigid printed circuit board comprising an area that receives the connector 720, or it may comprise a section 720 that directly forms a printed circuit connector as known in the art. Alternatively, the substrate can be a flexible substrate that either provides for a connector or forms a connection section 720 that can be inserted into a connector.

Yet another solution according to some embodiments is shown in Fig. 8 and is the discrete realization of the RX-TX capacitance of the GestIC® system. The TX signal is coupled onto the RX electrodes 420 via discrete CTX coupling capacitances 810 for each electrode 420. The CTX capacitances 810 can be either discrete components or integrated on the GestIC® chip. To fulfill the optimization criterion (CTX = CRXTX ≈ CRXG), the coupling capacitances CTX North to CTX West should be individually tunable (e.g. 5 pF, 10 pF, 15 pF, ..., 50 pF). Fig. 8 shows that no dedicated TX electrode is required. The RX electrodes 420 distribute the E-field and are sensitive to the E-field changes caused by the user's hand. Hence, each electrode 420 operates as a transmitter and receiver at the same time, wherein the receiving function is performed by determining a load on each electrode 420. This solution is simpler and easier to realize because CTX tuning can be done by approximation. No E-field simulations are required to match CRXTX and CRXG. On the other hand, ground shielding may be of lower effect because of the high impedance TX signal on the RX electrodes 420.
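The tuning-by-approximation mentioned above can be sketched as picking, from the available discrete steps, the CTX value closest to each electrode's ground capacitance CRXG. A minimal illustration, where the step set follows the 5 pF ... 50 pF example and the per-electrode CRXG values are invented:

```python
# Pick the closest available CTX step to each electrode's CRXG, following
# the criterion CTX = CRXTX ≈ CRXG. The measured values are hypothetical.

CTX_STEPS_PF = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]  # selectable capacitors

def tune_ctx(c_rxg_pf):
    """Return the discrete CTX step (pF) closest to the target CRXG (pF)."""
    return min(CTX_STEPS_PF, key=lambda step: abs(step - c_rxg_pf))

# Hypothetical per-electrode ground capacitances in pF:
measured = {"North": 12.0, "East": 18.5, "South": 27.0, "West": 8.0}
ctx = {name: tune_ctx(value) for name, value in measured.items()}
print(ctx)  # e.g. {'North': 10, 'East': 20, 'South': 25, 'West': 10}
```

Because only the nearest discrete step is needed, no E-field simulation enters the procedure, matching the "tuning by approximation" remark above.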
Using the center area for a GestIC® touch area (GestIC® center electrode 450) and for a pCAP matrix (610, 710) is the same as for the solutions before.

As shown in the embodiments of Figs. 7 and 8, a pCAP electrode 710 is formed by an upper and a lower comb-like structure arranged in an interdigital fashion. Such pCAP electrodes 710 can be arranged in a matrix as shown in Figs. 7-9. By combining certain upper and lower electrodes, rows and columns can be formed which can be used for a dual function. In a gesture detection mode (also referred to as GestIC®-mode hereinafter), an entire row or column can be separately used to form an electrode similar to electrode 420. In pCAP-mode these electrodes are used as originally intended. Switching circuitry, which is preferably arranged outside the sensor board, can then be used to operate the panel in either mode.

Fig. 9 shows how a single layer touch matrix, with, for example, 15 pCAP sensors 710, can be shared between pCAP-mode and GestIC® signal acquisition mode. Here, the GestIC® electrodes are formed by the elements of the comb structure electrodes and then used as transmitting and receiving structures. Each pCAP sensor 710 consists of an upper and a lower comb structured electrode. The upper electrodes of the top row are connected together to form the NORTH electrode 930. The lower elements of the last pCAP sensors in each row are connected together to form the EAST electrode 940. The upper pCAP electrodes of the bottom row are connected to form the SOUTH electrode 950, and the lower electrodes of each first pCAP electrode in each row are connected together to form the WEST electrode 960. This connection scheme still allows evaluating each pCAP electrode pair separately when the system operates in pCAP-mode, due to the fact that the lower electrodes are connected to form columns and the upper electrodes are connected to form rows.
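The NORTH/EAST/SOUTH/WEST scheme yields four loading signals from which a coarse lateral hand position can be reconstructed. The actual GestIC® signal processing (signal deviation, derivatives, linearized distances) is not specified here, so the following is only a toy centroid-style sketch under invented signal values:

```python
# Toy reconstruction of a lateral hand position from the four frame-electrode
# signal deviations. This is a rough sketch only; the controller described in
# the text additionally uses derivatives and linearized distances.

def estimate_xy(north, east, south, west):
    """Map four non-negative signal deviations to a normalized (x, y) in [-1, 1]^2."""
    x = (east - west) / (east + west) if (east + west) else 0.0
    y = (north - south) / (north + south) if (north + south) else 0.0
    return x, y

# Hypothetical deviations: hand nearer the EAST and NORTH electrodes.
x, y = estimate_xy(north=0.8, east=0.9, south=0.2, west=0.3)
print(f"x={x:+.2f}, y={y:+.2f}")  # positive x: east side, positive y: north side
```

A z-estimate could similarly be derived from the overall signal magnitude, since all four deviations grow as the hand approaches the panel.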
The gesture detection TX signal GTX can be coupled via a switch 920 capacitively into the gesture detection GRX electrodes (here CTX East, CTX South, CTX West, CTX North) when the system operates in gesture detection mode. Those outer electrodes 930...960 are used as outputs in gesture detection mode and must be set to high impedance during the pCAP measurement. This can be done by an analog switch/multiplexer circuit that turns off the GTX signal.

The advantage of this solution is a more compact electrode design where the active pCAP touch area extends up to the boundaries. In this design it may be necessary to assure that the electrode pattern (e.g. comb) is more sensitive than the longer feeding lines. Therefore the surface of a feeding line should be much smaller than that of the electrode. In general, feeding lines should be very thin (e.g. using "nanowire" technology).

With respect to Fig. 10, to achieve a high GestIC®-mode sensitivity, it can again be advantageous that the inside electrode area is driven with the transmission signal GTX during a GestIC®-mode operation, which provides better shielding against ground and better E-field distribution in the z-direction. By this method, remaining charges on the inner electrodes from pCAP operation are effectively put to a defined potential, and no transfer effects between pCAP-mode and GestIC®-mode occur. The switches/analog multiplexers 1010 as shown in Fig. 10 show how the electrodes can be switched between pCAP and GestIC® operation. In general, GestIC® GTX and pCAP TX can be different signals according to some embodiments. For simplicity reasons (lower HW and FW complexity) both signals can also be the same, according to other embodiments. In Fig. 10, different input stages are shown for pCAP-mode and GestIC®-mode operation.
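The time-multiplexed handover can be summarized as a small state machine: in GestIC®-mode the GTX signal is coupled to the outer electrodes and the inner matrix is driven with GTX to fix its potential; in pCAP-mode GTX is turned off, the outer electrodes are set to high impedance, and the inner matrix is scanned for touches. A hypothetical control sketch, with all electrode names and states invented for illustration:

```python
# Hypothetical mode-switching logic for the analog multiplexers of Fig. 10.
# The class, state strings, and electrode names are illustrative only.

class SensorFrontend:
    def __init__(self):
        self.mode = None
        self.gtx_enabled = False
        self.outer = {"NORTH": "hi-z", "EAST": "hi-z", "SOUTH": "hi-z", "WEST": "hi-z"}
        self.inner_matrix = "hi-z"

    def enter_gestic_mode(self):
        """Couple GTX to the outer electrodes; drive the inner matrix with GTX
        to put residual pCAP charges onto a defined potential."""
        self.mode = "gestic"
        self.gtx_enabled = True
        for name in self.outer:
            self.outer[name] = "gtx-coupled"
        self.inner_matrix = "gtx-driven"

    def enter_pcap_mode(self):
        """Turn off GTX and float the outer electrodes so they do not disturb
        the capacitive touch measurement of the inner matrix."""
        self.mode = "pcap"
        self.gtx_enabled = False
        for name in self.outer:
            self.outer[name] = "hi-z"
        self.inner_matrix = "touch-scan"

fe = SensorFrontend()
fe.enter_gestic_mode()
fe.enter_pcap_mode()
```

Alternating between the two methods on every measurement cycle would give the fast handover the text describes, since each transition leaves every electrode in a defined state.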
It is also possible to use (partly) the same input and signal conditioning circuits for GestIC®-mode and pCAP-mode.

According to various embodiments, the various electrode arrangements can be used for touch panel and display applications (e.g., up to 10" in diagonal) with, for example, an MGC3130 and successor 3D gesture and touch controllers, e.g. the MGC3430. As mentioned above, the GestIC® technology is used in most examples to implement a touchless gesture detection system. However, the various embodiments are not limited to such a system. Other systems that generate a quasi-static alternating electric field and detect disturbances, as well as other capacitive 3D detection systems, may benefit from a similar sensor arrangement.
PROBLEM TO BE SOLVED: To provide a system, method, and computer program for delivering services to wireless communication devices.

SOLUTION: The system tailors the services on the basis of the capabilities of a wireless device and the services to which a user subscribes. A server or other computer device receives from the wireless device capability data, or "flags," indicating the capability of the wireless device to access data, or to download and receive applications, provided over the network.
A system for interfacing with a communication device in a wireless communication environment, the system comprising: at least one application download server in a wireless network; and at least one wireless device in selective communication with the application download server. The wireless device may attempt to access one or more applications stored in the application download server via the wireless network, the wireless device comprising a hardware platform, an application execution runtime environment of the hardware platform configured to command physical hardware elements of the hardware platform, and one or more resident software applications. The wireless device transmits an access request for one or more applications, together with wireless device capability data, to the application download server. In response to receiving the access request and the wireless device capability data from the wireless device, the application download server determines access to one or more applications configured to be executed in the application execution runtime environment of the hardware platform, based on the wireless device capability data of the wireless device attempting access, the wireless device capability data including information sufficient to identify the application execution runtime environment of the hardware platform of the wireless device. The wireless device capability data comprises one or more flags transmitted from the wireless device, and the application download server uses the one or more flags to select an application to be made accessible by the wireless device.
The system of claim 1. The system of claim 1, wherein the wireless device capability data provides subscriber information. The system of claim 1, wherein the wireless device capability data provides information regarding the hardware platform of the wireless device. The system of claim 1, wherein the wireless device capability data provides information regarding resident software on the hardware platform of the wireless device. The system of claim 1, wherein the one or more applications for which access is attempted by the wireless device reside on the application download server. The system of claim 1, wherein the one or more applications for which access is attempted by the wireless device reside on another computer device on the wireless network, and the application download server determines access to the one or more applications resident on the other computer device based on the wireless device capability data, received by the application download server, of the wireless device attempting the access. A system for interfacing with a communication device in a wireless communication environment, comprising: at least one application download means for downloading one or more applications via a wireless network; and at least one wireless communication means for attempting to access one or more applications stored in the application download means via the wireless network, the at least one wireless communication means comprising a hardware platform and an application execution runtime environment of the hardware platform configured to command physical hardware elements of the hardware platform. The wireless communication means transmits an access request for one or more applications, together with capability data, to the application download means, and in response to receipt of the access request and capability data from the wireless communication means, the application download means determines access to one or more applications configured to be executed in the application execution runtime environment based on the capability data, the capability data containing information sufficient to identify the application execution runtime environment of the hardware platform of the wireless communication means. A method for customizing software applications available to a wireless device via a wireless network, comprising: generating wireless device capability data in the wireless device, the wireless device comprising a hardware platform, an application execution runtime environment configured to direct physical elements of the hardware platform, and one or more resident software applications, the wireless device capability data including information sufficient to identify the application execution runtime environment of the hardware platform of the wireless device; sending an access request from the wireless device to an application download server to attempt, via the wireless network, to access the one or more applications stored in the application download server, the one or more applications being configured to execute in the application execution runtime environment; transmitting the wireless device capability data from the wireless device to the application download server; and downloading the one or more applications from the application download server based on the wireless device capability data. The method of claim 9, further comprising downloading the one or more applications to the wireless device. The method of claim 9, wherein generating the wireless device capability data comprises generating one or more flags on the wireless device.
The method of claim 9, wherein transmitting the wireless device capability data comprises transmitting subscriber information. The method of claim 9, wherein transmitting the wireless device capability data comprises transmitting at least information regarding the hardware platform of the wireless device. The method of claim 9, wherein transmitting the wireless device capability data comprises transmitting at least information regarding resident software on the hardware platform of the wireless device. The method of claim 9, wherein attempting to access one or more applications via the wireless network comprises attempting to access one or more applications resident on the application download server. The method of claim 9, wherein attempting to access one or more applications via the wireless network comprises attempting to access one or more applications resident on a first application download server on the wireless network, transmitting the wireless device capability data comprises transmitting the wireless device capability data to a second computer device on the wireless network, and determining access to the one or more software applications of the wireless device comprises determining, at the second computer device, access to the one or more software applications of the wireless device based on the wireless device capability data. A method for providing a software application to a wireless device via a wireless network, comprising: generating wireless device capability data at the wireless device, the wireless device comprising a hardware platform, an application execution runtime environment configured to direct the physical elements of the hardware platform, and one or more resident software applications, the wireless device capability data including information sufficient to identify the application execution runtime environment of the hardware platform of the wireless device; sending an access request from the wireless device to an application download server to access, via the wireless network, one or more applications stored on the application download server, the one or more applications being configured to be executed in the application execution runtime environment; transmitting the wireless device capability data from the wireless device to the application download server; and downloading the one or more applications from the application download server based on the wireless device capability data. A wireless device that selectively communicates with an application download server via a wireless network and attempts to access one or more applications stored on the application download server, the one or more applications being configured to execute in an application execution runtime environment, the wireless device comprising: at least one hardware platform; an application execution runtime environment configured to direct physical elements of the hardware platform; and one or more resident software applications. The wireless device transmits an access request for one or more applications, together with wireless device capability data, to the application download server, the wireless device capability data including sufficient information to identify the application execution runtime environment of the hardware platform of the wireless device. The apparatus of claim 18, wherein the wireless device capability data comprises one or more flags transmitted from the wireless communication device. The apparatus of claim 18, wherein the wireless device capability data provides subscriber information. The apparatus of claim 18, wherein the wireless device capability data provides information regarding the hardware platform of the wireless device. The apparatus of claim 18, wherein the wireless device capability data provides information regarding software resident on the hardware platform of the wireless device. The apparatus of claim 18, wherein the wireless device transmits the wireless device capability data when attempting to access the application download server. The apparatus of claim 18, wherein the wireless device transmits the wireless device capability data when attempting to access an application resident on the application download server. A computer readable storage medium storing a computer program which, when executed, causes a wireless device including a hardware platform and one or more software applications to perform the operations of: generating wireless device capability data, the wireless device capability data including information sufficient to identify an application execution runtime environment, the application execution runtime environment being configured to instruct physical elements of the hardware platform of the wireless device; sending an access request from the wireless device to an application download server to attempt, via the wireless network, to access one or more applications stored in the application download server, the one or more applications being configured to be executed in the application execution runtime environment; and sending the wireless device capability data from the wireless device to the application download server. The computer readable storage medium of claim 25, wherein the program further causes the wireless device to download the one or more applications to the wireless device. The computer readable storage medium of claim 25, wherein the operation of generating the wireless device capability data includes the operation of generating one or more flags on the wireless device. The computer readable storage medium of claim 25, wherein the operation of transmitting the wireless device capability data includes an operation of transmitting subscriber information. The computer readable storage medium of claim 25, wherein the operation of transmitting the wireless device capability data comprises at least the operation of transmitting information regarding the hardware platform of the wireless device.
The computer readable storage medium of claim 25, wherein the operation of transmitting the wireless device capability data comprises at least the act of transmitting information regarding software resident on the hardware platform of the wireless device. The computer readable storage medium of claim 25, wherein the operation of transmitting the wireless device capability data includes the act of transmitting the wireless device capability data from the wireless device to the application download server when attempting to access the one or more applications. The computer readable storage medium of claim 25, wherein the operation of transmitting the wireless device capability data comprises an operation of transmitting the wireless device capability data from the wireless device to the application download server when attempting to download the application. A computer readable storage medium storing a computer program which, when executed, causes a computer device that provides a wireless device with access to one or more applications via a wireless network to perform the operations of: receiving from the wireless device an access request for the one or more applications stored in the computer device, the one or more applications being configured to be executed in an application execution runtime environment, the application execution runtime environment being configured to direct physical elements of the hardware platform of the wireless device; receiving wireless device capability data from the wireless device, the wireless device capability data including information sufficient to identify the application execution runtime environment of the wireless device; and, in response to receiving the access request and the wireless device capability data from the wireless device, determining the access of the wireless device to the one or more applications based on the wireless device capability data. The computer readable storage medium of claim 33, wherein the program further instructs the computer device to send the one or more applications to the wireless device. The computer readable storage medium of claim 33, wherein the operation of receiving the wireless device capability data comprises an operation of receiving a flag from the wireless device. The computer readable storage medium of claim 33, wherein the operation of determining access to the one or more applications of the wireless device based on the wireless device capability data comprises an operation of determining access based at least on subscriber information. The computer readable storage medium of claim 33, wherein the operation of determining the access comprises determining the access based at least on information about the hardware platform of the wireless device. The computer readable storage medium of claim 33, wherein the operation of determining the access comprises determining the access based at least on information regarding software resident on the hardware platform of the wireless device. The computer readable storage medium of claim 34, wherein the operation of transmitting the one or more applications to the wireless device comprises an operation of transmitting the one or more applications resident on the computer device to the wireless device. The computer readable storage medium of claim 34, wherein the operation of transmitting the one or more applications to the wireless device comprises an operation of transmitting one or more applications residing on other computer devices on the wireless network to the wireless device. A device for customizing a software application that can be used by a wireless device via a wireless network, comprising: means for generating wireless device capability data in the wireless device, the wireless device comprising at least a hardware platform and an application execution runtime environment operating on the hardware platform, the wireless device capability data including information about the hardware platform of the wireless device and the application execution runtime environment, the application execution runtime environment being configured to command physical hardware elements of the hardware platform of the wireless device; means for attempting access from the wireless device to one or more software applications stored on the application download server via the wireless network by sending an access request through a base station to a mobile switching center; means for transmitting, with the access request, the wireless device capability data via the base station to the mobile switching center; and means for downloading the one or more applications from the application download server based on the wireless device capability data. An apparatus for providing a software application via a wireless network to a wireless device having a hardware platform and an application execution runtime environment operating on the hardware platform, comprising: means for receiving an access request for one or more software applications from the wireless device via a base station in the wireless network; means for receiving wireless device capability data from the wireless device via the base station, the wireless device capability data including information on the hardware platform of the wireless device and the application execution runtime environment, the application execution runtime environment being configured to command physical hardware elements of the hardware platform of the wireless device; and means for determining, in response to receiving the access request and the wireless device capability data, access to the one or more software applications of the wireless device based on the wireless device capability data.
System and method for application and application metadata filtering based on wireless device capabilities

The present invention relates generally to wireless communications. In particular, the present invention relates to data communication between remote computing devices via wireless networks. Wireless communication technology is developing rapidly. In the not-so-distant past, when wireless communication devices such as mobile phones first appeared on the market, they all used analog technology. One analog technology that has been used is the Advanced Mobile Phone Service (AMPS). Initially, communication between the wireless handset and the base station (BS) was based on frequency division multiple access (FDMA) technology, and the number of users in a communication cell was limited by the number of available channels. Today, similar communications can use various techniques, such as time division multiple access (TDMA), code division multiple access (CDMA), or the Global System for Mobile Communications (GSM®). Analog technology is being replaced by digital technology in many places; wireless devices therefore communicate voice and data in packets via digital networks. As the number of users using wireless communication has increased, the number of different types of wireless devices has also increased substantially. Today, there are hundreds of models of mobile telephone devices available on the market, and some telecommunications service providers even offer their own proprietary models. Currently, little information is passed between wireless devices and cellular base stations, often just enough to synchronize the timing signals needed for data packet communications. In many cases, the wireless device is nothing more than a display device that receives commands from the base station.
With the introduction of more advanced wireless devices with advanced capabilities, the need for service providers to know more about the capabilities of the wireless devices used by their subscribers, in order to deliver better services, has been increasing. Thus, it would be beneficial to provide a system and method for wireless service providers to learn about the ability of wireless devices to exchange information with a base station. Such systems and methods should provide service providers with the appropriate capability data without interfering with the functionality of the wireless device or increasing the complexity of manufacturing the device. The present invention is therefore primarily directed to such a system and method for communicating wireless device capabilities. The present invention is a system, method, and computer program for delivering services to a wireless communication device based on the capabilities of the wireless device and the services to which the user subscribes. In the system, at least one wireless device is in selective communication with an application download server, and the wireless device selectively attempts to access one or more applications via the wireless network; the applications may reside on the application download server or on another computer device on the wireless network. The wireless device includes a computer platform and one or more resident software applications and selectively communicates wireless device capability data to the application download server, and the application download server selectively determines access or download to one or more applications based on the wireless device capability data of the wireless device attempting access.
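The server-side determination described above can be sketched as a simple requirements check. This is a hedged illustration only: the patent does not specify a data format, so the dictionary fields, flag names, and the `grant_access` helper below are assumptions invented for the example.

```python
# Hypothetical sketch of the download server's access decision: compare a
# device's reported capability data against each application's requirements
# before granting access. All field names here are illustrative.

def grant_access(device_caps, app_requirements):
    """Return True if every requirement is satisfied by the device's flags."""
    return all(device_caps.get(key) == value
               for key, value in app_requirements.items())

# Example capability data a wireless device might report.
caps = {"runtime_env": "RTE-2.0", "browser": True, "subscriber_tier": "premium"}

# Per-application requirements kept on the download server.
web_browser_app = {"runtime_env": "RTE-2.0", "browser": True}
basic_game = {"runtime_env": "RTE-2.0"}

print(grant_access(caps, web_browser_app))  # True
print(grant_access({"runtime_env": "RTE-1.0"}, web_browser_app))  # False
```

The same check works whether the applications reside on the download server itself or on another computer device whose access the server controls.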
The system can accordingly customize the applications, data, graphics, and so forth sent to the wireless device, with the knowledge that the device has the necessary capabilities to execute the data. A method for customizing software applications available to a wireless device via a wireless network includes generating wireless device capability data at the wireless device, attempting to access one or more applications from the wireless device via the wireless network, transmitting the wireless device capability data from the wireless device to the application download server, and determining, based on the wireless device capability data, the access of the wireless device to the one or more applications for purposes such as download, execution, and display. The wireless device capability data may be a set of capability flags sent from the wireless device at the first contact with the application download server, or may be generated when the wireless device attempts to access or download a particular application or data. The present invention therefore advantageously provides a system and method for the wireless service provider to learn about the ability of the wireless device to exchange information with the server and to selectively download applications and data therefrom. The system and method thus ensure that the applications and data made available to the wireless device computer platform are compatible with it. Further, wireless device capability data can be communicated without adding overhead to wireless device operation. Other objects, advantages, and features of the present invention will become apparent after review of the hereinafter set forth Brief Description of the Drawings, Detailed Description of the Invention, and the appended claims.

FIG. 1 illustrates the architecture of a wireless communication network.
FIG. 2 is a block diagram illustrating the architecture of a system using a wireless device that includes a runtime environment.
FIG. 3 is a flowchart of a boot process for the wireless device.
FIG. 4 is a flowchart of the registration process in the MSC.
FIG. 5 is a flowchart of the feature activation process of the wireless device.
FIG. 6 is a flowchart of a process that executes on the MSC to check for feature activation requests.

Detailed Description of the Invention

In this description, the terms "communication device", "wireless device", "handheld phone" and "receiver" are used interchangeably, and the term "application" is meant to encompass any discrete segment of software, e.g., data, executables, graphics, menus, libraries, and others. FIG. 1 illustrates a communication network 100 used in accordance with the present invention. Communication network 100 includes a wireless communication network, a public switched telephone network (PSTN) 110, and the Internet 120. Wireless devices, such as cellular telephones, pagers, personal digital assistants (PDAs), and other computing devices with wireless connectivity, have themselves increased in capability and, as a result, have different computer platforms and runtime environments for running software provided by vendors. In addition to receiving e-mail, paging messages, and answering-machine messages at the wireless device, the user can browse the Internet and download applications and data from an application download server accessible via the cell itself, other wireless devices in the cell, or an Internet connection. The end-user of the wireless device can thus enjoy the various services offered by his wireless service provider by subscribing to them. For certain services, the user can access certain applications only if the wireless device has the ability to receive those services.
For example, to browse the Internet, a wireless device must have some type of browser to view web pages. When implemented in a cellular telecommunications environment, the wireless communication network includes a plurality of communication towers 102, each connected to a base station (BS) 104 and serving users with communication devices 106. The communication device (handset) 106 may be a cellular telephone, pager, PDA, laptop computer, or other handheld, fixed, or portable communication device using wireless and cellular telecommunications networks. The commands and data input by each user are transmitted as digital data to communication tower 102. Communication between a user using the communication device 106 and the communication tower 102 may use various techniques, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), the Global System for Mobile Communications (GSM), or other protocols used in wireless or data communication networks. Data from each user is sent from the communication tower 102 to the base station (BS) 104 and forwarded to the mobile switching center (MSC) 108. The MSC 108 may be connected to a public switched telephone network (PSTN) 110. A user can use his communication device 106 to set up voice communication with a telephone set connected to the PSTN 110 or with another handset 106 in the wireless network. Users can also request special applications or features from the MSC 108. FIG. 2 illustrates a structure 200 of communication between the MSC 108 and the communication device 106. The MSC 108 is connected to a server 112 on which special applications are stored. Communication device 106 includes a hardware platform 214 and a runtime environment 212 operating on the hardware platform 214. Runtime environment 212 is an operating-system-like layer of executing software that directs the physical hardware elements of the wireless device.
The presence of runtime environment 212 facilitates the development of other software applications 206 and enables communication device 106 to support a variety of user applications 206. A communication device 106 having a runtime environment 212 can download special applications 206 from the MSC 108, which run locally on the communication device 106 itself. A special application 206 may be a web browser, a video game, a multi-user game, etc. Each application 206 can be tailored to a particular hardware platform. Communication device 106 communicates with MSC 108 via uplink 210 and downlink 208. In one embodiment, during voice communication between the communication device 106 and another handset, there are voice and control channels configured on each link. The MSC 108 has access to at least one application download server 112 where applications 206 and data are stored. The MSC 108 thus receives a communication attempt or request for an application 206 from the uplink 210 and routes the application 206 to the communication device 106 via the downlink 208. To ensure that the wireless device gains access to an executable or authorized application 206, the wireless device communicates its capability data to the server 112 for appropriate access to applications residing on the server 112 or on another computing device of the wireless network to which the server 112 can control access. FIG. 3 is a block diagram 300 of one embodiment of a registration process for the communication device 106. After activation (step 302), the communication device 106 performs a self-diagnosis, as shown in step 304, and sends a registration request 306 to the service provider serving the area. As part of the registration procedure, as shown in step 308, the communication device 106 sends capability data as a string of capability flags to the MSC 108.
The capability flags convey the hardware device, runtime environment, and/or end-user information to the MSC. Some capability flags can be set at the manufacturer, while others can be set by the end user or service provider. For example, flags indicating the hardware model or other information regarding the computer platform can be set when the communication device 106 is manufactured. A flag indicating the release version of the runtime environment can be initially set by the manufacturer and changed when a new version of the runtime environment is installed on the communication device 106. The wireless service provider can set flags that reflect the type of service or price subscription to which the user subscribes, the end-user's age, or any other metadata used to filter wireless device capabilities in determining application access (metadata filtering). FIG. 4 is a flow chart 400 of the registration procedure in the MSC 108. The MSC 108 continually checks for registration requests, as indicated in step 402. If a registration request is received, the MSC receives registration information, as indicated in step 404. The registration information may include a mobile identification number (MIN) and an electronic identification number (EIN). After receiving the subscriber information, as shown in step 406, the MSC retrieves and verifies the subscriber information. If the user is away from his home network, his subscriber information can be retrieved from his home location register (HLR) and maintained in a visitor location register (VLR). The information retrieved from the HLR generally determines the services available to the end-user. However, the services available to the user may be further modified according to the physical capabilities of the communication device used by the user.
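The "string of capability flags" sent at registration (step 308) could be encoded in many ways; the patent does not fix a concrete format. The sketch below invents a delimited-string layout purely to make the idea concrete; the field names and delimiter are assumptions, not part of the disclosure.

```python
# Illustrative encoding of the capability flag string a device might send
# on the uplink, and the MSC-side decoding before the flags are saved
# (step 410). Field layout is invented for the example.

FIELDS = ["hw_model", "runtime_version", "subscriber_tier", "adult_content"]

def encode_flags(info):
    # Join the fields into a single delimited string for transmission.
    return ";".join(str(info.get(f, "")) for f in FIELDS)

def decode_flags(flag_string):
    # The MSC splits the string back into named fields for storage.
    return dict(zip(FIELDS, flag_string.split(";")))

device_info = {"hw_model": "HS-200", "runtime_version": "2.1",
               "subscriber_tier": "family", "adult_content": "blocked"}

wire = encode_flags(device_info)
print(wire)                                   # HS-200;2.1;family;blocked
print(decode_flags(wire)["runtime_version"])  # 2.1
```

Note that flags set at manufacture (hardware model), by the device (runtime version), and by the provider (subscription tier, content restrictions) all travel in the same string, matching the mixed origins described above.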
As shown in step 408, information regarding the communication device is passed to the MSC via the capability flags, and as shown in step 410, the capability flags are saved by the MSC. FIG. 5 is a flow chart 500 of an activation request process in wireless communication device 106. The user enables a function, as shown in step 502, and the communication device sends a request to the MSC (step 504). After sending the request, as shown in step 506, the communication device checks whether software for the function has been received. If the function is received, the communication device performs the function, as shown in step 508; if the function is not received, the communication device displays a message to the user, as shown in step 510. As communication devices gain computing power and wireless transmission bandwidth increases, it is often more efficient to execute functions on the wireless communication device 106 itself instead of on a server connected to the MSC. When the functions are performed on a server, the wireless communication device 106 performs the basic functions of the input and output devices. However, for functions performed locally on the communication device, the function software must be compatible with the communication device, and the MSC must know which type of communication device the user is using. FIG. 6 is a flow chart 600 illustrating one embodiment of a request process at the MSC. After receiving the request from the communication device, as shown in step 602, the MSC receives user information, as shown in step 604. As shown in step 606, the MSC checks the capability flags and compares them with the requirements for the requested function to determine whether a download to the wireless communication device 106 is advisable. The MSC can also check at this step whether the user has subscribed to the requested function.
If the user is eligible to receive the requested function, then, as shown in step 608, the MSC selects a version of the function or application that is compatible with the user's communication device. After selecting the appropriate version of the application, as shown in step 610, the MSC dispatches the function to the user, e.g., a menu, a graphic screen, or a version of the application. If the user is not eligible to receive the function, an error message is sent to the user, as indicated at step 612.

In operation, an end-user using a communication device 106 with a factory-integrated runtime environment can upgrade the runtime environment by downloading it from the service provider. After upgrading the runtime environment to a new version, the communication device 106 updates its internal flag to reflect the new version of the runtime environment. The service provider (MSC) also updates its memory to reflect the new version of the runtime environment in parallel with the update on the wireless communication device 106. Alternatively, the service provider can download the subscriber information to the communication device 106 by setting appropriate flags to reflect the subscriber information. By storing subscriber information in the communication device 106, roaming is made simpler, as the serving wireless service provider needs to retrieve only a small amount of information from the HLR.

In another embodiment, for a subscriber having multiple communication devices 106 under one service plan, such as a family plan, the subscriber can specify special features for different communication devices.
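The request handling of steps 602 through 612 described above can be sketched in Python. This is a hypothetical illustration only: the flag fields, version metadata, and matching rules are assumptions, since the specification does not define them:

```python
# Illustrative sketch of the MSC-side request process (steps 602-612).
# All names, fields, and matching rules here are hypothetical.
def select_version(flags, versions):
    """Return the first version whose requirements the device's
    capability flags satisfy, or None (error message, step 612)."""
    if not flags.get("subscribed"):           # eligibility check (step 606)
        return None
    for v in versions:                        # version selection (step 608)
        if (flags["runtime_version"] >= v["min_runtime"]
                and flags["hardware_model"] in v["supported_models"]):
            return v["name"]
    return None

flags = {"hardware_model": "model-A", "runtime_version": 2, "subscribed": True}
versions = [
    {"name": "game-v2", "min_runtime": 3, "supported_models": {"model-A"}},
    {"name": "game-v1", "min_runtime": 1, "supported_models": {"model-A", "model-B"}},
]
print(select_version(flags, versions))  # game-v1
```

Here the newer game-v2 is skipped because the device's runtime environment flag is below its minimum, so the MSC falls back to the compatible game-v1, mirroring step 608's "select a version compatible with the user's communication device."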
For example, the subscriber may be able to prevent a designated communication device 106 used by a minor from accessing adult programming.

In a further embodiment, when the subscriber is roaming away from the subscriber's home coverage area, activating his communication device causes the device to first perform a self-diagnosis and then register itself with the local wireless service provider. When registering with the wireless service provider (MSC), the communication device sends capability data, such as a string of flags, to the wireless service provider. The flags inform the wireless service provider of information about the hardware, the version of the runtime environment, and the user. The flags can then be used by the wireless service provider to filter the software provided to the communication device 106, beginning with the first mutual information exchange with the cell.

Wireless device capability data may be retransmitted from server 112 to other computing devices on the wireless network as needed. For example, the end-user can request a multi-user interactive game from the wireless service provider, the request including information about the desired opponent. Before checking whether the opponent is available and wants to join, the wireless service provider checks whether the user is eligible to play this game. That is, the wireless service provider checks the information from the flags to verify that the user has subscribed to the game and is authorized to play it. The server 112 then relays the capability data of any available opponents so that the end-user can act with such knowledge. For example, if the end-user's communication device 106 is faster than the opponent's device, the server 112 can relay the capability data so that this can be displayed.
Alternatively, if the opponent uses another language, the wireless service provider can indicate that any interaction is translated and therefore may not translate well into the end-user's language.

As such, the system provides a method for customizing the software applications available to communication device 106 over the wireless network. The method includes generating wireless device capability data at communication device 106, which includes computer platform 214 and one or more resident software applications; attempting to access one or more applications from the wireless device via the wireless network; sending the wireless device capability data from communication device 106 to an application download server, such as server 112; and then determining the access of the communication device 106 to the one or more applications based on the wireless device capability data. The method may further include downloading one or more applications to computer platform 214 of communication device 106.

Generating the wireless device capability data may comprise generating one or more flags on the communication device 106, transmitting at least subscriber information, transmitting at least information regarding the computer platform 214 of the wireless communication device 106, or transmitting at least information regarding software resident on computer platform 214 of communication device 106. Further, attempting to access one or more applications via the wireless network may comprise attempting to access one or more applications resident on the application download server, such as server 112, or attempting to access one or more applications resident on a first application download server on the wireless network.
If the attempt is to access the first application download server, the step of transmitting the wireless device capability data comprises transmitting the wireless device capability data to a second computer device over the wireless network, and the step of determining the access of the communication device 106 to the one or more applications comprises determining, at the second computer device, the access of the communication device 106 to the one or more applications based on the wireless device capability data.

Insofar as the method is executable on the computer platform of a computing device such as server 112, the present invention includes a program resident in a computer readable medium, where the program directs a server or other computing device having a computer platform to perform the steps of the method. The computer readable medium may be the memory of server 112, the memory of communication device 106, or a coupled database. Furthermore, the computer readable medium may be a secondary storage medium that can be loaded onto the wireless device's computer platform, such as a magnetic disk or tape, an optical disk, a hard drive, flash memory, or other storage media as known in the art.

In the context of FIGS. 3-6, the method may be performed by the operating portion(s) of the wireless network, such as, for example, the server 112, executing a sequence of machine readable instructions. The instructions may reside in various types of primary, secondary, or tertiary media that contain signals or store data. The media may comprise, for example, RAM (not shown) accessible by, or resident in, components of the wireless network.
The instructions, whether contained in RAM, on a disk, or on another secondary storage medium, may be stored on a variety of machine readable data storage media, such as DASD storage devices (e.g., a conventional "hard drive" or a RAID array), magnetic tape, electronic read-only memory (e.g., ROM, EPROM, or EEPROM), flash memory cards, optical storage (e.g., CD-ROM, WORM, DVD, digital optical tape), paper "punch" cards, or other suitable data storage media, including digital and analog transmission media.

Although the present invention has been shown and described in detail with reference to its preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as set forth in the claims. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
Embodiments disclosed herein include nanowire and nanoribbon devices with non-uniform dielectric thicknesses. The semiconductor device comprises a substrate (106) and a plurality of first semiconductor layers (110A) in a vertical stack over the substrate. The first semiconductor layers have a first spacing (SA). A first dielectric (115A) surrounds each of the first semiconductor layers, and the first dielectric has a first thickness (TDA). The semiconductor device further comprises a plurality of second semiconductor layers (110B) in a vertical stack over the substrate, where the second semiconductor layers may have a second spacing (SB) that is greater than the first spacing. A second dielectric (115B) surrounds each of the second semiconductor layers, and the second dielectric has a second thickness (TDB) that is greater than the first thickness.
1. A semiconductor device, comprising:
a substrate;
a plurality of first semiconductor layers in a vertical stack over the substrate, wherein the first semiconductor layers have a first spacing;
a first dielectric surrounding each of the first semiconductor layers, wherein the first dielectric has a first thickness;
a plurality of second semiconductor layers in a vertical stack over the substrate, wherein the second semiconductor layers have a second spacing that is greater than the first spacing; and
a second dielectric surrounding each of the second semiconductor layers, wherein the second dielectric has a second thickness that is greater than the first thickness.
2. The semiconductor device of claim 1, wherein the first semiconductor layers and the second semiconductor layers are nanowires or nanoribbons.
3. The semiconductor device of claim 1 or 2, wherein a surface facing the substrate of a bottommost first semiconductor layer is aligned with a surface facing the substrate of a bottommost second semiconductor layer.
4. The semiconductor device of claim 1 or 2, wherein a surface facing the substrate of a bottommost first semiconductor layer is misaligned with a surface facing the substrate of a bottommost second semiconductor layer.
5. The semiconductor device of claim 1, 2, 3 or 4, wherein the second dielectric comprises:
a first dielectric layer over the second semiconductor layers; and
a second dielectric layer over the first dielectric layer.
6. The semiconductor device of claim 5, wherein the first dielectric layer is an oxide, and wherein the second dielectric layer is a dipole material.
7. The semiconductor device of claim 6, wherein the first dielectric layer is SiO2 or HfO2, and wherein the second dielectric layer comprises one or more of La2O3, ZrO2, and TiO2.
8. A method of forming a semiconductor device, comprising:
disposing a multilayer stack of alternating semiconductor layers and sacrificial layers over a substrate, wherein the multilayer stack comprises a first region and a second region, and wherein the multilayer stack in the first region is different than the multilayer stack in the second region;
patterning the multilayer stack into a plurality of fins, wherein a first fin is in the first region and a second fin is in the second region;
disposing a sacrificial gate structure over each of the first fin and the second fin, wherein the sacrificial gate structures define a first channel region of the first fin and a second channel region of the second fin;
disposing pairs of source/drain regions on opposite ends of each sacrificial gate structure;
removing the sacrificial layers from the channel regions of the first fin and the second fin;
disposing a first gate dielectric over the semiconductor layers in the first channel region, wherein the first gate dielectric has a first thickness; and
disposing a second gate dielectric over the semiconductor layers in the second channel region, wherein the second gate dielectric has a second thickness that is greater than the first thickness.
9. The method of claim 8, wherein a thickness of the semiconductor layers in the first region is substantially similar to a thickness of the semiconductor layers in the second region, and wherein a spacing between semiconductor layers in the second region is greater than a spacing between semiconductor layers in the first region.
10. The method of claim 9, wherein the second gate dielectric is disposed with an atomic layer deposition (ALD) process.
11. The method of claim 8, wherein a thickness of the semiconductor layers in the first region is smaller than a thickness of the semiconductor layers in the second region, and wherein a spacing between semiconductor layers in the first region is substantially similar to a spacing between semiconductor layers in the second region.
12. The method of claim 11, wherein the second gate dielectric is disposed by oxidizing the semiconductor layers in the second channel region.
13. The method of claim 8, 9, 10, 11 or 12, wherein a number of semiconductor layers in the first region is an integer multiple of the number of semiconductor layers in the second region.
TECHNICAL FIELD

Embodiments of the present disclosure relate to semiconductor devices, and more particularly to high voltage nanoribbon and nanowire transistors with thick gate dielectrics.

BACKGROUND

As integrated device manufacturers continue to shrink the feature sizes of transistor devices to achieve greater circuit density and higher performance, there is a need to manage transistor drive currents while reducing short-channel effects, parasitic capacitance, and off-state leakage in next-generation devices. Non-planar transistors, such as fin and nanowire-based devices, enable improved control of short-channel effects. For example, in nanowire-based transistors the gate stack wraps around the full perimeter of the nanowire, enabling fuller depletion in the channel region and reducing short-channel effects due to a steeper sub-threshold current swing (SS) and smaller drain induced barrier lowering (DIBL).

Different functional blocks within a die may need optimization for different electrical parameters. In some instances, high voltage transistors for power applications need to be implemented in conjunction with high speed transistors for logic applications. High voltage transistors typically suffer from high leakage current. Accordingly, high voltage applications typically rely on fin-based transistors. Fin-based transistors allow thicker gate dielectrics compared to nanowire devices.
In nanowire devices, a thicker oxide results in the space between nanowires being reduced to the point that little or no gate metal can be disposed between the nanowires.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1A is a cross-sectional illustration of stacked nanoribbons with variable gate dielectric thicknesses, in accordance with an embodiment.
Figure 1B is a cross-sectional illustration of Figure 1A along line B-B', in accordance with an embodiment.
Figure 1C is a cross-sectional illustration of Figure 1A along line C-C', in accordance with an embodiment.
Figure 2A is a cross-sectional illustration of stacked nanoribbons with variable gate dielectric thicknesses, in accordance with an additional embodiment.
Figure 2B is a cross-sectional illustration of Figure 2A along line B-B', in accordance with an embodiment.
Figure 3A is a zoomed-in cross-sectional illustration of a nanoribbon surrounded by a gate dielectric layer, in accordance with an embodiment.
Figures 3B-3D are cross-sectional illustrations that more clearly depict the structure of the gate dielectric layer, in accordance with an embodiment.
Figures 4A-4P are cross-sectional illustrations depicting a process to form nanoribbon transistors with non-uniform gate dielectric thicknesses, where the gate dielectric is disposed with an atomic layer deposition (ALD) process, in accordance with an embodiment.
Figures 5A-5L are cross-sectional illustrations depicting a process to form nanoribbon transistors with non-uniform gate dielectric thicknesses, where the gate dielectric is disposed with an oxidation process, in accordance with an embodiment.
Figure 6 illustrates a computing device in accordance with one implementation of an embodiment of the disclosure.
Figure 7 is an interposer implementing one or more embodiments of the disclosure.

EMBODIMENTS OF THE PRESENT DISCLOSURE

Described herein are semiconductor devices with high voltage nanoribbon and nanowire transistors with thick gate dielectrics, in accordance with
various embodiments. In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present invention may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that the present invention may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.

Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the present invention; however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.

As noted above, high-voltage transistors are susceptible to high leakage currents. Such transistors are typically implemented with fin-based transistors that allow for thicker gate dielectrics. Fin-based transistors do not provide the same benefits as nanowire devices (e.g., improved short channel effects), and therefore are not an optimal solution. Accordingly, embodiments disclosed herein include nanoribbon (or nanowire) devices with increased gate dielectric thicknesses to reduce leakage. Embodiments disclosed herein provide additional clearance between the nanoribbons to allow the formation of thick gate dielectrics.
Such embodiments may also be fabricated in parallel with logic devices that require a smaller spacing between the nanoribbon channels.

In an embodiment, the high-voltage devices may be fabricated in parallel with logic devices by forming a material stack that is segmented into a first region and a second region. In one embodiment, the first region includes semiconductor layers that are spaced at a first spacing, and the second region includes semiconductor layers that are spaced at a second, larger, spacing. The increased spacing in the second region provides clearance for deposition of a thick gate dielectric using an atomic layer deposition (ALD) process. In another embodiment, the first region includes semiconductor layers that have a first thickness, and the second region includes semiconductor layers that have a second, larger, thickness. The increased thickness of the semiconductor layers in the second region provides additional margin for an oxidation process. That is, a portion of the thicker semiconductor layers in the second region is consumed to form a thick gate dielectric.

Nanoribbon devices are described in greater detail below. However, it is to be appreciated that substantially similar devices may be formed with nanowire channels. A nanowire device may include devices where the channel has a width dimension and a thickness dimension that are substantially similar, whereas a nanoribbon device may include a channel that has a width dimension that is substantially larger or substantially smaller than a thickness dimension. As used herein, "high-voltage" may refer to voltages of approximately 1.0V or higher. Particular embodiments may include high-voltage devices that operate at approximately 1.2V or greater.

Referring now to Figure 1A, a cross-sectional illustration of an electronic device 100 is shown, in accordance with an embodiment. In an embodiment, the electronic device 100 is formed on a substrate 106.
The substrate 106 may include a semiconductor substrate and an isolation layer over the semiconductor substrate. In an embodiment, an underlying semiconductor substrate represents a general workpiece object used to manufacture integrated circuits. The semiconductor substrate often includes a wafer or other piece of silicon or another semiconductor material. Suitable semiconductor substrates include, but are not limited to, single crystal silicon, polycrystalline silicon, and silicon on insulator (SOI), as well as similar substrates formed of other semiconductor materials, such as substrates including germanium, carbon, or group III-V materials. The substrate 106 may also comprise an insulating material (e.g., an oxide or the like) that provides isolation between neighboring transistor devices.

In Figure 1A, a cross-sectional illustration of a plurality of processed fins 108 is shown. That is, the residual nanoribbons 110 are shown following the removal of sacrificial layers (not shown) between the nanoribbons 110. For example, the cross-sectional illustration in Figure 1A may be representative of a cross-section through a channel region of nanoribbon transistors, with the gate electrode removed. The nanoribbons 110 may comprise any suitable semiconductor materials. For example, the nanoribbons 110 may comprise silicon or group III-V materials.

In an embodiment, the first nanoribbons 110A may have dimensions that are substantially similar to the second nanoribbons 110B. For example, the first nanoribbons 110A may have a thickness TSA and the second nanoribbons 110B may have a thickness TSB that is substantially similar to the thickness TSA. The widths of the first nanoribbons 110A and the second nanoribbons 110B may also be similar to each other in some embodiments.

In an embodiment, first fins 108A may be used for logic devices, and second fins 108B may be used for high-voltage devices.
In order to provide optimal performance, a thickness TDA of the dielectric 115A around the nanoribbons 110A may be less than a thickness TDB of the dielectric 115B around the nanoribbons 110B. The dielectric 115A may have a thickness TDA that is approximately 3nm or less, and the dielectric 115B may have a thickness TDB that is approximately 3nm or greater. In a particular embodiment, the thickness TDB may be approximately 6nm or greater.

As noted above, the larger thickness of the dielectric 115B will pinch off the gaps between the nanoribbons 110 or otherwise prevent them from being filled with gate metal. For example, the spacing SA between nanoribbons in the first fins 108A may be representative of a typical spacing for nanoribbon logic devices (e.g., between approximately 3nm and approximately 8nm). The thick dielectrics 115B would merge when such a spacing is used. In order to accommodate the dielectric 115B, the second fins 108B comprise nanoribbons 110B that have a spacing SB that is greater than the spacing SA. The spacing SB may be 8nm or greater, or 12nm or greater. In some embodiments, the spacing SB may be an integer multiple of the thickness TSA of the first nanoribbons 110A. In a particular embodiment, the spacing SB may be twice the thickness TSA of the first nanoribbons 110A.

In an embodiment, a bottommost first nanoribbon 110A in a first fin 108A is aligned with a bottommost second nanoribbon 110B in a second fin 108B. For example, the bottom surfaces 111 (i.e., the surfaces facing toward the substrate 106) may be substantially coplanar with each other. In an embodiment, one or more of the second nanoribbons 110B in a second fin 108B may be misaligned from first nanoribbons 110A in a first fin 108A.
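The pinch-off constraint described above is simple arithmetic: the room left for gate metal between two adjacent nanoribbons is the spacing minus the dielectric grown on each of the two facing surfaces. A minimal sketch (the specific numbers are illustrative, drawn from the approximate ranges given in this passage, and the simple subtraction model is an assumption):

```python
# Illustrative arithmetic only: gap left for gate metal between two
# adjacent nanoribbons, assuming dielectric of thickness `dielectric_nm`
# grows on each of the two facing surfaces.
def metal_gap_nm(spacing_nm, dielectric_nm):
    """Vertical room left for gate metal between adjacent nanoribbons."""
    return spacing_nm - 2 * dielectric_nm

# Logic-style spacing SA ~ 6 nm with a thin (~1 nm) dielectric 115A:
print(metal_gap_nm(6, 1))   # 4  -> gate metal can fill the gap
# Thick (~6 nm) dielectric 115B at the same spacing: dielectrics merge.
print(metal_gap_nm(6, 6))   # -6 -> pinched off, no room for metal
# Increased high-voltage spacing SB ~ 16 nm restores clearance:
print(metal_gap_nm(16, 6))  # 4  -> gate metal fits again
```

A zero or negative gap corresponds to the merged dielectrics described above, which is why the second fins 108B require the larger spacing SB.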
For example, the topmost second nanoribbon 110B in a second fin 108B is positioned (in the Z-direction) between first nanoribbons 110A in a first fin 108A.

In the illustrated embodiment, a number of first nanoribbons 110A in a first fin 108A may be different from a number of second nanoribbons 110B in a second fin 108B. For example, the number of first nanoribbons 110A in each first fin 108A is greater than the number of second nanoribbons 110B in each second fin 108B. In a particular embodiment, the number of first nanoribbons 110A in each first fin 108A is an integer multiple (e.g., 2X, 3X, etc.) of the number of second nanoribbons 110B in each second fin 108B. For example, Figure 1A illustrates four first nanoribbons 110A in each first fin 108A and two second nanoribbons 110B in each second fin 108B.

Referring now to Figures 1B and 1C, cross-sectional illustrations of Figure 1A along lines B-B' and C-C' are shown, respectively, in accordance with an embodiment. Figures 1B and 1C include more detail than Figure 1A. Particularly, Figures 1B and 1C provide illustrations of transistor devices 103A and 103B, respectively, that are formed along the fins 108A and 108B.

Referring now to Figure 1B, a cross-sectional illustration of a first nanoribbon transistor 103A is shown, in accordance with an embodiment. The nanoribbon transistor 103A may comprise a vertical stack of nanoribbons 110A. The nanoribbons 110A extend between source/drain regions 120. A gate structure may define a channel region of the transistor 103A. The gate structure may comprise a gate dielectric 115A and a gate electrode 130. The gate dielectric 115A may surround the nanoribbons 110A and line the spacers 122 on either side of the gate electrode 130. In an embodiment, the gate electrode 130 surrounds the nanoribbons 110A to provide gate all around (GAA) control of the transistor 103A. In an embodiment, the first nanoribbon transistor 103A is used as part of a logic block.
Accordingly, the first nanoribbon transistor 103A is optimized for fast switching speeds, and may have a substantially thin gate dielectric 115A.

In an embodiment, the source/drain regions 120 may comprise an epitaxially grown semiconductor material. The source/drain regions 120 may comprise a silicon alloy. In some implementations, the source/drain regions 120 comprise a silicon alloy that may be in-situ doped silicon germanium, in-situ doped silicon carbide, or in-situ doped silicon. In alternate implementations, other silicon alloys may be used. For instance, alternate silicon alloy materials that may be used include, but are not limited to, nickel silicide, titanium silicide, and cobalt silicide, possibly doped with one or more of boron and/or aluminum. In other embodiments, the source/drain regions 120 may comprise alternative semiconductor materials (e.g., semiconductors comprising group III-V elements and alloys thereof) or conductive materials.

In an embodiment, the gate dielectric 115A may be, for example, any suitable oxide such as silicon dioxide or high-k gate dielectric materials. Examples of high-k gate dielectric materials include, for instance, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric layer to improve its quality when a high-k material is used.

In an embodiment, the gate electrode 130 may comprise a workfunction metal. For example, when the metal gate electrode 130 will serve as an N-type workfunction metal, the gate electrode 130 preferably has a workfunction that is between about 3.9 eV and about 4.2 eV.
N-type materials that may be used to form the metal gate electrode 130 include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, and metal carbides that include these elements, e.g., titanium carbide, zirconium carbide, tantalum carbide, hafnium carbide, and aluminum carbide. Alternatively, when the metal gate electrode 130 will serve as a P-type workfunction metal, the gate electrode 130 preferably has a workfunction that is between about 4.9 eV and about 5.2 eV. P-type materials that may be used to form the metal gate electrode 130 include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. The gate electrode 130 may also comprise a workfunction metal and a fill metal (e.g., tungsten) over the workfunction metal.

Referring now to Figure 1C, a cross-sectional illustration of a second transistor 103B is shown, in accordance with an embodiment. In an embodiment, the second transistor 103B may be similar to the first transistor 103A, with the exception that fewer nanoribbons 110B are included and the thickness of the gate dielectric 115B is increased. In an embodiment, the thicker gate dielectric 115B allows for higher voltage operation of the second transistor 103B compared to the first transistor 103A.

Referring now to Figure 2A, a cross-sectional illustration of a plurality of processed fins 208 is shown, in accordance with an embodiment. The processed fins 208 may each comprise a vertical stack of nanoribbons 210 over a substrate 206. In an embodiment, first fins 208A are suitable for high speed applications (e.g., logic devices), and second fins 208B are suitable for high voltage applications. In an embodiment, the first fins 208A may be substantially similar to the first fins 108A in Figure 1A.
That is, the first fins 208A may comprise first nanoribbons 210A and first dielectrics 215A surrounding the first nanoribbons 210A.

In an embodiment, the second fins 208B in Figure 2A have a different structure than the second fins 108B in Figure 1A. The different structure is attributable, at least in part, to the method used to form the dielectrics 215B. For example, the second fins 108B include dielectrics 115B that are deposited using ALD, whereas the second fins 208B include dielectrics 215B that are formed with an oxidation process. Particularly, the second fins 208B may have nanoribbons 210B that are originally larger than the nanoribbons 210A of the first fins 208A, and which are partially converted into the dielectric 215B by an oxidation process. The oxidation process provides the dielectric 215B with a thickness TDB that is greater than the thickness TDA of the dielectric 215A.

The conversion of portions of the second nanoribbons 210B results in the spacing SB being larger than the spacing SA in the first nanoribbons 210A. In some embodiments, the second nanoribbons 210B may have a thickness TSB. The thickness TSB may be similar to the thickness TSA of the first nanoribbons 210A. In other embodiments, the thickness TSB may be different from the thickness TSA of the first nanoribbons 210A. In an embodiment, the oxidation process may also shrink the width WB of the second nanoribbons 210B. For example, the width WB of the second nanoribbons 210B may be smaller than a width WA of the first nanoribbons 210A. However, in other embodiments, the second fins 208B are originally formed with a larger width, and the oxidation process may result in the second nanoribbons 210B having a width WB that is substantially similar to the width WA of the first nanoribbons 210A.

Embodiments also include second fins 208B that have second nanoribbons 210B that are not aligned with the first nanoribbons 210A of the first fins 208A.
For example, the bottom surface 211 (i.e., the surface facing the substrate 206) of the bottommost second nanoribbon 210B is not aligned with a bottom surface 211 (i.e., the surface facing the substrate 206) of the bottommost first nanoribbon 210A.

Referring now to Figure 2B, a cross-sectional illustration of the semiconductor device 200 in Figure 2A along line B-B' is shown, in accordance with an embodiment. Figure 2B includes more detail than Figure 2A. Particularly, Figure 2B provides an illustration of a transistor device 203B that is formed along a second fin 208B. The transistor device along a first fin 208A would have a cross-section substantially similar to the transistor device 103A in Figure 1B, and is therefore not repeated here.

In Figure 2B, the second nanoribbon transistor 203B comprises source/drain regions 220 on opposite ends of a gate structure. The gate structure may comprise a gate electrode 230 and spacers 222 between the source/drain regions 220 and the gate electrode 230.

In an embodiment, the nanoribbons 210B may have a non-uniform thickness. For example, the nanoribbons 210B may have a first thickness T1 in the portions passing through the spacers 222 and a second thickness T2 in the channel region (i.e., the portion surrounded by the gate electrode 230). The first thickness T1 is larger than the second thickness T2, and the first thickness T1 is the original thickness of the nanoribbon prior to the oxidation process. As such, the second thickness T2 plus twice the dielectric thickness TD (i.e., above and below the nanoribbon 210B) may be substantially equal to the first thickness T1. In an embodiment, since the dielectric 215B is disposed with an oxidation process, the spacers 222 may also be free from the dielectric 215B.
As such, the spacers 222 may be in direct contact with the gate electrode 230 in some embodiments.

Referring now to Figure 3A, an isolated cross-sectional illustration of nanoribbons 310 is shown, in accordance with an embodiment. In an embodiment, the nanoribbons 310 may comprise a dielectric 315 that surrounds the nanoribbon 310. Embodiments disclosed herein include various configurations and material choices for the dielectric 315 that provide voltage threshold (VT) and maximum voltage (VMAX) modulation in order to improve the performance of the high-voltage nanowire transistors. Particularly, various dielectric 315 configurations disclosed herein allow for the VT and VMAX to be tuned while using a single workfunction metal for different gate electrodes. Additionally, the various dielectric 315 configurations may allow for higher dielectric constants (k), which can lessen the need for thicker dielectrics 315. A zoomed in illustration of a portion 307 is shown in Figures 3B-3D in accordance with various embodiments.

Referring now to Figure 3B, a cross-sectional illustration of portion 307 is shown, in accordance with an embodiment. In an embodiment, the portion 307 comprises a first dielectric 3151 that is in direct contact with the nanoribbon 310. In an embodiment, the first dielectric 3151 comprises SiO2. That is, the dielectric 315 may be a single material layer. In an embodiment, the first dielectric 3151 may be formed with an ALD process or an oxidation process.

In some embodiments, the first dielectric 3151 may also be subject to an annealing process. Controlling the time and temperature of the anneal allows for VT variation of the device. For example, an anneal may move the VT of the N-type device and the P-type device in the same or opposite directions. In an embodiment, the anneal may be implemented in an NH3 environment. Accordingly, an excess of nitrogen is detectable in the resulting first dielectric 3151.
For example, analysis techniques such as XSEM, TEM, or SIMS may be used to detect the presence of nitrogen in the first dielectric 3151 in order to verify that such an annealing process was used to modify the first dielectric 3151.

Referring now to Figure 3C, a cross-sectional illustration of portion 307 is shown, in accordance with an embodiment. In an embodiment, the portion 307 comprises a first dielectric 3151 that is in direct contact with the nanoribbon 310 and a second dielectric 3152 that is over the first dielectric 3151. In an embodiment, the first dielectric 3151 may be SiO2, and the second dielectric 3152 may be a dipole material. In an embodiment, the first dielectric 3151 can be formed with an ALD process or an oxidation process. The use of a dipole material provides a layer with a significantly higher dielectric constant (k) than the first dielectric 3151. For example, SiO2 may have a dielectric constant (k) of 3.9, and the second dielectric 3152 may have a dielectric constant (k) of 10 or higher. Dipole materials for the second dielectric 3152 may include, but are not limited to, La2O3, ZrO2, Y2O3, and TiO2. In an embodiment, one or both of the first dielectric 3151 and the second dielectric 3152 may be subject to an annealing process in order to modulate the VT.

Referring now to Figure 3D, a cross-sectional illustration of portion 307 is shown, in accordance with an embodiment. In an embodiment, the portion 307 comprises a third dielectric 3153 that is in direct contact with the nanoribbon 310 and a second dielectric 3152 that is over the third dielectric 3153. In an embodiment, the third dielectric 3153 may be a dielectric other than SiO2 that is known to have good interfacial properties over silicon and which has a higher dielectric constant (k) than SiO2. For example, the third dielectric 3153 may comprise HfO2. In an embodiment, the third dielectric 3153 may be deposited with an ALD process or the like.
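The benefit of the higher dielectric constant (k) described above can be illustrated with an effective oxide thickness (EOT) calculation: dielectric layers stacked in series behave like capacitors in series, so a high-k layer contributes far less EOT than its physical thickness. The sketch below is illustrative only; the layer thicknesses and the k values other than 3.9 for SiO2 are hypothetical and are not taken from the figures.

```python
# Effective oxide thickness (EOT) of a stacked gate dielectric.
# Layers in series add like series capacitors, so each layer
# contributes its physical thickness scaled by k_SiO2 / k_layer.
K_SIO2 = 3.9

def eot_nm(layers):
    """layers: list of (thickness_nm, dielectric_constant) tuples."""
    return sum(t * K_SIO2 / k for t, k in layers)

# Hypothetical stacks (thicknesses and high-k values are illustrative):
sio2_only = [(3.0, 3.9)]                  # Figure 3B: single SiO2 layer
sio2_dipole = [(1.0, 3.9), (3.0, 25.0)]   # Figure 3C: SiO2 + dipole layer
hfo2_dipole = [(1.0, 20.0), (3.0, 25.0)]  # Figure 3D: HfO2 + dipole layer

for name, stack in [("SiO2 only", sio2_only),
                    ("SiO2 + dipole", sio2_dipole),
                    ("HfO2 + dipole", hfo2_dipole)]:
    physical = sum(t for t, _ in stack)
    print(f"{name}: physical {physical:.1f} nm, EOT {eot_nm(stack):.2f} nm")
```

For the same physical thickness, the stacks with a high-k layer yield a much smaller EOT, which is why the high-k configurations can lessen the need for physically thicker dielectrics 315.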
In an embodiment, the second dielectric 3152 may be a dipole material, similar to those described above with respect to Figure 3C. In an embodiment, one or both of the third dielectric 3153 and the second dielectric 3152 may be subject to an annealing process in order to modulate the VT.

In Figures 3C and 3D, the first dielectric 3151 and the third dielectric 3153 provide a buffer between the nanoribbon 310 and the second dielectric 3152. That is, materials with known good reliability at the interface with the nanoribbon 310 are provided. The high-k dipole materials of the second dielectric 3152 can then be deposited without having concern for reliability issues that may arise when the second dielectric 3152 is formed directly over the nanoribbon 310. However, it is to be appreciated that the dielectrics 315 may be stacked in any order while still providing substantially similar functionality.

Referring now to Figures 4A-4P, a series of cross-sectional illustrations depicting a process for forming a pair of nanoribbon transistors with different gate dielectric thicknesses is shown, in accordance with an embodiment. The illustrated process uses an ALD process to deposit the gate dielectric. Accordingly, the spacing between nanoribbons of the transistor with the thick gate dielectric needs to be increased.

Referring now to Figure 4A, a cross-sectional illustration of a device 400 is shown, in accordance with an embodiment. In an embodiment, the device 400 may comprise a substrate 406 and a stack of alternating layers. The substrate 406 may be any substrate such as those described above. The alternating layers may comprise semiconductor layers 438 and sacrificial layers 437. The semiconductor layers 438 are the material that will form the nanoribbons.
The semiconductor layers 438 and sacrificial layers 437 may each be a material such as, but not limited to, silicon, germanium, SiGe, GaAs, InSb, GaP, GaSb, InAlAs, InGaAs, GaSbP, GaAsSb, and InP. In a specific embodiment, the semiconductor layers 438 are silicon and the sacrificial layers 437 are SiGe. In another specific embodiment, the semiconductor layers 438 are germanium, and the sacrificial layers 437 are SiGe. In the illustrated embodiment, a first layer 441 is a sacrificial layer 437, a second layer 442 is a semiconductor layer 438, a third layer 443 is a sacrificial layer 437, and a fourth layer 444 is a semiconductor layer 438.

Referring now to Figure 4B, a cross-sectional illustration of the device 400 after a first patterning operation is shown, in accordance with an embodiment. In an embodiment, the patterning process may include disposing a resist 470 and patterning the resist 470. The remaining portion of the resist 470 may cover the stack in a first region 404A and expose a second region 404B. The resist 470 is used as a mask during an etching process that etches away the portion of the semiconductor layer 438 in the fourth layer 444 of the second region 404B. As shown, a portion of the sacrificial layer 437 in the third layer 443 is now the topmost layer in the second region 404B.

Referring now to Figure 4C, a cross-sectional illustration of the device 400 after an additional sacrificial layer 437 is disposed over the top surfaces of the stack after the resist 470 is removed is shown, in accordance with an embodiment. Due to the non-uniform height between the first region 404A and the second region 404B, the deposited sacrificial layer 437 is split between two of the layers.
Particularly, the deposited sacrificial layer 437 in the first region 404A is in the fifth layer 445, and the deposited sacrificial layer 437 in the second region 404B is in the fourth layer 444.

Referring now to Figure 4D, a cross-sectional illustration of the device 400 after an additional semiconductor layer 438 is disposed over the top surfaces of the stack is shown, in accordance with an embodiment. Due to the non-uniform height between the first region 404A and the second region 404B, the deposited semiconductor layer 438 is split between two of the layers. Particularly, the deposited semiconductor layer 438 in the first region 404A is in the sixth layer 446, and the deposited semiconductor layer 438 in the second region 404B is in the fifth layer 445.

Referring now to Figure 4E, a cross-sectional illustration of the device 400 after an additional sacrificial layer 437 is disposed over the top surfaces of the stack is shown, in accordance with an embodiment. Due to the non-uniform height between the first region 404A and the second region 404B, the deposited sacrificial layer 437 is split between two of the layers. Particularly, the deposited sacrificial layer 437 in the first region 404A is in the seventh layer 447, and the deposited sacrificial layer 437 in the second region 404B is in the sixth layer 446.

Referring now to Figure 4F, a cross-sectional illustration of the device 400 after an additional semiconductor layer 438 is disposed over the top surfaces of the stack is shown, in accordance with an embodiment. Due to the non-uniform height between the first region 404A and the second region 404B, the deposited semiconductor layer 438 is split between two of the layers.
Particularly, the deposited semiconductor layer 438 in the first region 404A is in the eighth layer 448, and the deposited semiconductor layer 438 in the second region 404B is in the seventh layer 447.

Referring now to Figure 4G, a cross-sectional illustration of the device 400 after a second patterning operation is shown, in accordance with an embodiment. In an embodiment, the patterning process may include disposing a resist 470 and patterning the resist 470. The remaining portion of the resist 470 may cover the stack in the first region 404A and expose the second region 404B. The resist 470 is used as a mask during an etching process that etches away the portion of the semiconductor layer 438 in the seventh layer 447 of the second region 404B. As shown, a portion of the sacrificial layer 437 in the sixth layer 446 is now the topmost layer in the second region 404B.

Referring now to Figure 4H, a cross-sectional illustration of the device 400 after an additional sacrificial layer 437 is disposed over the top surfaces of the stack after the resist 470 is removed is shown, in accordance with an embodiment. Due to the non-uniform height between the first region 404A and the second region 404B, the deposited sacrificial layer 437 is split between two of the layers. Particularly, the deposited sacrificial layer 437 in the first region 404A is in the ninth layer 449, and the deposited sacrificial layer 437 in the second region 404B is in the seventh layer 447.

Referring now to Figure 4I, a cross-sectional illustration of the device 400 after a capping layer 461 is disposed over the stack of layers is shown, in accordance with an embodiment. The capping layer 461 may be polished to have a planar top surface. This results in a portion of the capping layer 461 over the first region 404A having a smaller thickness than a portion of the capping layer 461 over the second region 404B.
In an embodiment, the stack of layers in the first region 404A may be referred to as stack 435 and the stack of layers in the second region 404B may be referred to as stack 436.

Such a patterning process results in non-uniform spacing between the semiconductor layers 438. In the first region 404A, the semiconductor layers 438 in stack 435 are spaced apart from each other by a single sacrificial layer 437 (e.g., the sacrificial layer 437 in the third layer 443 separates the semiconductor layer 438 in the fourth layer 444 from the semiconductor layer 438 in the second layer 442). In the second region 404B, the semiconductor layers 438 in stack 436 are spaced apart by a pair of sacrificial layers 437 (e.g., the sacrificial layers 437 in the third layer 443 and the fourth layer 444 separate the semiconductor layer 438 in the second layer 442 from the semiconductor layer 438 in the fifth layer 445). Additionally, each of the resulting semiconductor layers 438 (in both the first region 404A and the second region 404B) has a substantially similar thickness.

Referring now to Figure 4J, a cross-sectional illustration of the device 400 after the stacks 435 and 436 are patterned to form a plurality of fins 408 is shown, in accordance with an embodiment. In the illustrated embodiment, first fins 408A are formed in the first region 404A and second fins 408B are formed in the second region 404B. The patterned semiconductor layers 438 are now referred to as nanoribbons 410 (i.e., first nanoribbons 410A and second nanoribbons 410B). Accordingly, fins 408 with a non-uniform nanoribbon spacing are provided on the same substrate 406, using a single process flow.

The illustrated embodiment depicts the formation of two semiconductor layers 438 in the second region 404B. However, it is to be appreciated that the previous processing operations may be repeated any number of times to provide a desired number of semiconductor layers 438 in the second region 404B.
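The deposition and patterning sequence of Figures 4A-4I can be sketched with a toy bookkeeping model, in which each deposition adds one layer to both regions and each patterning step removes only the topmost layer of the second region. The model below is purely illustrative; it tracks layer types as list entries rather than physical thicknesses.

```python
# Toy bookkeeping model of the Figure 4A-4I flow. "sac" = sacrificial
# layer 437, "semi" = semiconductor layer 438. Each deposition adds a
# layer to both regions; each patterning step etches the topmost layer
# of the second region only.
region_a, region_b = [], []

def deposit(kind):
    region_a.append(kind)
    region_b.append(kind)

def etch_region_b_top():
    region_b.pop()

# Figure 4A: starting stack of four alternating layers.
for kind in ("sac", "semi", "sac", "semi"):
    deposit(kind)

etch_region_b_top()   # Figure 4B: remove top semiconductor in region B
deposit("sac")        # Figure 4C
deposit("semi")       # Figure 4D
deposit("sac")        # Figure 4E
deposit("semi")       # Figure 4F
etch_region_b_top()   # Figure 4G: remove top semiconductor in region B
deposit("sac")        # Figure 4H

def spacings(stack):
    """Number of consecutive sacrificial layers between semiconductor layers."""
    runs, run = [], 0
    for kind in stack:
        if kind == "sac":
            run += 1
        else:
            runs.append(run)
            run = 0
    return runs[1:]  # ignore the run below the bottommost semiconductor

print(spacings(region_a))  # region A: every gap is one sacrificial layer
print(spacings(region_b))  # region B: every gap is two sacrificial layers
```

Running the model reproduces the layer counts of Figure 4H (region A reaches the ninth layer 449, region B the seventh layer 447) and the spacing difference described above: single-layer gaps in stack 435 and double-layer gaps in stack 436.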
In an embodiment, the number of semiconductor layers 438 in the first region 404A may be an integer multiple of the number of semiconductor layers 438 in the second region 404B.

In the illustrated embodiment, the etching process etches through the alternating layers down into the substrate 406. In an embodiment, an isolation layer (not shown) may fill the channels between the fins 408. In the case where the fins 408 extend into the substrate 406, the isolation layer may extend up to approximately the bottommost sacrificial layer 437. In the illustrated embodiment, the fins 408 are depicted as having substantially vertical sidewalls along their entire height. In some embodiments, the sidewalls of the fins 408 may include non-vertical portions. For example, the bottom of the fins proximate to the substrate 406 may have a footing or other similar structural feature typical of high aspect ratio features formed with dry etching processes. Additionally, the profile of all fins 408 may not be uniform. For example, a nested fin 408 may have a different profile than an isolated fin 408 or a fin 408 that is the outermost fin 408 of a grouping of fins 408.

Referring now to Figure 4K, a cross-sectional illustration of a device 400 along the length of the fins 408A and 408B is shown, in accordance with an embodiment. The illustrated embodiment depicts a break 404 along the length of the substrate 406. The break 404 may be at some point along a single fin 408. That is, the first fin 408A and the second fin 408B may be part of a single fin that has both types of nanoribbon spacing. Alternatively, the second fin 408B may be located on a different fin than the first fin 408A.
That is, in some embodiments, the break 404 does not represent a gap within a single fin 408.

Referring now to Figure 4L, a cross-sectional illustration after sacrificial gate structures 471 are disposed over the fins 408 and the fins 408 are patterned to form source/drain openings 472 is shown, in accordance with an embodiment. In an embodiment, a sacrificial gate 471 is disposed over each fin 408A and 408B. Following formation of the sacrificial gate 471, portions of the fins 408A and 408B may be removed to form source/drain openings 472. A spacer 422 may also be disposed on opposite ends of the sacrificial gate 471. The spacer 422 may cover sidewall portions of the sacrificial layers 437, and the nanoribbons 410A and 410B may pass through the spacer 422. It is to be appreciated that the sacrificial gate 471 and the spacers 422 will wrap down over sidewalls of the fins 408 (i.e., into and out of the plane of Figure 4L).

Referring now to Figure 4M, a cross-sectional illustration after source/drain regions 420 are formed is shown, in accordance with an embodiment. In an embodiment, the source/drain regions 420 may be formed with an epitaxial growth process. The source/drain regions 420 may be formed with materials and processes such as those described in greater detail above.

Referring now to Figure 4N, a cross-sectional illustration after the sacrificial gates 471 are removed is shown, in accordance with an embodiment. The sacrificial gates 471 may be removed with any suitable etching process. After removal of the sacrificial gates 471, the remaining portions of the sacrificial layers 437 are removed. In an embodiment, an etching process selective to the sacrificial layer 437 with respect to the nanoribbons 410A and 410B is used to remove the sacrificial layers 437. In an embodiment, the selectivity of the etchant is greater than 100:1.
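The 100:1 selectivity figure can be put in perspective with a short calculation: at a selectivity S (sacrificial material to nanoribbon material), removing a given thickness of sacrificial material costs the exposed nanoribbon surfaces roughly 1/S of that thickness. The numbers below are hypothetical and are not taken from the text.

```python
# Illustrative effect of etch selectivity on nanoribbon thinning during
# the sacrificial-layer release etch. Numbers are hypothetical.
def ribbon_loss_nm(sacrificial_cleared_nm, selectivity):
    """Approximate nanoribbon material lost per exposed face while the
    etch clears sacrificial_cleared_nm of sacrificial material."""
    return sacrificial_cleared_nm / selectivity

# Example: clearing 10 nm of sacrificial SiGe at 100:1 selectivity costs
# a silicon nanoribbon only about 0.1 nm per exposed face.
print(f"nanoribbon loss per face: {ribbon_loss_nm(10.0, 100):.2f} nm")
```

This is why a selectivity well above 100:1 lets the release etch fully clear the sacrificial layers 437 while leaving the nanoribbon thickness essentially unchanged.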
In an embodiment where nanoribbons 410 are silicon and sacrificial layers 437 are silicon germanium, sacrificial layers 437 are selectively removed using a wet etchant such as, but not limited to, aqueous carboxylic acid/nitric acid/HF solution and aqueous citric acid/nitric acid/HF solution. In an embodiment where nanoribbons 410 are germanium and sacrificial layers 437 are silicon germanium, sacrificial layers 437 are selectively removed using a wet etchant such as, but not limited to, ammonium hydroxide (NH4OH), tetramethylammonium hydroxide (TMAH), ethylenediamine pyrocatechol (EDP), or potassium hydroxide (KOH) solution. In another embodiment, sacrificial layers 437 are removed by a combination of wet and dry etch processes. In an embodiment, the removal of the sacrificial gates 471 and the sacrificial layers 437 provides openings 473 between the spacers 422. The openings 473 expose the nanoribbons 410. As shown, the first nanoribbons 410A include a first spacing SA that is less than a second spacing SB of the second nanoribbons 410B. Accordingly, there is more room around the second nanoribbons 410B to grow a thicker gate dielectric.

Referring now to Figure 4O, a cross-sectional illustration after a gate dielectric layer 415 is disposed over the nanoribbons 410 is shown, in accordance with an embodiment. In the illustrated embodiment, the first gate dielectric 415A has a first thickness TDA that is less than a second thickness TDB of the second gate dielectric 415B. In an embodiment, the gate dielectrics 415 may be deposited with an ALD process. As such, the gate dielectrics 415 may also deposit along the substrate 406 and the interior sidewalls of the spacers 422.

In an embodiment, the first and second gate dielectrics 415A and 415B may be deposited with different processes and materials.
For example, the first nanoribbons 410A may be masked during the deposition of the second gate dielectric 415B, and the second nanoribbons 410B may be masked during the deposition of the first gate dielectric 415A. In other embodiments, the first gate dielectric 415A and the second gate dielectric 415B may be deposited at the same time. When the desired thickness of the first gate dielectric 415A is reached, the first nanoribbons 410A are masked and the deposition may continue to increase the thickness of the second gate dielectric 415B.

Referring now to Figure 4P, a cross-sectional illustration after a gate electrode 430 is disposed around the nanoribbons 410 is shown, in accordance with an embodiment. In an embodiment, the gate electrode 430 wraps around each of the nanoribbons 410 in order to provide GAA control of each nanoribbon 410. The gate electrode material may be deposited with any suitable deposition process (e.g., chemical vapor deposition (CVD), ALD, etc.).

In an embodiment, a single material may be used for the gate electrode 430 even between N-type and P-type transistors. Such embodiments are possible by controlling the VT of the devices using different gate dielectric configurations and treatments. For example, anneals of various gate dielectric materials, such as those described above with respect to Figures 3A-3D, may be used to modulate the VT.

Referring now to Figures 5A-5L, a series of cross-sectional illustrations depicting a process for forming a pair of nanoribbon transistors with different gate dielectric thicknesses is shown, in accordance with an embodiment. The illustrated process uses an oxidation process to form the gate dielectric. In order to provide a thick gate dielectric with an oxidation process that consumes a portion of the nanoribbon, the original thickness of the nanoribbon needs to be increased.
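How much extra starting thickness is needed can be estimated from the standard rule of thumb for thermal oxidation of silicon, in which roughly 44% of the grown oxide thickness comes from consumed silicon. The sketch below uses that textbook approximation; the thickness values themselves are illustrative and are not taken from the figures.

```python
# Rough estimate of nanoribbon thinning during gate-oxide growth by
# oxidation. For thermal SiO2 on silicon, roughly 44% of the final oxide
# thickness is silicon that was consumed (textbook approximation). All
# thickness values here are illustrative, not taken from the figures.
SI_CONSUMED_FRACTION = 0.44

def channel_thickness_nm(original_nm, oxide_nm):
    """Nanoribbon thickness left in the channel after growing oxide_nm
    of oxide on both the top and bottom faces."""
    return original_nm - 2 * SI_CONSUMED_FRACTION * oxide_nm

# A nanoribbon that must end up ~5 nm thick in the channel under a 4 nm
# oxide per face has to start correspondingly thicker:
original = 5.0 + 2 * SI_CONSUMED_FRACTION * 4.0
print(f"required starting thickness: {original:.2f} nm")
print(f"channel thickness after oxidation: {channel_thickness_nm(original, 4.0):.2f} nm")
```

The thicker the target gate oxide, the thicker the nanoribbon must start, which motivates the non-uniform nanoribbon thicknesses formed in the flow that follows.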
Accordingly, the process in Figures 5A-5L includes the formation of a stack with non-uniform nanoribbon thicknesses.

Referring now to Figure 5A, a cross-sectional illustration of a device 500 is shown, in accordance with an embodiment. In an embodiment, the device 500 may comprise a substrate 506 and a stack of alternating layers. The substrate 506 may be any substrate such as those described above. The alternating layers may comprise semiconductor layers 538 and sacrificial layers 537. The semiconductor layers 538 are the material that will form the nanoribbons. In a specific embodiment, the semiconductor layers 538 are silicon and the sacrificial layers 537 are SiGe. In another specific embodiment, the semiconductor layers 538 are germanium, and the sacrificial layers 537 are SiGe. In the illustrated embodiment, a first layer 541 is a sacrificial layer 537, a second layer 542 is a semiconductor layer 538, and a third layer 543 is a sacrificial layer 537.

Referring now to Figure 5B, a cross-sectional illustration of the device 500 after a first patterning operation is shown, in accordance with an embodiment. In an embodiment, the patterning process may include disposing a resist 570 and patterning the resist 570. The remaining portion of the resist 570 may cover the stack in a first region 504A and expose a second region 504B. The resist 570 is used as a mask during an etching process that etches away the portion of the sacrificial layer 537 in the third layer 543 of the second region 504B. As shown, a portion of the semiconductor layer 538 in the second layer 542 is now the topmost layer in the second region 504B.

Referring now to Figure 5C, a cross-sectional illustration of the device 500 after an additional semiconductor layer 538 is disposed over the top surfaces of the stack is shown, in accordance with an embodiment.
Due to the non-uniform height between the first region 504A and the second region 504B, the deposited semiconductor layer 538 is split between two of the layers. Particularly, the deposited semiconductor layer 538 in the first region 504A is in the fourth layer 544, and the deposited semiconductor layer 538 in the second region 504B is in the third layer 543.

Referring now to Figure 5D, a cross-sectional illustration of the device 500 after an additional sacrificial layer 537 is disposed over the top surfaces of the stack is shown, in accordance with an embodiment. Due to the non-uniform height between the first region 504A and the second region 504B, the deposited sacrificial layer 537 is split between two of the layers. Particularly, the deposited sacrificial layer 537 in the first region 504A is in the fifth layer 545, and the deposited sacrificial layer 537 in the second region 504B is in the fourth layer 544.

Referring now to Figure 5E, a cross-sectional illustration of the device 500 after a second patterning operation is shown, in accordance with an embodiment. In an embodiment, the patterning process may include disposing a resist 570 and patterning the resist 570. The remaining portion of the resist 570 may cover the stack in the first region 504A and expose the second region 504B. The resist 570 is used as a mask during an etching process that etches away the portion of the sacrificial layer 537 in the fourth layer 544 of the second region 504B. As shown, a portion of the semiconductor layer 538 in the third layer 543 is now the topmost layer in the second region 504B.

Referring now to Figure 5F, a cross-sectional illustration of the device 500 after an additional semiconductor layer 538 is disposed over the top surfaces of the stack is shown, in accordance with an embodiment. Due to the non-uniform height between the first region 504A and the second region 504B, the deposited semiconductor layer 538 is split between two of the layers.
Particularly, the deposited semiconductor layer 538 in the first region 504A is in the sixth layer 546, and the deposited semiconductor layer 538 in the second region 504B is in the fourth layer 544.

Referring now to Figure 5G, a cross-sectional illustration of the device 500 after an additional sacrificial layer 537 is disposed over the top surfaces of the stack is shown, in accordance with an embodiment. Due to the non-uniform height between the first region 504A and the second region 504B, the deposited sacrificial layer 537 is split between two of the layers. Particularly, the deposited sacrificial layer 537 in the first region 504A is in the seventh layer 547, and the deposited sacrificial layer 537 in the second region 504B is in the fifth layer 545.

In an embodiment, the processing operations in Figures 5A-5G may be repeated any number of times to provide a desired number of thick semiconductor layers in the second region 504B. For example, Figure 5H is a cross-sectional illustration after a pair of thick semiconductor layers 538 are formed in the second region 504B. In Figure 5H, the height of the first region 504A and the height of the second region 504B are shown as being planar with each other. For example, a planarization process may have been implemented in order to reduce the height of the first region 504A back to the sacrificial layer 537 in the ninth layer 549.

In an embodiment, each thick semiconductor layer 538 in the second region 504B may have a thickness that is greater than a thickness of the semiconductor layers 538 in the first region 504A. In a particular embodiment, the semiconductor layers 538 in the second region 504B are three times as thick as the semiconductor layers 538 in the first region 504A. For example, each of the semiconductor layers 538 in the second region 504B extends into three layers (e.g., layers 542-544 or layers 546-548).
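The sequence of Figures 5A-5G can be sketched with the same kind of toy bookkeeping model used for process flows of this type: each deposition adds one layer to both regions, and each patterning step removes only the topmost layer of the second region. Because the etched layers here are sacrificial, consecutive semiconductor depositions in the second region merge into one thick nanoribbon. The model below is illustrative only.

```python
# Toy bookkeeping model of the Figure 5A-5G flow. "sac" = sacrificial
# layer 537, "semi" = semiconductor layer 538. Etching the top sacrificial
# layer in region B lets consecutive semiconductor depositions merge into
# a single thick nanoribbon.
region_a, region_b = [], []

def deposit(kind):
    region_a.append(kind)
    region_b.append(kind)

def etch_region_b_top():
    region_b.pop()

for kind in ("sac", "semi", "sac"):  # Figure 5A: starting stack
    deposit(kind)
etch_region_b_top()   # Figure 5B: remove top sacrificial in region B
deposit("semi")       # Figure 5C
deposit("sac")        # Figure 5D
etch_region_b_top()   # Figure 5E: remove top sacrificial in region B
deposit("semi")       # Figure 5F
deposit("sac")        # Figure 5G

def ribbon_thicknesses(stack):
    """Lengths of consecutive runs of semiconductor layers."""
    runs, run = [], 0
    for kind in stack:
        if kind == "semi":
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return runs

print(ribbon_thicknesses(region_a))  # region A: each ribbon is one layer thick
print(ribbon_thicknesses(region_b))  # region B: the ribbon spans three layers
```

Running the model reproduces the three-layer-thick semiconductor in the second region described above, while each semiconductor layer in the first region remains a single layer thick.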
In an embodiment, the thickness of the semiconductor layers 538 in the second region 504B is an integer multiple of the thickness of the semiconductor layers 538 in the first region 504A.

Subsequent to the formation of the layers 541-549 over the substrate 506, the layers may be patterned into a plurality of fins having a profile similar to the profile of fins 408 illustrated in Figure 4J.

Referring now to Figure 5I, a cross-sectional illustration of a device 500 along the length of a first fin 508A and a second fin 508B is shown, in accordance with an embodiment. The first fin 508A will have a stack of first nanoribbons 510A and sacrificial layers 537 similar to the stack in the first region 504A of Figure 5H, and the second fin 508B will have a stack of second nanoribbons 510B and sacrificial layers 537 similar to the stack in the second region 504B of Figure 5H. That is, the second fin 508B will have second nanoribbons 510B that have a thickness greater than a thickness of the first nanoribbons 510A.

The illustrated embodiment depicts a break 504 along the length of the substrate 506. The break 504 may be at some point along a single fin 508. That is, the first fin 508A and the second fin 508B may be part of a single fin that has both types of nanoribbon thicknesses. Alternatively, the second fin 508B may be located on a different fin than the first fin 508A. That is, in some embodiments, the break 504 does not represent a gap within a single fin 508.

Referring now to Figure 5J, a cross-sectional illustration of the device 500 after various processing operations have been implemented to provide openings 573 over channel regions of the first nanoribbons 510A and the second nanoribbons 510B is shown, in accordance with an embodiment. In an embodiment, the processing operations implemented between Figure 5I and Figure 5J may be substantially similar to the processing operations shown and described with respect to Figures 4L-4N.
To briefly summarize the processing operations, a sacrificial gate (not shown) is disposed and spacers 522 are formed. Openings for source/drain regions 520 are formed, and the source/drain regions 520 are grown (e.g., with an epitaxial process). The sacrificial gate is then removed and the sacrificial layers 537 are selectively etched to expose the first nanoribbons 510A and the second nanoribbons 510B.

In an embodiment, the first nanoribbons 510A have a first thickness TA and the second nanoribbons 510B have a second thickness TB that is greater than the first thickness TA. In some embodiments, a first spacing between the first nanoribbons 510A is substantially similar to a second spacing between the second nanoribbons 510B.

Referring now to Figure 5K, a cross-sectional illustration of the device 500 after a first gate dielectric 515A and a second gate dielectric 515B are disposed is shown, in accordance with an embodiment. In a particular embodiment, the second gate dielectric 515B may be formed with an oxidation process. That is, portions of the second nanoribbons 510B may be consumed to provide the second gate dielectric 515B. Since the second gate dielectric 515B is disposed with an oxidation process, the interior sidewalls of the spacers 522 are not covered by the second gate dielectric 515B. While not illustrated, in some embodiments, portions of the substrate 506 may also be oxidized. In an embodiment, the second gate dielectric 515B may also be annealed after formation.

In an embodiment, the spacers 522 protect portions of the second nanoribbons 510B from being oxidized. Accordingly, the portion of the second nanoribbons 510B within the spacer 522 may have the original thickness, and the portion of the nanoribbons 510B in the channel region will have a smaller thickness.

In an embodiment, the first and second gate dielectrics 515A and 515B may be deposited with different processes and materials.
For example, the first nanoribbons 510A may be masked during the oxidation process used to form the second gate dielectric 515B, and the second nanoribbons 510B may be masked during the deposition of the first gate dielectric 515A. In other embodiments, the first gate dielectric 515A and the second gate dielectric 515B may be formed with a single oxidation process. When the desired thickness of the first gate dielectric 515A is reached, the first nanoribbons 510A are masked and the oxidation may continue to increase the thickness of the second gate dielectric 515B.

Referring now to Figure 5L, a cross-sectional illustration after a gate electrode 530 is disposed around the nanoribbons 510 is shown, in accordance with an embodiment. In an embodiment, the gate electrode 530 wraps around each of the nanoribbons 510 in order to provide GAA control of each nanoribbon 510. The gate electrode material may be deposited with any suitable deposition process (e.g., chemical vapor deposition (CVD), ALD, etc.).

In an embodiment, a single material may be used for the gate electrode 530 even between N-type and P-type transistors. Such embodiments are possible by controlling the VT of the devices using different gate dielectric configurations and treatments. For example, anneals of various gate dielectric materials, such as those described above with respect to Figures 3A-3D, may be used to modulate the VT.

Figure 6 illustrates a computing device 600 in accordance with one implementation of an embodiment of the disclosure. The computing device 600 houses a board 602. The board 602 may include a number of components, including but not limited to a processor 604 and at least one communication chip 606. The processor 604 is physically and electrically coupled to the board 602. In some implementations, the at least one communication chip 606 is also physically and electrically coupled to the board 602.
In further implementations, the communication chip 606 is part of the processor 604.

Depending on its applications, computing device 600 may include other components that may or may not be physically and electrically coupled to the board 602. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 606 enables wireless communications for the transfer of data to and from the computing device 600. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 606 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 600 may include a plurality of communication chips 606.
For instance, a first communication chip 606 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 606 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 604 of the computing device 600 includes an integrated circuit die packaged within the processor 604. In an embodiment, the integrated circuit die of the processor may comprise nanowire transistor devices with non-uniform gate dielectric thicknesses, as described herein. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 606 also includes an integrated circuit die packaged within the communication chip 606. In an embodiment, the integrated circuit die of the communication chip 606 may comprise nanowire transistor devices with non-uniform gate dielectric thicknesses, as described herein.

In further implementations, another component housed within the computing device 600 may comprise nanowire transistor devices with non-uniform gate dielectric thicknesses, as described herein.

In various implementations, the computing device 600 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 600 may be any other electronic device that processes data.

Figure 7 illustrates an interposer 700 that includes one or more embodiments of the disclosure. The interposer 700 is an intervening substrate used to bridge a first substrate 702 to a second substrate 704.
The first substrate 702 may be, for instance, an integrated circuit die. The second substrate 704 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. In an embodiment, one or both of the first substrate 702 and the second substrate 704 may comprise nanowire transistor devices with non-uniform gate dielectric thicknesses, in accordance with embodiments described herein. Generally, the purpose of an interposer 700 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, an interposer 700 may couple an integrated circuit die to a ball grid array (BGA) 706 that can subsequently be coupled to the second substrate 704. In some embodiments, the first and second substrates 702/704 are attached to opposing sides of the interposer 700. In other embodiments, the first and second substrates 702/704 are attached to the same side of the interposer 700. And in further embodiments, three or more substrates are interconnected by way of the interposer 700.

The interposer 700 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the interposer 700 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.

The interposer 700 may include metal interconnects 708 and vias 710, including but not limited to through-silicon vias (TSVs) 712. The interposer 700 may further include embedded devices 714, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices.
More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the interposer 700. In accordance with embodiments of the disclosure, apparatuses or processes disclosed herein may be used in the fabrication of interposer 700.

Thus, embodiments of the present disclosure may comprise semiconductor devices that comprise nanowire transistor devices with non-uniform gate dielectric thicknesses, and the resulting structures.

The above description of illustrated implementations of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.

These modifications may be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific implementations disclosed in the specification and the claims.
Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Example 1: a semiconductor device, comprising: a substrate; a plurality of first semiconductor layers in a vertical stack over the substrate, wherein the first semiconductor layers have a first spacing; a first dielectric surrounding each of the first semiconductor layers, wherein the first dielectric has a first thickness; a plurality of second semiconductor layers in a vertical stack over the substrate, wherein the second semiconductor layers have a second spacing that is greater than the first spacing; and a second dielectric surrounding each of the second semiconductor layers, wherein the second dielectric has a second thickness that is greater than the first thickness.

Example 2: the semiconductor device of Example 1, wherein the first semiconductor layers and the second semiconductor layers are nanowires or nanoribbons.

Example 3: the semiconductor device of Example 1 or Example 2, wherein a surface facing the substrate of a bottommost first semiconductor layer is aligned with a surface facing the substrate of a bottommost second semiconductor layer.

Example 4: the semiconductor device of Example 1 or Example 2, wherein a surface facing the substrate of a bottommost first semiconductor layer is misaligned with a surface facing the substrate of a bottommost second semiconductor layer.

Example 5: the semiconductor device of Examples 1-4, wherein the second dielectric comprises: a first dielectric layer over the second semiconductor layers; and a second dielectric layer over the first dielectric layer.

Example 6: the semiconductor device of Example 5, wherein the first dielectric layer is an oxide, and wherein the second dielectric layer is a dipole material.

Example 7: the semiconductor device of Example 6, wherein the first dielectric layer is SiO2 or HfO2, and wherein the second dielectric layer comprises one or more of La2O3, ZrO2, and TiO2.

Example 8: a semiconductor device, comprising: a substrate; a first transistor over the substrate, wherein the first transistor comprises: a plurality of first nanoribbons, the first nanoribbons arranged in a vertical stack with a first spacing between each first nanoribbon; and a first gate structure over the plurality of first nanoribbons, the first gate structure defining a first channel region of the plurality of first nanoribbons, wherein the first gate structure comprises: a first gate dielectric wrapping around the plurality of first nanoribbons, the first gate dielectric having a first thickness; and a first gate electrode wrapping around the first gate dielectric; and a second transistor over the substrate, wherein the second transistor comprises: a plurality of second nanoribbons, the second nanoribbons arranged in a vertical stack with a second spacing between each second nanoribbon, wherein the second spacing is greater than the first spacing; and a second gate structure over the plurality of second nanoribbons, the second gate structure defining a second channel region of the plurality of second nanoribbons, wherein the second gate structure comprises: a second gate dielectric wrapping around the plurality of second nanoribbons, the second gate dielectric having a second thickness that is greater than the first thickness; and a second gate electrode wrapping around the second gate dielectric.

Example 9: the semiconductor device of Example 8, wherein the second spacing is an integer multiple of the first spacing.

Example 10: the semiconductor device of Example 9, wherein the second spacing is twice the first spacing.

Example 11: the semiconductor device of Examples 8-10, wherein a bottommost second nanoribbon is aligned with a bottommost first nanoribbon.

Example 12: the semiconductor device of Examples 8-11, wherein a thickness of each first nanoribbon is substantially similar to a thickness of each second nanoribbon.

Example 13: the semiconductor device of Examples 8-11, wherein there are more first nanoribbons than second nanoribbons.

Example 14: the semiconductor device of Example 13, wherein the number of first nanoribbons is an integer multiple of the number of second nanoribbons.

Example 15: the semiconductor device of Examples 8-14, wherein the second thickness is at least twice the first thickness.

Example 16: the semiconductor device of Examples 8-15, wherein the first spacing is approximately 7 nm or less, and wherein the second spacing is approximately 7 nm or greater.

Example 17: a method of forming a semiconductor device, comprising: disposing a multilayer stack of alternating semiconductor layers and sacrificial layers over a substrate, wherein the multilayer stack comprises a first region and a second region, and wherein the multilayer stack in the first region is different than the multilayer stack in the second region; patterning the multilayer stack into a plurality of fins, wherein a first fin is in the first region and a second fin is in the second region; disposing a sacrificial gate structure over each of the first fin and the second fin, wherein the sacrificial gate structures define a first channel region of the first fin and a second channel region of the second fin; disposing pairs of source/drain regions on opposite ends of each sacrificial gate structure; removing the sacrificial layers from the channel regions of the first fin and the second fin; disposing a first gate dielectric over the semiconductor layers in the first channel region, wherein the first gate dielectric has a first thickness; and disposing a second gate dielectric over the semiconductor layers in the second channel region, wherein the second gate dielectric has a second thickness that is greater than the first thickness.

Example 18: the method of Example 17, wherein a thickness of the semiconductor layers in the first region is substantially similar to a thickness of the semiconductor layers in the second region, and wherein a spacing between semiconductor layers in the second region is greater than a spacing between semiconductor layers in the first region.

Example 19: the method of Example 18, wherein the second gate dielectric is disposed with an atomic layer deposition (ALD) process.

Example 20: the method of Example 17, wherein a thickness of the semiconductor layers in the first region is smaller than a thickness of the semiconductor layers in the second region, and wherein a spacing between semiconductor layers in the first region is substantially similar to a spacing between semiconductor layers in the second region.

Example 21: the method of Example 20, wherein the second gate dielectric is disposed by oxidizing the semiconductor layers in the second channel region.

Example 22: the method of Examples 17-21, wherein a number of semiconductor layers in the first region is an integer multiple of the number of semiconductor layers in the second region.

Example 23: an electronic device, comprising: a board; an electronic package coupled to the board; and a die electrically coupled to the electronic package, wherein the die comprises: a substrate; a plurality of first semiconductor layers in a vertical stack over the substrate, wherein the first semiconductor layers have a first spacing; a first dielectric surrounding each of the first semiconductor layers, wherein the first dielectric has a first thickness; a plurality of second semiconductor layers in a vertical stack over the substrate, wherein the second semiconductor layers have a second spacing that is greater than the first spacing; and a second dielectric surrounding each of the second semiconductor layers, wherein the second dielectric has a second thickness that is greater than the first thickness.

Example 24: the electronic device of Example 23, wherein the first semiconductor layers and the second semiconductor layers are nanowires or nanoribbons.

Example 25: the electronic device of Example 23 or Example 24, wherein the number of first semiconductor layers is an integer multiple of the number of second semiconductor layers.
An integrated circuit (IC) structure may include one or more trench-based semiconductor devices, e.g., field-effect transistors (trench FETs), having a front-side drain contact. Each semiconductor device may include an epitaxy layer, a doped source region in the epitaxy layer, a front-side source contact coupled to the source region, a poly gate formed in a trench in the epitaxy layer, and a front-side drain contact extending through the poly gate trench and isolated from the poly gate. The device may define a drift region from the poly gate/lower source region surface intersection to the front-side drain contact. The drift region may be located within the epitaxy layer, without extending into an underlying bulk substrate or transition layer. The depth of the front-side drain contact may be selected to influence the breakdown voltage of the respective device. In addition, the front-side drain contacts may allow the IC structure to be flip-chip mounted or packaged.
CLAIMS

1. An integrated circuit (IC) device, comprising: a plurality of semiconductor devices, each semiconductor device comprising: an epitaxy layer; a doped source region formed in the epitaxy layer; a front-side source contact coupled to the doped source region; a trench formed in the epitaxy layer; a front-side drain contact extending into the trench formed in the epitaxy layer; and a poly gate formed in the epitaxy layer; wherein a drift region is defined from an intersection of the poly gate and doped source region to the front-side drain contact.

2. The device of any of Claims 1 or 3-12, wherein each semiconductor device comprises a trench field-effect transistor (FET).

3. The device of any of Claims 1-2 or 4-12, further comprising a front-side gate contact.

4. The device of any of Claims 1-3 or 5-12, wherein a depth of the drain contact defines a breakdown voltage of the semiconductor device.

5. The device of any of Claims 1-4 or 7-12, wherein the drain contact is located above a bulk substrate region of the device.

6. The device of Claim 5, wherein the drain contact does not extend into the bulk substrate region.

7. The device of any of Claims 1-6 or 9-12, wherein the drain contact is located above a transition region between the epitaxy layer and a bulk substrate region.

8. The device of Claim 7, wherein the drain contact does not extend into the transition region between the epitaxy layer and the bulk substrate region.

9. The device of any of Claims 1-8 or 10-12, wherein the epitaxy layer is coupled directly to a bulk substrate region, with no transition region between the epitaxy layer and bulk substrate region.

10. The device of any of Claims 1-9 or 11-12, wherein the semiconductor device defines a current path from the front-side source contact to the front-side drain contact without passing through a transition layer or a bulk substrate.

11. The device of any of Claims 1-10 or 12, wherein the semiconductor device defines a current path from the source region to the drain contact, wherein the current path is fully contained in the epitaxy layer.

12. The device of any of Claims 1-11, wherein the drain contact is isolated from the poly gate by an oxide layer.

13. An integrated circuit (IC) device, comprising: at least one field-effect transistor (FET), each FET comprising: a substrate; an epitaxy region over the substrate; a source formed in the epitaxy region; a poly gate formed in the epitaxy region; a drain contact formed in the epitaxy region; and a current path from the source to the drain contact, wherein the current path is located in the epitaxy region and does not pass through the substrate.

14. The device of any of Claims 13 or 15-17, comprising a transition region between the epitaxy region and the substrate, wherein the current path does not pass through the transition region.

15. The device of any of Claims 13-14 or 16-17, further comprising a front-side source contact coupled to the source; and wherein the drain contact is a front-side drain contact.

16. The device of any of Claims 13-15 or 17, wherein the drain contact is isolated from the poly gate by an oxide layer.

17. The device of any of Claims 13-17, wherein: the source extends into the epitaxy region by a first distance; the poly gate extends into the epitaxy region by a second distance greater than the first distance; and the drain contact extends into the epitaxy region by a third distance greater than the second distance.

18. An electronic device, comprising: an integrated circuit (IC) device including a plurality of trench-type field-effect transistors (FETs), each trench FET comprising: a substrate; an epitaxy region over the substrate; a source formed in the epitaxy region; a poly gate formed in the epitaxy region; a drain contact formed in the epitaxy region; and a current path from the source to the drain contact, wherein the current path is located in the epitaxy region and does not pass through the substrate.
VERTICAL FIELD EFFECT TRANSISTOR WITH FRONT-SIDE SOURCE AND DRAIN CONTACTS

RELATED PATENT APPLICATION

This application claims priority to commonly owned U.S. Provisional Patent Application No. 62/426,196, filed November 23, 2016, which is hereby incorporated by reference herein for all purposes.

TECHNICAL FIELD

The present disclosure relates to semiconductor devices, e.g., field-effect transistors (FETs) and, more particularly, to trench FETs or other trench-type semiconductor devices having front-side source and drain contacts.

BACKGROUND

Processes for forming transistors include creating split-trench transistors, wherein the gate structure inside the trench is split into two segments. Trench-based transistors include field-effect transistors (FETs) such as power MOSFETs. Transistors formed using trenches may include gate electrodes that are buried in a trench etched in the silicon. This may result in a vertical channel. In many such FETs, the current may flow from the front side of the semiconductor die to the back side of the semiconductor die. Transistors formed using trenches may be considered vertical transistors, as opposed to lateral devices.

Trench FET devices may allow better density through use of the trench feature. However, trench FET devices may suffer from packaging issues when used in modules and devices. Furthermore, a thin back grind is typically required to use such trench devices.

Figure 1 illustrates a known integrated circuit (IC) structure 10 including a number of trench-based semiconductor devices, more specifically, trench FETs. The example IC structure 10 includes a highly-doped bulk silicon substrate 12, a lightly-doped epitaxy (EPI) layer 14 formed over bulk substrate 12, and a transition region 16 between EPI layer 14 and bulk substrate 12. Transition region 16 may define a transition from the more lightly doped EPI layer 14 to the more heavily doped bulk substrate region 12.
The more lightly doped region may be light enough to survive a breakdown field. The resistance of this region may have consequences for operation of the FET because this area is typically not a pure metal.

Doped source regions 20 may be formed in a top portion of EPI layer 14, and poly gates 30 may be deposited in trenches formed in EPI layer 14. An oxide or insulation layer 26 may be formed over the EPI layer 14, and source contacts 22 and gate contacts (not shown) may be formed on the top or front side of the wafer to connect the source regions 20 and poly gates 30 to conductive elements at the top or front side of the wafer, e.g., an overlying metal layer 24 connected to source contacts 22 and/or front-side gate contacts (not shown). Drain contacts may be located on the bottom or back side of the wafer, as indicated in Figure 1, to define a number of vertical trench FETs. This type of vertical FET may offer better density when compared with lateral FETs. A thin back grind may be used to reduce parasitic resistance.

Figure 2 illustrates the performance of the epitaxy region 14, transition region 16, and bulk substrate 12 in terms of carrier concentration versus depth. The left, flat portion of the curve represents electrical performance in the EPI layer 14, the rising part of the curve represents electrical performance in the transition region 16, and the right, flat portion of the curve represents electrical performance in the bulk region 12. In some structures, the bulk region 12 may be 50 to 150 microns thick, and the transition region 16 may be approximately one micron thick. For a typical 25 volt FET, the die area might be about 7 mm^2, and generate a total of 0.5 mohm, including resistance of 0.29 mohm for the back grind and 0.2 mohm for the transition.

SUMMARY

Embodiments of the present disclosure provide semiconductor devices having front-side source and drain contacts.
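The back-grind and transition resistances quoted above follow from the textbook vertical-slab relation R = rho * t / A. The sketch below is illustrative only; the resistivity, thickness, and die-area values are assumptions chosen to land near the quoted figures, not numbers taken from this disclosure:

```python
def vertical_resistance_mohm(resistivity_ohm_cm, thickness_um, area_mm2):
    """Vertical resistance of a uniformly doped slab, R = rho * t / A,
    returned in milliohms."""
    t_cm = thickness_um * 1e-4        # 1 um = 1e-4 cm
    area_cm2 = area_mm2 * 1e-2        # 1 mm^2 = 1e-2 cm^2
    return (resistivity_ohm_cm * t_cm / area_cm2) * 1e3

# Assumed values: ~3e-3 ohm-cm heavily doped bulk, ~70 um remaining after
# back grind, 7 mm^2 die -> roughly 0.3 mohm, the same order as the
# 0.29 mohm back-grind contribution quoted above.
r_bulk_mohm = vertical_resistance_mohm(3e-3, 70, 7)
```

Thinning the wafer further or raising the bulk doping reduces this term, which is why known trench FETs rely on a thin back grind; a front-side drain contact removes the bulk from the current path entirely.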
Some embodiments provide trench field-effect transistors (FETs) having front-side drain contacts, and may include a drift region defined in an epitaxy (EPI) region and not passing through an underlying bulk substrate or transition region, if present. Some embodiments include an integrated circuit (e.g., microchip) including one or more such FETs having front-side drain contacts, which may allow for flip-chip style mounting/packaging of the integrated circuit (e.g., microchip).

In some embodiments, the front-side drain contact may be formed in a trench formed within or through a poly gate trench formed in the EPI layer. The depth of the drain contact trench, and thus the drain contact formed in such trench, may be selectively set, and the concentration of doping associated with the trench or adjacent structures may be selected, to provide a desired breakdown voltage of the resulting FET. In addition, in some embodiments, the device might eliminate a transition area of epitaxy (EPI) doped silicon present in existing trench FETs. The elimination of such a transition area may remove resistance associated with the transition area.

One embodiment provides an apparatus including a plurality of semiconductor devices, wherein each semiconductor device includes an epitaxy layer, a doped source region formed in the epitaxy layer, a front-side source contact coupled to the doped source region, a trench formed in the epitaxy layer, a front-side drain contact extending into the trench formed in the epitaxy layer, and a poly gate formed in the epitaxy layer, wherein a drift region is defined between the poly gate and the front-side drain contact.

In one embodiment, each semiconductor device comprises a trench field-effect transistor (FET).

In one embodiment, the device further includes a front-side gate contact.
In one embodiment, a depth of the drain contact defines a breakdown voltage of the semiconductor device.

In one embodiment, the drain contact is located above a bulk substrate region of the device.

In one embodiment, the drain contact does not extend into the bulk substrate region.

In one embodiment, the drain contact is located above a transition region between the epitaxy layer and a bulk substrate region.

In one embodiment, the drain contact does not extend into the transition region between the epitaxy layer and the bulk substrate region.

In one embodiment, the epitaxy layer is coupled directly to a bulk substrate region, with no transition region between the epitaxy layer and bulk substrate region.

In one embodiment, the semiconductor device defines a current path from the front-side source contact to the front-side drain contact without passing through a transition layer or a bulk substrate.

In one embodiment, the semiconductor device defines a current path from the source region to the drain contact, wherein the current path is fully contained in the epitaxy layer.

In one embodiment, the drain contact is isolated from the poly gate by an oxide layer.

Another embodiment provides an apparatus including at least one field-effect transistor (FET), wherein each FET includes a substrate, an epitaxy region over the substrate, a source formed in the epitaxy region, a poly gate formed in the epitaxy region, a drain contact formed in the epitaxy region, and a current path from the source to the drain contact, wherein the current path is located in the epitaxy region and does not pass through the substrate.

In one embodiment, the apparatus includes a transition region between the epitaxy region and the substrate, wherein the current path does not pass through the transition region.

In one embodiment, the apparatus includes a front-side source contact coupled to the source, and wherein the drain contact is a front-side drain contact.

In one embodiment, the drain contact is
isolated from the poly gate by an oxide layer.

In one embodiment, the source extends into the epitaxy region by a first distance, the poly gate extends into the epitaxy region by a second distance greater than the first distance, and the drain contact extends into the epitaxy region by a third distance greater than the second distance.

Another embodiment provides a method of forming a semiconductor device. The method may include forming an epitaxy (EPI) region, forming a poly gate trench in the epitaxy region, forming a drain contact trench through the poly gate trench and extending to a further depth in the epitaxy region than the poly gate trench, forming a poly gate in the poly gate trench, forming a front-side drain contact in the drain contact trench, wherein the front-side drain contact is contained in the epitaxy region, and forming a source region in the epitaxy region adjacent the poly gate, wherein a drift region is defined from an intersection of the poly gate and source region to the front-side drain contact.

In one embodiment, the front-side drain contact in the drain contact trench is isolated from each of the at least one poly gate by a respective insulating spacer.

In one embodiment, the drift region is fully contained in the epitaxy layer.
In one embodiment, the method includes forming a bulk substrate, and forming the epitaxy region over the bulk substrate, wherein the drift region does not extend into the bulk substrate.

In one embodiment, the method includes forming the epitaxy region directly on the bulk substrate such that the epitaxy region is directly coupled to the bulk substrate.

In one embodiment, the method includes forming a bulk substrate, and forming or defining a transition region between the epitaxy region and the bulk substrate, wherein the drift region does not extend into the transition region.

In one embodiment, the method includes forming a pair of poly gates in the poly gate trench, and forming the front-side drain contact in the drain contact trench such that the front-side drain contact extends between the pair of poly gates in the poly gate trench.

In one embodiment, the method includes forming a respective insulating spacer between the front-side drain contact and each of the pair of poly gates.

In one embodiment, the semiconductor device comprises a trench field-effect transistor (FET).

Another embodiment provides a method of forming a trench field-effect transistor (FET). The method may include forming an epitaxy region, forming a source region in the epitaxy region, forming a front-side source contact coupled to the source region, forming a poly gate in the epitaxy region, and forming a front-side drain contact in the epitaxy region, wherein a current path is defined from the source region to the drain contact, and wherein the current path is located in the epitaxy region.

In one embodiment, the method includes forming a bulk substrate, and forming the epitaxy region over the bulk substrate, wherein the current path does not pass through the bulk substrate.
In one embodiment, the method includes forming a bulk substrate, and forming or defining a transition region between the epitaxy region and the bulk substrate, wherein the current path does not pass through the transition region.

In one embodiment, the source region extends into the epitaxy region by a first distance, the poly gate extends into the epitaxy region by a second distance greater than the first distance, and the drain contact extends into the epitaxy region by a third distance greater than the second distance.

In one embodiment, the method includes forming the poly gate in a poly gate trench, wherein the front-side drain contact extends through the poly gate trench, and wherein the front-side drain contact is isolated from the poly gate.

In one embodiment, the method includes forming a pair of poly gates in a poly gate trench, wherein the front-side drain contact extends between the pair of poly gates, and wherein the front-side drain contact is isolated from each poly gate by a respective insulation structure.

BRIEF DESCRIPTION OF THE FIGURES

Example aspects and embodiments are discussed below with reference to the drawings, in which:

Figure 1 illustrates a known integrated circuit (IC) structure including a number of trench-based semiconductor devices, more specifically, trench FETs;

Figure 2 illustrates the performance, in particular the carrier concentration versus depth, of the epitaxy region, transition region, and bulk substrate of the known IC structure of Figure 1;

Figure 3 illustrates an example integrated circuit (IC) structure including a number of trench-based semiconductor devices, in particular trench FETs, having front-side source and front-side drain contacts, according to one example embodiment; and

Figures 4A-4Q illustrate an example method of forming an IC structure including at least one trench FET having a front-side drain contact, e.g., the example IC structure shown in Figure 3, according to one example embodiment.

DETAILED DESCRIPTION

Some embodiments of the present disclosure provide a semiconductor device such as a transistor, e.g., a FET, that includes a front-side (or top-of-the-wafer) drain contact formed in an isolated trench adjacent respective poly gate(s). Thus, such a semiconductor device can be created using flip-chip style packaging. Further, the depth of the drain contact trench may be variably set and the concentration of doping associated with the trench may be varied, e.g., to provide a desired breakdown voltage for each respective device. In addition, some embodiments may eliminate a transition area of epitaxy (EPI) doped silicon, which may remove or reduce resistance. Some embodiments provide an electrical device or apparatus that includes any number of such semiconductor devices, e.g., trench FETs, according to the present disclosure.

Figure 3 illustrates an example integrated circuit (IC) structure 100 including a number of semiconductor devices 105, in particular trench FETs 105, having front-side source and front-side drain contacts, according to one example embodiment. Example IC structure 100 may include a bulk substrate 112, an epitaxy (EPI) layer 114 formed over substrate 112, and a transition region 116 between EPI layer 114 and substrate 112. Substrate 112 may be a highly-doped (e.g., concentration of about 3 x 10^19/cm^3) bulk silicon substrate, EPI layer 114 may be a lightly-doped (e.g., concentration of about 3 x 10^16/cm^3) epitaxy layer, e.g., silicon epitaxy, grown or deposited over substrate 112, and transition region 116 may define a transition from the lightly-doped EPI layer 114 to the more heavily doped bulk substrate region 112.
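The roughly thousand-fold doping contrast between substrate 112 and EPI layer 114 translates into a large resistivity contrast via the textbook relation rho = 1 / (q * n * mu). The mobility values in the sketch below are assumed round numbers (mobility in fact degrades with doping), not values from this disclosure:

```python
Q = 1.602e-19  # elementary charge, coulombs

def resistivity_ohm_cm(doping_per_cm3, mobility_cm2_per_vs):
    """Approximate resistivity of a doped region: rho = 1 / (q * n * mu)."""
    return 1.0 / (Q * doping_per_cm3 * mobility_cm2_per_vs)

# Lightly doped EPI (~3e16/cm^3, assumed mu ~ 1000 cm^2/V-s): ~0.2 ohm-cm.
rho_epi = resistivity_ohm_cm(3e16, 1000)
# Heavily doped bulk (~3e19/cm^3, assumed mu ~ 100 cm^2/V-s): ~0.002 ohm-cm.
rho_bulk = resistivity_ohm_cm(3e19, 100)
```

The lightly doped EPI must stay resistive enough to support the breakdown field, which is why confining the drift region to EPI layer 114, rather than routing current through the transition region and bulk, is the trade-off this structure exploits.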
Other embodiments may exclude transition region 116, such that the EPI layer is formed directly on bulk substrate 112 (which may be formed as a lightly-doped region), or may alternatively exclude both transition region 116 and bulk substrate 112.

A number of doped source regions 120 may be formed in a top portion of EPI layer 114, and poly gates 130A, 130B may be formed in trenches formed in EPI layer 114. However, in contrast with the known IC structure 10 shown in Figure 1, IC structure 100 includes a number of drain contacts 140 extending down into the poly gate trenches and up to the top or front side of the wafer, to define front-side drain contacts 140, as opposed to the back-side drain contacts used in known device 10. As shown in Figure 3, each front-side drain contact 140 may extend into a drain trench 152 formed in the poly gate trench, indicated at 150. In the illustrated embodiment, each front-side drain contact 140 essentially "splits" the poly gate of the known structure (e.g., poly gate 30 shown in Figure 1) to define a pair of poly gates 130A, 130B in each poly gate trench 150. Thus, drain contacts 140 may be referred to as "split trench" front-side drain contacts, and the FET 105 corresponding to each drain contact 140 may be referred to as a "split trench FET." Each drain contact 140 may be electrically isolated from poly gates 130A and 130B by insulator regions 144, e.g., oxide regions.

As shown in Figure 3, each drain contact 140 may be formed (e.g., by forming a drain trench 152 within poly gate trench 150) to extend to a greater depth than the adjacent poly gate(s) 130A, 130B, to thereby define a drift field or drift region extending from the gate-source junction, defined between poly gate 130A or 130B and an adjacent source 120, to the bottom of front-side drain contact 140 exposed to EPI layer 114, as indicated by the label "Drift" in Figure 3. In some embodiments, this drift region may be completely contained within the EPI region 114.
Thus, in some embodiments, the drift region of each FET 105 does not extend into bulk substrate region 112, and may also not extend into transition region 116 (in embodiments that include a transition region).

As used herein, a "trench" may refer to an opening having any cross-sectional shape and any shape from a top-down view. For example, with reference to the various trenches shown in Figures 3 and 4, each trench may have (a) an elongated shape extending in a direction into the page (i.e., perpendicular to the cross-sections shown in Figures 3 and 4), to define a linear or otherwise elongated trench shape in a cross-section taken from a top-down view, or (b) a generally circular or square cross-section taken from a top-down view (i.e., perpendicular to the cross-sections shown in Figures 3 and 4), to define generally circular or square-shaped localized holes in the epitaxy layer, or (c) any other suitable shape in the cross-sections shown in Figures 3 and 4 or in cross-sections perpendicular to the illustrated cross-sections (e.g., from a top-down view).

An insulation layer 126, e.g., an oxide layer, may be formed over the EPI layer 114. Front-side source contacts 122, coupled to source regions 120, and front-side drain contacts 140 may extend vertically through insulation layer 126. Front-side source contacts 122 may be coupled to front-side source conductors 124, e.g., a source metal layer (e.g., aluminum or copper), and front-side drain contacts 140 may be coupled to front-side drain conductors 142, e.g., a drain metal layer (e.g., aluminum or copper). Front-side source contacts 122, front-side drain contacts 140, front-side source conductors 124, and front-side drain conductors 142 may be formed from any suitable metal or other conductive material. In one embodiment, front-side source contacts 122 and front-side drain contacts 140 comprise tungsten (W), and front-side source conductors 124 and front-side drain conductors 142 comprise copper (Cu).
Top or front-side gate contact(s) (not shown) may also be provided according to known techniques and structures.

The depth of drain contact 140, indicated as Ddrain, may set a drift length. A breakdown voltage (BVD) for each FET 105 may be defined based on the doping concentration of EPI region 114 and the drain contact depth Ddrain relative to the depth of EPI region 114 and/or the poly gate depth Dpoly. Thus, the depth of drain contact 140 for each respective FET 105 may be set to provide a desired BVD for the respective FET 105. Thus, in some embodiments, a contiguous semiconductor structure including multiple FETs sharing a common substrate and/or EPI layer may include multiple drain contacts with different depths. For example, the example semiconductor structure 100 includes multiple FETs 105 sharing a common bulk substrate 112 and EPI layer 114, with drain contacts 140 having different depths that provide different breakdown voltages.

As noted above, the FET drift region for each FET 105 may be completely contained within the EPI region 114. By eliminating the current passing through the transition region and/or bulk region to a back-side drain (as in the known device shown in Figure 1), resistance in such regions may be avoided. Thus, in some embodiments, the transition region 116 and/or the bulk region 112 may be eliminated altogether. In other embodiments, depending upon the desired voltage, the transition region may be eliminated and a lightly-doped bulk region maintained. Thus, an additional drain contact may be added to the front side of the wafer. The current may flow from the gate-source junction, within the EPI layer, to the drain contact. The result may be that parasitic resistance is eliminated. Flip-chip packaging might be used.
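For illustration only, the qualitative relationships described above (drift length set by the drain contact depth relative to the poly gate depth, and breakdown voltage set by that depth together with the EPI doping) can be sketched with standard first-order silicon device formulas. The function names, the one-sided abrupt-junction approximation, and the material constants below are textbook assumptions, not values taken from this disclosure:

```python
# First-order estimates for a silicon vertical FET drift region.
# These are textbook approximations, NOT the disclosure's design
# equations; names and constants are illustrative.

EPS_SI = 1.04e-12   # permittivity of silicon, F/cm
Q = 1.602e-19       # elementary charge, C
E_CRIT = 3.0e5      # approximate critical field of silicon, V/cm

def drift_length_um(d_drain_um, d_poly_um):
    """Drift length as drain-trench depth minus poly-gate depth (microns)."""
    return d_drain_um - d_poly_um

def breakdown_voltage_estimate(n_epi_cm3):
    """One-sided abrupt-junction avalanche limit: BV ~ eps*Ec^2 / (2*q*Nd)."""
    return EPS_SI * E_CRIT ** 2 / (2 * Q * n_epi_cm3)

# Example depths and doping quoted elsewhere in this description:
# ~0.6 um poly gate trench, ~1.4 um drain trench, ~3e16 /cm^3 EPI doping.
print(drift_length_um(1.4, 0.6))                  # ~0.8 um drift length
print(breakdown_voltage_estimate(3e16))           # on the order of 10 V
```

A real device of this kind would be designed with TCAD simulation rather than such a closed-form estimate, but the sketch captures the stated trend: a deeper drain trench (longer drift region) and lighter EPI doping both push the breakdown voltage up.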
This design may provide substantially better density than lateral FET devices.

Figures 4A-4Q illustrate an example method of forming a semiconductor device including one or more trench FETs having front-side source contacts and front-side drain contacts, e.g., "split trench" FETs 105 shown in Figure 3, according to one example embodiment.

As shown in Figure 4A, an epitaxy layer (EPI) 200 may be formed over one or more base layers 202, e.g., a bulk silicon substrate and/or a transition layer, e.g., as discussed above regarding the embodiment of Figure 3. Other embodiments may exclude base layers 202. A screen oxide layer 210 may be formed (e.g., grown) on top of EPI layer 200, and a nitride layer 212 may be deposited over oxide layer 210. A hard mask oxide layer 214 may then be deposited over the nitride layer 212.

As shown in Figure 4B, a mask 220 (e.g., photoresist) may be formed with a trench 222. As shown in Figure 4C, at least one etch may be performed through trench 222 to remove portions of mask oxide layer 214, nitride layer 212, and oxide layer 210 in the trench 222, to thereby expose a top surface of EPI 200 in the trench.

As shown in Figure 4D, photomask 220 may be removed (e.g., stripped), and an oxide-selective etch may be performed to etch a poly gate trench 224 in the EPI layer 200, to a depth indicated as Dpolytrench. For example, poly gate trench 224 may be etched to a depth Dpolytrench of between 0.3 microns and 1.0 micron, e.g., about 0.6 microns.

As shown in Figure 4E, a spacer oxide layer 230 may be deposited over the structure and extending into poly gate trench 224. As shown below, the thickness of spacer oxide layer 230 may subsequently define the thickness of poly gates 262 of the resulting device. The lower the thickness of spacer oxide layer 230 (which defines the poly gate thickness), the lower the parasitic capacitance of the resulting device.
In some embodiments, the spacer oxide layer 230 thickness may be between 1000 Å and 3000 Å.

As shown in Figure 4F, a vertical spacer etch may be performed to remove portions of spacer oxide layer 230 outside poly gate trench 224 and at the bottom of poly gate trench 224, to thereby define a pair of oxide spacers 232 on the sidewalls of trench 224.

As shown in Figure 4G, an oxide-selective trench etch may be performed to form a drain contact trench 240 in EPI layer 200, to a depth indicated as Ddrain trench. For example, drain contact trench 240 may be etched to a depth Ddrain trench of between 1.0 micron and 2.0 microns, e.g., about 1.4 microns. As discussed above, the depth Ddrain trench may be selected, along with doping concentrations in the device (e.g., the doping concentration of EPI 200), to define a desired breakdown voltage of the resulting device, e.g., FET. In general, the deeper the Ddrain trench etch, the higher the breakdown voltage of the resulting device.

As shown in Figure 4H, a layer of silicon-rich oxide (SRO) 244 may be deposited to fill drain contact trench 240. As shown in Figure 4I, a chemical mechanical planarization (CMP) process may be performed down to the nitride layer 212.

As shown in Figure 4J, an etch may be performed to remove the remaining portions of oxide spacers 232 in trench 224. In one embodiment, the etch may comprise an oxide etch selective to SRO 244, which etches oxide spacers 232 faster than SRO 244 in trench 240.

As shown in Figure 4K, nitride layer 212 may be removed, e.g., by performing a wet etch.

As shown in Figure 4L, a thermal oxide (Tox) layer 250 may be grown on all exposed silicon surfaces. In some embodiments, Tox layer 250 may be grown with a thickness of between 100 Å and 500 Å, e.g., about 250 Å.
The thickness of Tox layer 250 may be selected for the respective gate drive requirements of the resulting device.

Each of Figures 4M through 4Q shows two selected regions of the example semiconductor structure; specifically, the left side of each figure shows an example interior region of the structure, while the right side of each figure shows an example lateral edge region of the structure.

As shown in Figure 4M, a poly layer 254 may be deposited over the structure. In some embodiments, poly layer 254 may have a thickness of between 1000 Å and 3000 Å, e.g., about 2000 Å. The thickness of poly layer 254 may depend on the poly gate thickness as defined by the thickness of the previously deposited spacer oxide layer 230. Poly layer 254 may be doped, e.g., using a phosphorus oxychloride (POCl3) doping, e.g., an n-type furnace doping process. As shown in the right side of Figure 4M, a photoresist 260 may be formed over an edge of the structure, e.g., extending partially over a drain contact trench 240 near the edge of the structure.

As shown in Figure 4N, a poly etch may be performed to remove portions of poly layer 254, to thereby define poly gates 262 and a poly gate with a lateral gate contact 262A at the lateral edge of the structure. The photoresist 260 over lateral gate contact 262A may be removed, e.g., stripped.

As shown in Figure 4O, a pre-metal dielectric (PMD) oxide 270 may be deposited, and a CMP performed.

As shown in Figure 4P, a mask layer 274 may be deposited and patterned to form (a) a drain contact trench 266A aligned with drain contact trench 240 and extending through the middle of SRO 244 within trench 240, to define a pair of SRO spacers 280A and 280B on opposing sides of drain contact trench 266A, (b) source contact trenches 266B on either side of drain contact trench 240, and (c) a gate contact trench 266C over gate contact 262A.

As shown in Figure 4Q, the trenches formed in Figure 4P may be filled with conductive material, e.g., tungsten.
Drain contact trench 266A may be filled to form a front-side drain contact 286 between SRO spacers 280A and 280B, source contact trenches 266B may be filled to form front-side source contacts 284 coupled to underlying doped source regions (not shown) in EPI layer 200, and gate contact trench 266C may be filled to form a gate contact 288 coupled to gate contact 262A. From the point shown in Figure 4Q, known processes may be performed to form metal layers or other conductive contacts that connect to front-side drain contact 286 and front-side source contacts 284, as desired.
A method of forming a microelectronic device comprises forming a source material around substantially an entire periphery of a base material, and removing the source material from lateral sides of the base material while maintaining the source material over an upper surface and a lower surface of the base material. Related methods and base structures for microelectronic devices are also described.
CLAIMS

What is claimed is:

1. A method of forming a microelectronic device, the method comprising: forming a source material around substantially an entire periphery of a base material; and removing the source material from lateral sides of the base material while maintaining the source material over an upper surface and a lower surface of the base material.

2. The method of claim 1, further comprising forming an etch stop material over the base material prior to forming the source material around substantially the entire periphery of the base material.

3. The method of claim 2, further comprising: selecting the base material to comprise a semiconductive material; and selecting the etch stop material to comprise a dielectric material.

4. The method of claim 2, further comprising: selecting the base material to comprise silicon; and selecting the etch stop material to comprise silicon dioxide.

5. The method of claim 1, further comprising forming a protective material on lateral sides of remaining portions of the source material after removing the source material from the lateral sides of the base material.

6. The method of claim 1, further comprising selecting the base material to comprise one or more of monocrystalline silicon, polycrystalline silicon, silicon-germanium, germanium, gallium arsenide, gallium nitride, gallium phosphide, indium phosphide, indium gallium nitride, and aluminum gallium nitride.

7. The method of claim 1, further comprising selecting the source material to comprise doped polysilicon.

8. The method of claim 1, further comprising selecting the base material to comprise a ceramic material.

9. The method of claim 8, wherein selecting the base material to comprise a ceramic material comprises selecting the base material to comprise silicon on poly-aluminum nitride.

10. The method of claim 1, further comprising selecting the base material to comprise a glass material.

11.
The method of claim 10, wherein selecting the base material to comprise a glass material comprises selecting the base material to comprise one or more of borosilicate glass, phosphosilicate glass, fluorosilicate glass, borophosphosilicate glass, aluminosilicate glass, an alkaline earth boro-aluminosilicate glass, quartz, titania silicate glass, and soda-lime glass.

12. The method of any one of claims 1 through 11, further comprising: forming a stack structure comprising a vertically alternating series of conductive structures and insulative structures over the source material; forming vertically extending strings of memory cells within the stack structure to form a first microelectronic device structure; attaching the first microelectronic device structure to a second microelectronic device structure comprising control logic circuitry to form a microelectronic device structure assembly; removing the base material after forming the microelectronic device structure assembly; and forming circuitry in electrical communication with the source material after removing the base material.

13. The method of claim 12, wherein removing the base material comprises one or more of grinding and wet etching the base material.

14. A method of forming a microelectronic device, the method comprising: forming a doped semiconductive material over a base material; forming an insulative material over the doped semiconductive material; forming openings in the insulative material and exposing the doped semiconductive material through the openings; and epitaxially growing additional semiconductive material from the doped semiconductive material to fill the openings and cover the insulative material.

15. The method of claim 14, wherein forming a doped semiconductive material over a base material comprises forming the doped semiconductive material to comprise a semiconductive material of the base material and one or more dopants dispersed within the semiconductive material.

16.
The method of claim 14 or claim 15, further comprising: forming a stack structure comprising a vertically alternating series of conductive structures and insulative structures over the additional semiconductive material; forming vertically extending strings of memory cells within the stack structure to form a first microelectronic device structure; coupling the first microelectronic device structure to a second microelectronic device structure comprising control logic circuitry to form a microelectronic device structure assembly; and removing the base material after forming the microelectronic device structure assembly.

17. The method of claim 16, wherein removing the base material comprises removing the base material without substantially removing the doped semiconductive material.

18. The method of claim 16, wherein removing the base material comprises forming trenches in the base material along a {100} plane or a {110} plane of the base material.

19. The method of claim 16, further comprising forming a source structure over the additional semiconductive material after removing the base material.

20. A base structure for a microelectronic device, comprising: a base material comprising one or more of a semiconductive material, a ceramic material, and a glass material; and a doped semiconductive material overlying an upper surface of the base material and underlying a lower surface of the base material, side surfaces of the base material interposed between the upper surface and the lower surface of the base material substantially free of the doped semiconductive material.

21. The base structure of claim 20, further comprising a dielectric material interposed between the upper surface of the base material and the doped semiconductive material.

22. The base structure of claim 21, wherein the doped semiconductive material is positioned directly adjacent the lower surface of the base material.

23.
The base structure of claim 21, further comprising additional dielectric material directly adjacent side surfaces of the doped semiconductive material, an uppermost surface of the doped semiconductive material substantially free of the additional dielectric material.

24. The base structure of any one of claims 20 through 23, wherein the base material comprises a substantially undoped semiconductive material.

25. The base structure of any one of claims 20 through 23, wherein the base material comprises one or more of borosilicate glass, phosphosilicate glass, fluorosilicate glass, borophosphosilicate glass, aluminosilicate glass, an alkaline earth boro-aluminosilicate glass, quartz, titania silicate glass, and soda-lime glass.

26. The base structure of any one of claims 20 through 23, wherein the base material comprises one or more of poly-aluminum nitride, silicon on poly-aluminum nitride, aluminum nitride, aluminum oxide, and silicon carbide.

27. A base structure for a microelectronic device, comprising: a base material comprising one or more of semiconductive material, ceramic material, and glass material; a doped semiconductive material on the base material; a dielectric material on the doped semiconductive material; filled openings extending through the dielectric material to the doped semiconductive material; and an epitaxial semiconductive material substantially filling the filled openings and covering surfaces of the dielectric material outside of the filled openings.

28. The base structure of claim 27, wherein: the base material comprises silicon; the doped semiconductive material comprises conductively doped silicon; the dielectric material comprises silicon oxide; and the epitaxial semiconductive material comprises epitaxial silicon.

29. The base structure of claim 27, wherein the base material comprises the ceramic material or the glass material.

30.
A base structure for a microelectronic device, the base structure comprising: a base material comprising one or more of a semiconductive material, a ceramic material, and a glass material; doped polysilicon on a first side of the base material and on a second, opposite side of the base material; and a dielectric material adjacent side surfaces of the doped polysilicon on one of the first side and the second, opposite side of the base material.

31. The base structure of claim 30, wherein a thickness of the doped polysilicon on the first side of the base material is substantially the same as a thickness of the doped polysilicon on the second, opposite side of the base material.
METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED BASE STRUCTURES FOR MICROELECTRONIC DEVICES

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of the filing date of United States Patent Application Serial No. 16/905,734, filed June 18, 2020, which is related to U.S. Patent Application Serial No. 16/905,385, filed June 18, 2020, listing Kunal R. Parekh as inventor, for “MICROELECTRONIC DEVICES, AND RELATED METHODS, MEMORY DEVICES, AND ELECTRONIC SYSTEMS.” This application is also related to U.S. Patent Application Serial No. 16/905,452, filed June 18, 2020, listing Kunal R. Parekh as inventor, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES, MEMORY DEVICES, ELECTRONIC SYSTEMS, AND ADDITIONAL METHODS.” This application is also related to U.S. Patent Application Serial No. 16/905,698, filed June 18, 2020, listing Kunal R. Parekh as inventor, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS.” This application is also related to U.S. Patent Application Serial No. 16/905,747, filed June 18, 2020, listing Kunal R. Parekh as inventor, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS.” This application is also related to U.S. Patent Application Serial No. 16/905,763, filed June 18, 2020, listing Kunal R. Parekh as inventor, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES AND ELECTRONIC SYSTEMS.” The disclosure of each of the foregoing documents is hereby incorporated herein in its entirety by this reference.

TECHNICAL FIELD

The disclosure, in various embodiments, relates generally to the field of microelectronic device design and fabrication. More specifically, the disclosure relates to methods of forming base structures for microelectronic devices, methods of forming microelectronic devices, and to related base structures for microelectronic devices.
BACKGROUND

Microelectronic device designers often desire to increase the level of integration or density of features within a microelectronic device by reducing the dimensions of the individual features and by reducing the separation distance between neighboring features. In addition, microelectronic device designers often desire to design architectures that are not only compact, but offer performance advantages, as well as simplified designs.

One example of a microelectronic device is a memory device. Memory devices are generally provided as internal integrated circuits in computers or other electronic devices. There are many types of memory devices including, but not limited to, non-volatile memory devices (e.g., NAND Flash memory devices). One way of increasing memory density in non-volatile memory devices is to utilize vertical memory array (also referred to as a “three-dimensional (3D) memory array”) architectures. A conventional vertical memory array includes vertical memory strings extending through openings in one or more decks (e.g., stack structures) including tiers of conductive structures and dielectric materials. Each vertical memory string may include at least one select device coupled in series to a serial combination of vertically stacked memory cells. Such a configuration permits a greater number of switching devices (e.g., transistors) to be located in a unit of die area (i.e., length and width of active surface consumed) by building the array upwards (e.g., vertically) on a die, as compared to structures with conventional planar (e.g., two-dimensional) arrangements of transistors.

Control logic devices within a base control logic structure underlying a memory array of a memory device (e.g., a non-volatile memory device) have been used to control operations (e.g., access operations, read operations, write operations) of the memory cells of the memory device.
An assembly of the control logic devices may be provided in electrical communication with the memory cells of the memory array by way of routing and interconnect structures. However, processing conditions (e.g., temperatures, pressures, materials) for the formation of the memory array over the base control logic structure can limit the configurations and performance of the control logic devices within the base control logic structure. In addition, the quantities, dimensions, and arrangements of the different control logic devices employed within the base control logic structure can also undesirably impede reductions to the size (e.g., horizontal footprint) of the memory device, and/or improvements in the performance (e.g., faster memory cell ON/OFF speed, lower threshold switching voltage requirements, faster data transfer rates, lower power consumption) of the memory device. Further, as the density and complexity of the memory array have increased, so has the complexity of the control logic devices.
The increased density of the memory array increases the difficulty of forming conductive contacts between components of the memory array and components of the control logic devices.

DISCLOSURE

In some embodiments, a method of forming a microelectronic device comprises forming a source material around substantially an entire periphery of a base material, and removing the source material from lateral sides of the base material while maintaining the source material over an upper surface and a lower surface of the base material.

In other embodiments, a method of forming a microelectronic device comprises forming a doped semiconductive material over a base material, forming an insulative material over the doped semiconductive material, forming openings in the insulative material and exposing the doped semiconductive material through the openings, and epitaxially growing additional semiconductive material from the doped semiconductive material to fill the openings and cover the insulative material.

In yet other embodiments, a base structure for a microelectronic device comprises a base material comprising one or more of a semiconductive material, a ceramic material, and a glass material, and a doped semiconductive material overlying an upper surface of the base material and underlying a lower surface of the base material, side surfaces of the base material interposed between the upper surface and the lower surface of the base material substantially free of the doped semiconductive material.

In further embodiments, a base structure for a microelectronic device comprises a base material comprising one or more of semiconductive material, ceramic material, and glass material, a doped semiconductive material on the base material, a dielectric material on the doped semiconductive material, filled openings extending through the dielectric material to the doped semiconductive material, and an epitaxial semiconductive material substantially filling the filled openings and covering
surfaces of the dielectric material outside of the filled openings.

In additional embodiments, a base structure for a microelectronic device comprises a base material comprising one or more of a semiconductive material, a ceramic material, and a glass material, doped polysilicon on a first side of the base material and on a second, opposite side of the base material, and a dielectric material adjacent side surfaces of the doped polysilicon on one of the first side and the second, opposite side of the base material.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A through FIG. 1D are simplified cross-sectional views illustrating a method of forming a base structure, in accordance with embodiments of the disclosure;

FIG. 2A through FIG. 2C are simplified cross-sectional views illustrating a method of forming a base structure, in accordance with other embodiments of the disclosure;

FIG. 3A through FIG. 3C are simplified cross-sectional views illustrating a method of forming a base structure, in accordance with additional embodiments of the disclosure;

FIG. 4A through FIG. 4D are simplified cross-sectional views illustrating a method of forming a microelectronic device structure assembly, in accordance with embodiments of the disclosure;

FIG. 5A through FIG. 5C are simplified cross-sectional views illustrating a method of forming a microelectronic device structure assembly, in accordance with other embodiments of the disclosure;

FIG. 6A and FIG. 6B are simplified cross-sectional views illustrating a method of forming a microelectronic device structure assembly, in accordance with additional embodiments of the disclosure;

FIG. 7 is a block diagram of an electronic system, in accordance with embodiments of the disclosure; and

FIG.
8 is a block diagram of a processor-based system, in accordance with embodiments of the disclosure.

MODE(S) FOR CARRYING OUT THE INVENTION

The illustrations included herewith are not meant to be actual views of any particular systems, microelectronic structures, microelectronic devices, or integrated circuits thereof, but are merely idealized representations that are employed to describe embodiments herein. Elements and features common between figures may retain the same numerical designation except that, for ease of following the description, reference numerals begin with the number of the drawing on which the elements are introduced or most fully described. The following description provides specific details, such as material types, material thicknesses, and processing conditions, in order to provide a thorough description of embodiments described herein. However, a person of ordinary skill in the art will understand that the embodiments disclosed herein may be practiced without employing these specific details. Indeed, the embodiments may be practiced in conjunction with conventional fabrication techniques employed in the semiconductor industry. In addition, the description provided herein does not form a complete process flow for manufacturing a microelectronic device (e.g., a semiconductor device; a memory device, such as a NAND Flash memory device), apparatus, or electronic system, or a complete microelectronic device, apparatus, or electronic system. The structures described below do not form a complete microelectronic device, apparatus, or electronic system. Only those process acts and structures necessary to understand the embodiments described herein are described in detail below.
Additional acts to form a complete microelectronic device, apparatus, or electronic system from the structures may be performed by conventional techniques.The materials described herein may be formed by conventional techniques including, but not limited to, spin coating, blanket coating, chemical vapor deposition (CVD), atomic layer deposition (ALD), plasma enhanced ALD, physical vapor deposition (PVD), plasma enhanced chemical vapor deposition (PECVD), or low pressure chemical vapor deposition (LPCVD). Alternatively, the materials may be grown in situ. Depending on the specific material to be formed, the technique for depositing or growing the material may be selected by a person of ordinary skill in the art. The removal of materials may be accomplished by any suitable technique including, but not limited to, etching, abrasive planarization (e.g., chemical-mechanical planarization), or other known methods unless the context indicates otherwise.As used herein, the term “configured” refers to a size, shape, material composition, orientation, and arrangement of one or more of at least one structure and at least one apparatus facilitating operation of one or more of the structure and the apparatus in a predetermined way.As used herein, the terms “longitudinal,” “vertical,” “lateral,” and “horizontal” are in reference to a major plane of a substrate (e.g., base material, base structure, base construction, etc.) in or on which one or more structures and/or features are formed and are not necessarily defined by Earth’s gravitational field. A “lateral” or “horizontal” direction is a direction that is substantially parallel to the major plane of the substrate, while a “longitudinal” or “vertical” direction is a direction that is substantially perpendicular to the major plane of the substrate. 
The major plane of the substrate is defined by a surface of the substrate having a relatively large area compared to other surfaces of the substrate.As used herein, the term “substantially” in reference to a given parameter, property, or condition means and includes to a degree that one of ordinary skill in the art would understand that the given parameter, property, or condition is met with a degree of variance, such as within acceptable tolerances. By way of example, depending on the particular parameter, property, or condition that is substantially met, the parameter, property, or condition may be at least 90.0 percent met, at least 95.0 percent met, at least 99.0 percent met, at least 99.9 percent met, or even 100.0 percent met.As used herein, “about” or “approximately” in reference to a numerical value for a particular parameter is inclusive of the numerical value and a degree of variance from the numerical value that one of ordinary skill in the art would understand is within acceptable tolerances for the particular parameter. For example, “about” or “approximately” in reference to a numerical value may include additional numerical values within a range of from 90.0 percent to 110.0 percent of the numerical value, such as within a range of from 95.0 percent to 105.0 percent of the numerical value, within a range of from 97.5 percent to 102.5 percent of the numerical value, within a range of from 99.0 percent to 101.0 percent of the numerical value, within a range of from 99.5 percent to 100.5 percent of the numerical value, or within a range of from 99.9 percent to 100.1 percent of the numerical value.As used herein, spatially relative terms, such as “beneath,” “below,” “lower,” “bottom,” “above,” “upper,” “top,” “front,” “rear,” “left,” “right,” and the like, may be used for ease of description to describe one element’s or feature’s relationship to another element(s) or feature(s) as illustrated in the figures. 
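The percentage bands above can be made concrete with a short numeric check. The helper below is purely illustrative (its name and interface are assumptions for illustration, not part of the disclosure); it tests whether a measured value falls within a stated tolerance band around a nominal value, e.g. the 90.0 percent to 110.0 percent band for "about":

```python
def within_tolerance(measured, nominal, percent):
    """Return True if `measured` lies within +/- `percent` of `nominal`.

    For example, percent=10.0 corresponds to the range of from
    90.0 percent to 110.0 percent of the nominal value described above.
    """
    low = nominal * (1.0 - percent / 100.0)
    high = nominal * (1.0 + percent / 100.0)
    return low <= measured <= high

# A 100 nm nominal thickness, "about" within 5 percent:
# any value from 95 nm to 105 nm qualifies.
```

For instance, within_tolerance(104.0, 100.0, 5.0) is True, while within_tolerance(89.0, 100.0, 10.0) is False.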
Unless otherwise specified, the spatially relative terms are intended to encompass different orientations of the materials in addition to the orientation depicted in the figures. For example, if materials in the figures are inverted, elements described as “below” or “beneath” or “under” or “on bottom of” other elements or features would then be oriented “above” or “on top of” the other elements or features. Thus, the term “below” can encompass both an orientation of above and below, depending on the context in which the term is used, which will be evident to one of ordinary skill in the art. The materials may be otherwise oriented (e.g., rotated 90 degrees, inverted, flipped, etc.) and the spatially relative descriptors used herein interpreted accordingly. As used herein, features (e.g., regions, materials, structures, devices) described as “neighboring” one another means and includes features of the disclosed identity (or identities) that are located most proximate (e.g., closest to) one another. Additional features (e.g., additional regions, additional materials, additional structures, additional devices) not matching the disclosed identity (or identities) of the “neighboring” features may be disposed between the “neighboring” features. Put another way, the “neighboring” features may be positioned directly adjacent one another, such that no other feature intervenes between the “neighboring” features; or the “neighboring” features may be positioned indirectly adjacent one another, such that at least one feature having an identity other than that associated with at least one of the “neighboring” features is positioned between the “neighboring” features. Accordingly, features described as “vertically neighboring” one another means and includes features of the disclosed identity (or identities) that are located most vertically proximate (e.g., vertically closest to) one another.
Moreover, features described as “horizontally neighboring” one another means and includes features of the disclosed identity (or identities) that are located most horizontally proximate (e.g., horizontally closest to) one another. As used herein, the term “memory device” means and includes microelectronic devices exhibiting memory functionality, but not necessarily limited to memory functionality. Stated another way, and by way of example only, the term “memory device” means and includes not only conventional memory (e.g., conventional volatile memory, such as conventional dynamic random access memory (DRAM); conventional non-volatile memory, such as conventional NAND memory), but also includes an application specific integrated circuit (ASIC) (e.g., a system on a chip (SoC)), a microelectronic device combining logic and memory, and a graphics processing unit (GPU) incorporating memory. As used herein, “conductive material” means and includes electrically conductive material such as one or more of a metal (e.g., tungsten (W), titanium (Ti), molybdenum (Mo), niobium (Nb), vanadium (V), hafnium (Hf), tantalum (Ta), chromium (Cr), zirconium (Zr), iron (Fe), ruthenium (Ru), osmium (Os), cobalt (Co), rhodium (Rh), iridium (Ir), nickel (Ni), palladium (Pd), platinum (Pt), copper (Cu), silver (Ag), gold (Au), aluminum (Al)), an alloy (e.g., a Co-based alloy, an Fe-based alloy, an Ni-based alloy, an Fe- and Ni-based alloy, a Co- and Ni-based alloy, an Fe- and Co-based alloy, a Co- and Ni- and Fe-based alloy, an Al-based alloy, a Cu-based alloy, a magnesium (Mg)-based alloy, a Ti-based alloy, a steel, a low-carbon steel, a stainless steel), a conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide), and a conductively-doped semiconductor material (e.g., conductively-doped polysilicon, conductively-doped germanium (Ge), conductively-doped silicon germanium (SiGe)).
In addition, a “conductive structure” means and includes a structure formed of and including a conductive material. As used herein, “insulative material” means and includes electrically insulative material, such as one or more of at least one dielectric oxide material (e.g., one or more of a silicon oxide (SiOx), phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, an aluminum oxide (AlOx), a hafnium oxide (HfOx), a niobium oxide (NbOx), a titanium oxide (TiOx), a zirconium oxide (ZrOx), a tantalum oxide (TaOx), and a magnesium oxide (MgOx)), at least one dielectric nitride material (e.g., a silicon nitride (SiNy)), at least one dielectric oxynitride material (e.g., a silicon oxynitride (SiOxNy)), and at least one dielectric carboxynitride material (e.g., a silicon carboxynitride (SiOxCzNy)). Formulae including one or more of “x,” “y,” and “z” herein (e.g., SiOx, AlOx, HfOx, NbOx, TiOx, SiNy, SiOxNy, SiOxCzNy) represent a material that contains an average ratio of “x” atoms of one element, “y” atoms of another element, and “z” atoms of an additional element (if any) for every one atom of another element (e.g., Si, Al, Hf, Nb, Ti). As the formulae are representative of relative atomic ratios and not strict chemical structure, an insulative material may comprise one or more stoichiometric compounds and/or one or more non-stoichiometric compounds, and values of “x,” “y,” and “z” (if any) may be integers or may be non-integers. As used herein, the term “non-stoichiometric compound” means and includes a chemical compound with an elemental composition that cannot be represented by a ratio of well-defined natural numbers and is in violation of the law of definite proportions.
In addition, an “insulative structure” means and includes a structure formed of and including an insulative material.According to embodiments described herein, a method of forming a microelectronic device comprises forming a first microelectronic device structure and attaching the first microelectronic device structure to a second microelectronic device structure. The first microelectronic device structure includes a base structure on which other components (e.g., a memory array, interconnects) are formed. After attaching the first microelectronic device structure to the second microelectronic device structure, at least a portion of the base structure of the first microelectronic device structure may be removed, such as by grinding, etching, or both. The base structure may include a base material comprising, for example, a semiconductive material (e.g., silicon), a ceramic material, or a glass material. The base structure may be formed to facilitate removal of the base structure from the first microelectronic device structure after attachment of the first microelectronic device structure to the second microelectronic device structure. In some embodiments, the base structure is formed by forming an etch stop material over a surface of the base material, followed by formation of a source material around a periphery of the base structure. The source material may be removed from lateral sides of the base structure to leave at least a portion of the source material over the etch stop material. Lateral sides of the source material over the etch stop material may be removed and replaced with a protective material (e.g., an oxide material). In other embodiments, the base structure comprises a doped material over a surface of the base material and an insulative material over the doped material. 
Openings may be formed through the insulative material to expose the doped material, and an epitaxial material may be grown from the doped material to fill the openings and overlie the insulative material. In yet other embodiments, the base structure comprises a base material comprising a glass material surrounded by a source material. The source material may be removed from lateral sides of the base structure to leave at least a portion of the source material over the base material. After forming the base structure, additional components (e.g., a memory array, interconnects) may be formed over a surface of the base structure to form the first microelectronic device structure. After formation of the first microelectronic device structure, the first microelectronic device structure may be coupled to a second microelectronic device structure to form a microelectronic device structure assembly. The second microelectronic device structure may include a device structure including one or more control logic structures for controlling one or more functions (e.g., operations) of the first microelectronic device structure. After attaching the first microelectronic device structure to the second microelectronic device structure, at least a portion of the base structure (e.g., the base material) may be removed. After removal of the at least a portion of the base structure, a source structure may be formed on the first microelectronic device structure (e.g., from the source material already present on the first microelectronic device structure, or from a source material formed on the microelectronic device structure assembly).
Forming the first microelectronic device structure with the base structure may facilitate improved fabrication of the microelectronic device structure assembly including the first microelectronic device structure and the second microelectronic device structure, since the base structure may be fabricated to protect other components of the first microelectronic device structure during back side processing of the first microelectronic device structure, such as during removal of the base material. For example, the base structure may be formed to include one or more etch stop materials that may facilitate protection of the other components of the first microelectronic device structure during removal of the base material. In addition, the methods described herein may facilitate formation and/or patterning of the source structure of the first microelectronic device structure after attachment of the first microelectronic device structure to the second microelectronic device structure and removal of the base material. FIG. 1A through FIG. 1D are simplified partial cross-sectional views illustrating embodiments of a method of forming a base structure prior to further processing to form a first microelectronic device structure (e.g., a memory device, such as a 3D NAND Flash memory device). FIG. 1A through FIG. 1D illustrate a method of forming a base structure 100 of a first microelectronic device structure prior to fabrication of, for example, a memory region on the base structure 100 and prior to bonding of the first microelectronic device structure to a second microelectronic device structure, such as a CMOS substrate. With the description provided below, it will be readily apparent to one of ordinary skill in the art that the methods and structures described herein with reference to FIG. 1A through FIG. 1D may be used in various devices and electronic systems. Referring to FIG. 1A, a base structure 100 (e.g., a first die) may comprise a base material 102.
The base material 102 (e.g., a semiconductive wafer) comprises a material or construction upon which additional materials and structures of the base structure 100 are formed. The base material 102 may comprise one or more of a semiconductive material (e.g., one or more of a silicon material, such as monocrystalline silicon or polycrystalline silicon (also referred to herein as “polysilicon”); silicon-germanium; germanium; gallium arsenide; a gallium nitride; gallium phosphide; indium phosphide; indium gallium nitride; and aluminum gallium nitride), a base semiconductive material on a supporting structure, a glass material (e.g., one or more of borosilicate glass (BSG), phosphosilicate glass (PSG), fluorosilicate glass (FSG), borophosphosilicate glass (BPSG), aluminosilicate glass, an alkaline earth boro-aluminosilicate glass, quartz, titania silicate glass, and soda-lime glass), and a ceramic material (e.g., one or more of poly-aluminum nitride (p-AlN), silicon on poly aluminum nitride (SOPAN), aluminum nitride (AlN), aluminum oxide (e.g., sapphire; α-Al2O3), and silicon carbide). In some embodiments, the base material 102 comprises a conventional silicon substrate (e.g., a conventional silicon wafer), or another bulk substrate comprising a semiconductive material. As used herein, the term “bulk substrate” means and includes not only silicon substrates, but also silicon-on-insulator (SOI) substrates, such as silicon-on-sapphire (SOS) substrates and silicon-on-glass (SOG) substrates, epitaxial layers of silicon on a base semiconductive foundation, and other substrates formed of and including one or more semiconductive materials. In other embodiments, the base material 102 comprises a glass wafer. In further embodiments, the base material 102 comprises a ceramic wafer, such as a SOPAN wafer.
In some such embodiments, the base material 102 may comprise a wafer including silicon and a ceramic material. A thickness (e.g., in the Z-direction) of the base material 102 may be greater than about 500 micrometers (μm), greater than about 750 μm, or even greater than about 1,000 μm. With reference to FIG. 1B, an etch stop material 104 may be formed over (e.g., on, directly on) a surface of the base material 102. The etch stop material 104 may include an insulative material exhibiting an etch selectivity with respect to the base material 102. For example, the etch stop material 104 may be formed of and include one or more of silicon dioxide, aluminum oxide, hafnium oxide, niobium oxide, titanium oxide, zirconium oxide, tantalum oxide, magnesium oxide, silicon nitride, an oxynitride material (e.g., silicon oxynitride (SiOxNy)), and a silicon carboxynitride (SiOxCzNy). In some embodiments, the etch stop material 104 comprises an oxide material, such as silicon dioxide. In some embodiments, the etch stop material 104 comprises an oxide of the base material 102. For example, in some embodiments, the base material 102 comprises silicon and the etch stop material 104 comprises silicon dioxide. The etch stop material 104 may be formed over the surface of the base material 102 by one or more of ALD, CVD, PVD, PECVD, LPCVD, or another deposition method. In other embodiments, the etch stop material 104 is formed (e.g., grown) in situ. By way of non-limiting example, the etch stop material 104 may be formed by thermal oxidation, such as by exposing a surface of the base material 102 to oxygen (e.g., O2, H2O) at a temperature within a range from about 800°C to about 1,200°C. Referring to FIG. 1C, after forming the etch stop material 104, a source material 106 may be formed around substantially an entire periphery of the base structure 100. In other words, the source material 106 may overlie and substantially surround the base structure 100.
Accordingly, the source material 106 may overlie a major surface of the base material 102, a major surface of the etch stop material 104, and sidewalls of the base material 102 and the etch stop material 104. The source material 106 may be formed of and include polysilicon. In some embodiments, the source material 106 is doped with one or more dopants, such as, for example, one or more p-type dopants (e.g., boron, aluminum, gallium, indium), one or more n-type dopants (e.g., phosphorus, arsenic, antimony, bismuth, lithium), and/or one or more other dopants (e.g., germanium, silicon, nitrogen). A thickness (e.g., in the Z-direction) of the source material 106 may be within a range from about 50 nanometers (nm) to about 500 nm, such as from about 50 nm to about 75 nm, from about 75 nm to about 100 nm, from about 100 nm to about 200 nm, from about 200 nm to about 400 nm, or from about 400 nm to about 500 nm. With reference now to FIG. 1D, after forming the source material 106 around a periphery of the base structure 100, the source material 106 may be removed from lateral sides (e.g., in the X-direction and another direction perpendicular to the X-direction) of the base structure 100. Removal of the source material 106 from lateral sides of the base structure 100 may expose sidewalls of the base material 102 and the etch stop material 104. The source material 106 may be removed from the lateral sides of the base structure 100 by, for example, exposing the base structure 100 to an edge grinding process (also referred to as an edge profiling process or wafer edge grinding), or another edge treatment method. In other embodiments, the source material 106 is removed by exposing the base structure 100 to an edge trimming process. After removing the source material 106 from lateral sides of the base structure 100, a protective material 108 may be formed on sides of the source material 106.
Accordingly, the protective material 108 may surround lateral sides of the source material 106 located over the etch stop material 104. The protective material 108 may be formed of and include, for example, one or more of the materials described above with reference to the etch stop material 104. In some embodiments, the protective material 108 comprises an oxide material. The protective material 108 may have the same material composition as the etch stop material 104. In other embodiments, the protective material 108 has a different material composition than the etch stop material 104. In some embodiments, the protective material 108 comprises silicon dioxide. The base structure 100 may be used to facilitate formation of a first microelectronic device structure of a microelectronic device (e.g., a semiconductor device; a memory device, such as a 3D NAND Flash memory device). After forming the first microelectronic device structure from the base structure 100, the first microelectronic device structure may be coupled to one or more other microelectronic device structures, such as a chiplet including one or more control logic regions, as will be described herein. FIG. 2A through FIG. 2C are simplified partial cross-sectional views illustrating embodiments of a method of forming a base structure 200 prior to further processing to form a first microelectronic device structure of a microelectronic device (e.g., a semiconductor device; a memory device, such as a 3D NAND Flash memory device), in accordance with additional embodiments of the disclosure. FIG. 2A through FIG. 2C illustrate a method of forming a base structure 200 prior to fabrication of, for example, a memory array on the base structure 200 to form the first microelectronic device structure and prior to bonding the first microelectronic device structure to a second microelectronic device structure. With reference to FIG. 2A, the base structure 200 may include a base material 202.
The base material 202 comprises a material or construction upon which additional materials and structures of the base structure 200 are formed. The base material 202 may include one or more of the materials described above with reference to the base material 102 (FIG. 1A). For example, the base material 202 may comprise one or more of a semiconductive material (e.g., one or more of a silicon material, such as monocrystalline silicon or polycrystalline silicon; silicon-germanium; germanium; gallium arsenide; a gallium nitride; gallium phosphide; indium phosphide; indium gallium nitride; and aluminum gallium nitride), a base semiconductive material on a supporting structure, a glass material (e.g., one or more of BSG, PSG, FSG, BPSG, aluminosilicate glass, an alkaline earth boro-aluminosilicate glass, quartz, titania silicate glass, and soda-lime glass), and a ceramic material (e.g., one or more of p-AlN, SOPAN, AlN, aluminum oxide (e.g., sapphire; α-Al2O3), and silicon carbide). In some embodiments, the base material 202 comprises a conventional silicon substrate (e.g., a conventional silicon wafer), or another bulk substrate comprising a semiconductive material. In some embodiments, the base material 202 comprises a material that may be doped with one or more dopants. A thickness (e.g., in the Z-direction) of the base material 202 may be the same as that described above with reference to the base material 102. With continued reference to FIG. 2A, a doped material 204 may overlie the base material 202. The doped material 204 may include one or more dopants, such as one or more p-type dopants (e.g., boron, aluminum, gallium, indium), one or more n-type dopants (e.g., phosphorus, arsenic, antimony, bismuth, lithium), and/or one or more other dopants (e.g., germanium, silicon, nitrogen).
In some embodiments, the doped material 204 comprises the same material composition as the base material 202, except that the doped material 204 is doped with the one or more dopants. The dopants may be present in the doped material 204 at a concentration within a range from about 1 x 10^19 atoms/cm^3 (or more simply, 1 x 10^19/cm^3) to about 4.0 x 10^20/cm^3, such as from about 1 x 10^19/cm^3 to about 5 x 10^19/cm^3, from about 5 x 10^19/cm^3 to about 1 x 10^20/cm^3, from about 1 x 10^20/cm^3 to about 2.0 x 10^20/cm^3, or from about 2.0 x 10^20/cm^3 to about 4.0 x 10^20/cm^3. In some embodiments, the doped material 204 comprises one or more of boron, germanium, and phosphorus. By way of non-limiting example, the doped material 204 may include silicon doped with boron; silicon doped with boron and germanium; silicon doped with phosphorus; or silicon doped with gallium. In some embodiments, the doped material 204 comprises so-called heavily boron-doped silicon. As will be described herein, the doped material 204 may facilitate selective etching of the base material 202 relative to the doped material 204 during further processing of the base structure 200. Accordingly, the doped material 204 may function as an etch stop material during removal of the base material 202. An insulative material 206 may overlie the doped material 204. The doped material 204 may be located between the insulative material 206 and the base material 202. The insulative material 206 may comprise one or more of the materials described above with reference to the etch stop material 104 (FIG. 1B). For example, the insulative material 206 may be formed of and include one or more of silicon dioxide, aluminum oxide, hafnium oxide, niobium oxide, titanium oxide, zirconium oxide, tantalum oxide, magnesium oxide, silicon nitride, an oxynitride material (e.g., silicon oxynitride (SiOxNy)), and a silicon carboxynitride (SiOxCzNy). In some embodiments, the insulative material 206 comprises silicon dioxide. Referring to FIG.
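The dopant-concentration ranges recited above all nest inside one overall band. As a hypothetical sketch (the constant and function names are illustrative only, not part of the disclosure), that overall band can be expressed as a simple range check on a concentration given in atoms per cubic centimeter; the sketch applies the bounds exactly, whereas the "about" qualifier defined earlier would widen them by acceptable tolerances:

```python
# Overall disclosed band for the dopant concentration of the doped
# material, in atoms per cubic centimeter (names are illustrative).
MIN_CONCENTRATION = 1.0e19
MAX_CONCENTRATION = 4.0e20

def in_disclosed_band(concentration):
    """Return True if `concentration` (atoms/cm^3) falls within the
    overall range of 1 x 10^19/cm^3 to 4.0 x 10^20/cm^3."""
    return MIN_CONCENTRATION <= concentration <= MAX_CONCENTRATION
```

For example, a concentration of 2.0 x 10^19/cm^3 falls within the band, while 5.0 x 10^20/cm^3 does not.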
2B, the insulative material 206 may be patterned to form openings 208 therein and to expose a portion of the doped material 204 through the openings 208. The openings 208 through the insulative material 206 may be formed by, for example, exposing the insulative material 206 to an etchant through a mask. The insulative material 206 may be exposed to a dry etchant comprising one or more of a fluorocarbon (e.g., CH2F2, CHF3, CF4, C4F8, C4F6, CF2), SF6, NF3, and oxygen. However, the disclosure is not so limited and the openings 208 through the insulative material 206 may be formed by methods other than those described. With reference to FIG. 2C, after forming the openings 208 (FIG. 2B) and exposing the doped material 204 through the openings 208, a semiconductive material 210 may be formed over the exposed portions of the doped material 204, within the openings 208, and over the remaining portions of the insulative material 206. In some embodiments, the semiconductive material 210 is formed by epitaxial growth. By way of non-limiting example, the semiconductive material 210 may be grown from the exposed portions of the doped material 204. The semiconductive material 210 may comprise a monocrystalline material and may include a monocrystalline surface 212. In some embodiments, the semiconductive material 210 exhibits the same crystalline orientation as the doped material 204. The monocrystalline surface 212 may facilitate formation of one or more device structures on the monocrystalline surface 212, as will be described herein. The semiconductive material 210 may be formed of and include one or more of the materials described above with reference to the doped material 204. In some embodiments, the semiconductive material 210 comprises the same material composition as the doped material 204, except that a concentration of the dopants in the semiconductive material 210 is less than the concentration of the dopants in the doped material 204.
In some embodiments, the semiconductive material 210 comprises doped epitaxial silicon (e.g., epitaxial silicon doped with one or more of at least one n-type dopant, at least one p-type dopant, or at least another dopant). The base structure 200 may be used to facilitate formation of a first microelectronic device structure (e.g., of a semiconductor device; a memory device, such as a NAND Flash memory device). After forming the first microelectronic device structure from the base structure 200, the first microelectronic device structure may be coupled to one or more other microelectronic device structures, such as a chiplet including one or more control logic regions, as will be described herein. FIG. 3A through FIG. 3C are simplified partial cross-sectional views illustrating embodiments of another method of forming a base structure 300, in accordance with embodiments of the disclosure. FIG. 3A through FIG. 3C illustrate a method of forming a base structure 300 prior to fabrication of, for example, a memory region on the base structure 300 to form the first microelectronic device structure and prior to bonding the first microelectronic device structure to a second microelectronic device structure. With reference to FIG. 3A, the base structure 300 may include a base material 302. The base material 302 may include one or more of the materials described above with reference to the base material 102 (FIG. 1A). In some embodiments, the base material 302 comprises a glass material, such as one or more of borosilicate glass (BSG), aluminosilicate glass, an alkaline earth boro-aluminosilicate glass, phosphosilicate glass (PSG), borophosphosilicate glass (BPSG), quartz, titania silicate glass, and soda-lime glass. Referring to FIG. 3B, a source material 304 may be formed around substantially an entire periphery of the base material 302. The source material 304 may be formed of and include one or more of the materials described above with reference to the source material 106 (FIG. 1C).
In some embodiments, the source material 304 comprises polysilicon. In some such embodiments, the source material 304 may comprise doped polysilicon. The source material 304 may have the same thickness as the source material 106 described above. Referring to FIG. 3C, after forming the source material 304, portions of the source material 304 on lateral sides of the base material 302 may be removed, such as by exposing the base structure 300 to an edge grinding process, an edge trimming process, or another edge treatment process, as described above with reference to FIG. 1D. After forming the source material 304, the base structure 300 may be used to facilitate formation of a microelectronic device (e.g., a semiconductor device; a memory device, such as a 3D NAND Flash memory device). After forming the first microelectronic device structure from the base structure 300, the first microelectronic device structure may be coupled to one or more other microelectronic device structures, such as a chiplet including one or more control logic regions, as will be described herein. As described above, after forming the base structures 100, 200, 300, first microelectronic device structures may be formed on, over, or within the base structures 100, 200, 300. The first microelectronic device structures may comprise, for example, features (e.g., structures, materials, devices) of a semiconductor device, a memory device (e.g., a 3D NAND Flash memory device), or another device. FIG. 4A is a simplified cross-sectional view of a first microelectronic device structure 400, in accordance with embodiments of the disclosure. The first microelectronic device structure 400 may also be referred to as an array wafer. The first microelectronic device structure 400 may include an array wafer substrate 402, which may be substantially similar to the base structure 100 described above with reference to FIG. 1D.
In other words, the array wafer substrate 402 may include the base material 102, the etch stop material 104, the source material 106, and the protective material 108. Although the array wafer substrate 402 is illustrated as comprising the base structure 100, it will be understood that the array wafer substrate 402 may correspond to any of the base structures 100, 200, 300 described above with reference to FIG. 1A through FIG. 3C. The first microelectronic device structure 400 may be formed to include a memory array region 404 vertically over (e.g., in the Z-direction) the array wafer substrate 402 and an interconnect region 406 vertically over the memory array region 404. The memory array region 404 may be vertically interposed between the interconnect region 406 and the array wafer substrate 402. The memory array region 404 may include a stack structure 408, line structures 410 (e.g., digit line structures, bit line structures), and line contact structures 412. The line contact structures 412 may vertically overlie (e.g., in the Z-direction) the stack structure 408, and may be electrically connected to structures, such as cell pillar structures 414 and deep contact structures 416, extending through the stack structure 408. The stack structure 408 may include a vertically alternating (e.g., in the Z-direction) sequence of conductive structures 418 and insulative structures 420 arranged in tiers 422. Each of the tiers 422 of the stack structure 408 may include at least one of the conductive structures 418 vertically neighboring at least one of the insulative structures 420. In some embodiments, the conductive structures 418 are formed of and include tungsten and the insulative structures 420 are formed of and include silicon dioxide.
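The vertically alternating arrangement of the stack structure described above can be modeled with a simple data structure. The sketch below is a hypothetical illustration only (the `Tier` class, `build_stack` function, and material strings are assumptions for illustration, not part of the disclosure): each tier pairs one conductive structure with one vertically neighboring insulative structure, and the stack is an ordered sequence of such tiers:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    """One tier of the modeled stack: a conductive structure (e.g.,
    tungsten) vertically neighboring an insulative structure (e.g.,
    silicon dioxide)."""
    conductive: str
    insulative: str

def build_stack(num_tiers):
    """Model the vertically alternating sequence of conductive and
    insulative structures as an ordered list of tiers (bottom to top)."""
    return [Tier(conductive="tungsten", insulative="silicon dioxide")
            for _ in range(num_tiers)]
```

In this idealized model, a cell pillar extending through such a stack would intersect one conductive structure per tier, giving one memory cell per tier along a vertical string.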
The conductive structures 418 and insulative structures 420 of the tiers 422 of the stack structure 408 may each individually be substantially planar, and may each individually exhibit a desired thickness.
The cell pillar structures 414 may each individually include a semiconductive pillar (e.g., a polysilicon pillar, a silicon-germanium pillar) at least partially surrounded by one or more charge storage structures (e.g., a charge trapping structure, such as a charge trapping structure comprising an oxide-nitride-oxide (“ONO”) material; floating gate structures). Intersections of the cell pillar structures 414 and the conductive structures 418 of the tiers 422 of the stack structure 408 may define vertically extending strings of memory cells 424 coupled in series with one another within the memory array region 404 of the first microelectronic device structure 400. In some embodiments, the memory cells 424 formed at the intersections of the conductive structures 418 and the cell pillar structures 414 within each of the tiers 422 of the stack structure 408 comprise so-called “MONOS” (metal-oxide-nitride-oxide-semiconductor) memory cells. In additional embodiments, the memory cells 424 comprise so-called “TANOS” (tantalum nitride-aluminum oxide-nitride-oxide-semiconductor) memory cells, or so-called “BETANOS” (band/barrier engineered TANOS) memory cells, each of which is a subset of MONOS memory cells. In further embodiments, the memory cells 424 comprise so-called “floating gate” memory cells including floating gates (e.g., metallic floating gates) as charge storage structures.
The floating gates may horizontally intervene between central structures of the cell pillar structures 414 and the conductive structures 418 of the different tiers 422 of the stack structure 408.
The cell pillar structures 414 may vertically extend from an upper vertical boundary of the stack structure 408, through the stack structure 408, and to a location at or proximate an upper vertical boundary of the base structure 100 (e.g., within a dielectric material on the base structure 100).
The deep contact structure(s) 416 may be configured and positioned to electrically connect one or more components of the first microelectronic device structure 400 vertically overlying the stack structure 408 with one or more components of the first microelectronic device structure 400 vertically underlying the stack structure 408. The deep contact structure(s) 416 may be formed of and include conductive material.
With continued reference to FIG. 4A, the interconnect region 406 comprises first bond pad structures 426 electrically coupled to the line structures 410 by first interconnect structures 428. The first interconnect structures 428 may vertically overlie (e.g., in the Z-direction) and be electrically connected to the line structures 410, and the first bond pad structures 426 may vertically overlie (e.g., in the Z-direction) and be electrically connected to the first interconnect structures 428. The first bond pad structures 426 and the first interconnect structures 428 may individually be formed of and include conductive material. In some embodiments, the first bond pad structures 426 are formed of and include copper and the first interconnect structures 428 are formed of and include tungsten.
Referring to FIG.
4B, after forming the memory array region 404 and the interconnect region 406, the first microelectronic device structure 400 may be flipped upside down (e.g., in the Z-direction) and attached (e.g., bonded) to a second microelectronic device structure 460 to form a microelectronic device structure assembly 450 comprising the first microelectronic device structure 400 and the second microelectronic device structure 460. The first bond pad structures 426 of the interconnect region 406 of the first microelectronic device structure 400 may be coupled to second bond pad structures 470 of the second microelectronic device structure 460. For example, after flipping the first microelectronic device structure 400, the first bond pad structures 426 may be horizontally aligned and brought into physical contact with the second bond pad structures 470 of the second microelectronic device structure 460. At least one thermocompression process may be employed to migrate (e.g., diffuse) and interact material(s) (e.g., copper) of the first bond pad structures 426 and the second bond pad structures 470 with one another to bond the first microelectronic device structure 400 to the second microelectronic device structure 460 to form the microelectronic device structure assembly 450.
The second microelectronic device structure 460 may include a control logic region 462. The control logic region 462 may include a semiconductive base structure 464, gate structures 466, and routing structures 468. Portions of the semiconductive base structure 464, the gate structures 466, and the routing structures 468 form various control logic devices of the control logic region 462. The control logic devices may be configured to control various operations of components of the first microelectronic device structure 400 (e.g., the memory cells 424 of the cell pillar structures 414).
The control logic devices may include devices configured to control read, write, and/or erase operations of the memory cells 424 of the cell pillar structures 414 of the memory array region 404. As a non-limiting example, the control logic devices may include one or more (e.g., each) of charge pumps (e.g., VCCP charge pumps, VNEGWL charge pumps, DVC2 charge pumps), DLL circuitry (e.g., ring oscillators), Vdd regulators, string drivers, page buffers, and various chip/deck control circuitry. As another non-limiting example, the control logic devices may include devices configured to control column operations of arrays (e.g., memory element array(s), access device array(s)) within the memory array region 404, such as one or more (e.g., each) of decoders (e.g., local deck decoders, column decoders), sense amplifiers (e.g., EQ amplifiers, ISO amplifiers, NSAs, PSAs), repair circuitry (e.g., column repair circuitry), I/O devices (e.g., local I/O devices), memory test devices, MUX, and ECC devices. As a further non-limiting example, the control logic devices of the control logic region 462 may include devices configured to control row operations for arrays (e.g., memory element array(s), access device array(s)) within the memory array region 404, such as one or more (e.g., each) of decoders (e.g., local deck decoders, row decoders), drivers (e.g., WL drivers), repair circuitry (e.g., row repair circuitry), memory test devices, MUX, ECC devices, and self-refresh/wear leveling devices. However, the disclosure is not so limited, and the control logic devices of the control logic region 462 may include other and/or additional components.
The semiconductive base structure 464 may comprise a base material or construction upon which additional materials are formed. The semiconductive base structure 464 may comprise a semiconductive structure (e.g., a semiconductive wafer), or a base semiconductive material on a supporting structure.
For example, the semiconductive base structure 464 may comprise a conventional silicon substrate (e.g., a conventional silicon wafer), or another bulk substrate comprising a semiconductive material. In addition, the semiconductive base structure 464 may include one or more layers, structures, and/or regions formed therein and/or thereon. For example, the semiconductive base structure 464 may include conductively doped regions and undoped regions. The conductively doped regions may, for example, be employed as source regions and drain regions for transistors of the control logic devices of the control logic region 462; and the undoped regions may, for example, be employed as channel regions for the transistors of the control logic devices.
The gate structures 466 of the control logic region 462 may vertically overlie (e.g., in the Z-direction) portions of the semiconductive base structure 464. The gate structures 466 may individually horizontally extend between and be employed by transistors of the control logic devices within the control logic region 462 of the second microelectronic device structure 460. The gate structures 466 may be formed of and include a conductive material. A gate dielectric material (e.g., a dielectric oxide) may vertically intervene (e.g., in the Z-direction) between the gate structures 466 and channel regions (e.g., within the semiconductive base structure 464) of the transistors. For clarity and ease of understanding of the description, the gate dielectric material is not illustrated in FIG. 4B.
The routing structures 468 may vertically overlie (e.g., in the Z-direction) the semiconductive base structure 464 and may be electrically connected to the semiconductive base structure 464 by way of interconnect structures 467.
Some of the interconnect structures 467 may vertically extend between and electrically couple some of the routing structures 468, and others of the interconnect structures 467 may vertically extend between and electrically couple regions (e.g., conductively doped regions, such as source regions and drain regions) of the semiconductive base structure 464 to one or more of the routing structures 468. The routing structures 468 and the interconnect structures 467 may each individually be formed of and include conductive material.
The second bond pad structures 470 vertically overlie (e.g., in the Z-direction) and electrically connect with the routing structures 468 by one or more interconnect structures. The second bond pad structures 470 may be formed of and include conductive material. As described above, the second bond pad structures 470 may be coupled to the first bond pad structures 426 of the first microelectronic device structure 400 to form the microelectronic device structure assembly 450.
Referring to FIG. 4C, after forming the microelectronic device structure assembly 450, portions of the source material 106 on the back side of the base material 102, and the base material 102, may be removed (e.g., detached) from the first microelectronic device structure 400. The source material 106 and the base material 102 may be removed by one or more material removal processes, such as one or both of grinding and etching. For example, the base material 102 may be removed by grinding the source material 106 and the base material 102. The base material 102 may be ground until a thickness of the base material 102 is less than about 100 μm, such as less than about 75 μm, less than about 50 μm, or less than about 40 μm.
After grinding the base material 102, remaining portions of the base material 102 may be removed by etching (e.g., wet etching, dry etching) with an etching process selective to the etch stop material 104.
As one example, the base material 102 may be exposed to a wet etchant comprising one or both of potassium hydroxide (KOH) and tetramethylammonium hydroxide (TMAH) to selectively remove the base material 102 without substantially removing the etch stop material 104. In other embodiments, the base material 102 is exposed to a dry etching process (e.g., reactive ion etching (RIE), inductively coupled plasma (ICP) etching) to selectively remove the base material 102 without substantially removing the etch stop material 104. In some embodiments, the dry etchant includes one or more of sulfur hexafluoride (SF6), oxygen (O2), C4F8, CF4, C3F6, xenon difluoride (XeF2), or another material.
In some embodiments, the presence of the protective material 108 around lateral sides of the source material 106 adjacent to the memory array region 404 may protect the source material 106 during removal of the base material 102. In some embodiments, the protective material 108 may reduce or substantially prevent contamination of the source material 106 with contaminants and particulates generated during grinding of the source material 106, and may also prevent undesired exposure of the source material 106 to one or more etchants.
With reference to FIG. 4D, after removing the base material 102 (FIG. 4B), a back end of the line (BEOL) structure 495 may be formed over the etch stop material 104 and in electrical communication with source structures 480 formed from the source material 106 (FIG. 4C). For example, openings may be formed through the etch stop material 104 and the source material 106 to separate portions of the source material 106 from each other and form the source structures 480. The openings may be filled with an insulative material to isolate different portions of the source structures 480 from each other. The insulative material may include the same material composition as the etch stop material 104.
In some embodiments, a conductive material, such as tungsten silicide (WSix), tungsten nitride, or tungsten silicon nitride (WSixNy), may be formed over the source material 106 prior to patterning the source material 106 to form the source structures 480. In some embodiments, the etch stop material 104 may be removed from surfaces of the source material 106 prior to forming the conductive material over the source material 106. In some embodiments, the source structures 480 comprise one or more of doped silicon (e.g., doped polysilicon), tungsten silicide, tungsten nitride, and tungsten silicon nitride.
The insulative material may isolate portions of the source structures 480 in electrical communication with the cell pillar structures 414 from other portions of the source structures 480 in electrical communication with other portions of the memory array region 404 (e.g., the deep contact structures 416). Since the first microelectronic device structure 400 is formed to include the source material 106, the source structures 480 may be formed without deposition of a source material after attachment of the first microelectronic device structure 400 to the second microelectronic device structure 460. In addition, the source structures 480 may be formed without deposition of a source material after removal of the base material 102 (FIG. 4B).
The BEOL structure 495 may include second interconnect structures 482 in electrical communication with the source structures 480 and electrically coupling the source structures 480 to first metallization structures 484. The second interconnect structures 482 may be formed of and include conductive material, such as tungsten. The first metallization structures 484 may be formed of and include conductive material, such as copper.
Third interconnect structures 486 may electrically couple the first metallization structures 484 to second metallization structures 488.
A passivation material 490 may be formed over the microelectronic device structure assembly 450 to electrically isolate the second metallization structures 488. The third interconnect structures 486 and the second metallization structures 488 may be formed of and include conductive material. For example, the third interconnect structures 486 may be formed of and include tungsten. The second metallization structures 488 may be formed of and include aluminum.
Although FIG. 4A through FIG. 4D have been described and illustrated as including the first microelectronic device structure 400 comprising the array wafer substrate 402 comprising the base structure 100, the disclosure is not so limited. In other embodiments, the array wafer substrate 402 comprises the base structure 300 (FIG. 3C). In some embodiments, during removal of the base material 302 (FIG. 3C) from the microelectronic device structure assembly 450, the base material 302 may be removed by, for example, exposing the base material 302 to hydrofluoric acid or a grinding process.
In other embodiments, the first microelectronic device structure may be formed with the base structure 200 described above with reference to FIG. 2C. FIG. 5A is a simplified cross-sectional view illustrating a microelectronic device structure assembly 550 including a first microelectronic device structure 500, in accordance with embodiments of the disclosure. The microelectronic device structure assembly 550 is substantially similar to the microelectronic device structure assembly 450 described above with reference to FIG. 4B, except that the first microelectronic device structure 500 includes the base structure 200 described above with reference to FIG. 2C. Accordingly, the first microelectronic device structure 500 may include an array wafer substrate comprising the base structure 200. As described above with reference to FIG.
4A, the memory array region 404 and the interconnect region 406 may be formed above the array wafer substrate. Thereafter, the first bond pad structures 426 may be bonded to the second bond pad structures 470 of the second microelectronic device structure 460 to form the microelectronic device structure assembly 550.
In some embodiments, the memory array region 404 may be formed over the monocrystalline surface 212 (FIG. 2C), which may facilitate improved fabrication of the memory array region 404. For example, etching and patterning of the cell pillar structures 414 may be improved by forming the cell pillar structures 414 over the monocrystalline surface 212 compared to conventional microelectronic device structures wherein memory cell pillars are formed over polysilicon. In addition, use of the base structure 200 may facilitate transfer and attachment of the first microelectronic device structure 500 to the second microelectronic device structure 460 since the base material 202 may exhibit a greater stiffness than conventional base materials.
With reference to FIG. 5B, after forming the microelectronic device structure assembly 550, at least a portion of the base material 202 of the first microelectronic device structure 500 may be removed from the microelectronic device structure assembly 550. For example, the base material 202 may be removed by grinding the base material 202 to a thickness less than about 100 μm, such as less than about 75 μm, less than about 50 μm, or less than about 40 μm.
After grinding the base material 202, the remaining portions of the base material 202 may be removed by one or more material removal processes that selectively remove the base material 202 relative to the doped material 204. In other words, the doped material 204 may be used as an etch stop material during removal of the base material 202.
By way of non-limiting example, the base material 202 may be exposed to a wet etchant including one or both of KOH and TMAH to remove the remaining portions of the base material 202 without substantially removing the doped material 204. In some embodiments, the wet etchant comprises TMAH. Removal of the remaining portions of the base material 202 may expose surfaces of the doped material 204.
With reference to FIG. 5C, the doped material 204 may be selectively removed relative to the insulative material 206. By way of non-limiting example, the doped material 204 may be removed with an etchant comprising nitric acid and hydrofluoric acid. The ratio of the hydrofluoric acid to the nitric acid, the concentration of the hydrofluoric acid and the nitric acid, and the temperature of the etch solution may be controlled to facilitate a desired rate of removal of the doped material 204. However, the disclosure is not so limited, and the doped material 204 may be selectively removed relative to the insulative material 206 by other methods.
After removing the doped material 204, a source structure 580 may be formed over the insulative material 206 and in electrical communication with the semiconductive material 210 formed through the insulative material 206. The source structure 580 may be substantially similar to the source structure 480 described above. The source structure 580 may be formed of and include one or more of doped silicon (e.g., doped polysilicon), tungsten silicide, tungsten nitride, and tungsten silicon nitride.
In additional embodiments, the source structure 580 is formed from the doped material 204. As a non-limiting example, at least partially depending on the material composition of the doped material 204, portions of the doped material 204 may be removed (e.g., etched) relative to other portions of the doped material 204 to form the source structure 580 therefrom.
The source structure 580 may correspond to remaining (e.g., unremoved) portions of the doped material 204. As another non-limiting example, at least partially depending on the material composition and thickness of the doped material 204, the doped material 204 may be converted into another conductive material and patterned (e.g., etched) to form the source structure 580.
A thickness (e.g., in the Z-direction) of the source structure 580 may be within a range from about 50 nm to about 75 nm, from about 75 nm to about 100 nm, from about 100 nm to about 200 nm, from about 200 nm to about 400 nm, or from about 400 nm to about 500 nm.
After forming the source structure 580, a back end of the line (BEOL) structure 595 may be formed over the source structure 580. For example, second interconnect structures 582 may be formed over and in electrical communication with the source structure 580. The second interconnect structures 582 may be formed of and include conductive material, such as tungsten. An opening may be formed through the insulative material 206 and the source structure 580 to separate portions of the source structure 580 (e.g., portions in electrical communication with the memory cells 424 from portions in electrical communication with the deep contact structures 416). An insulative material 584 may be formed over the second interconnect structures 582 and the source structure 580. First metallization structures 586 may be formed vertically over (e.g., in the Z-direction) and in electrical communication with the second interconnect structures 582. The first metallization structures 586 may be formed of and include a conductive material, such as copper. Third interconnect structures 588 may be formed over and in electrical communication with the first metallization structures 586 and may electrically couple the first metallization structures 586 to second metallization structures 592.
The third interconnect structures 588 may be formed of and include a conductive material, such as copper. The second metallization structures 592 may be formed of and include conductive material, such as aluminum. A passivation material may be formed over the microelectronic device structure assembly 550 to electrically isolate the second metallization structures 592.
Although FIG. 4C and FIG. 5B have been described and illustrated as removing the base materials 102, 202, 302 by grinding and subsequent etching, the disclosure is not so limited. In other embodiments, the base materials 102, 202, 302 may be removed based on the crystalline orientation of the base materials 102, 202, 302 and orientation selective etching of the base materials 102, 202, 302 using one or more wet etchants, such as KOH, NaOH, and TMAH.
FIG. 6A is a simplified cross-sectional view illustrating a microelectronic device structure assembly 650 including a first microelectronic device structure 600 attached to a second microelectronic device structure 660. The second microelectronic device structure 660 may be substantially similar to the second microelectronic device structure 460 described above with reference to FIG. 4B. The first microelectronic device structure 600 may be substantially similar to any of the first microelectronic device structures 400, 500 described above with reference to FIG. 4A through FIG. 5C. The first microelectronic device structure 600 may be attached to the second microelectronic device structure 660 as described above with reference to attachment of the first microelectronic device structures 400, 500 to the second microelectronic device structure 460.
The first microelectronic device structure 600 may include any of the base structures 100, 200, 300 described above with reference to FIG. 1A through FIG. 3C. With reference to FIG.
6A, the first microelectronic device structure 600 may include a base material 602 comprising one or more of the materials described above with reference to the base material 102 (FIG. 1A). In some embodiments, the base material 602 comprises silicon. In some embodiments, a protective material 604 may be formed around at least a portion of the microelectronic device structure assembly 650. For example, the protective material 604 may be disposed around the second microelectronic device structure 660. In some embodiments, the protective material 604 is disposed between the first microelectronic device structure 600 and the second microelectronic device structure 660, such as in a region 606 between bevels of the first microelectronic device structure 600 and the second microelectronic device structure 660. The protective material 604 may comprise an insulative material. In some embodiments, the protective material 604 is formed of and includes silicon dioxide.
Referring to FIG. 6B, the base material 602 may be removed (e.g., detached) from the first microelectronic device structure 600. In some embodiments, the base material 602 is patterned to expose the {100} plane or the {110} plane of the base material 602. The base material 602 may be exposed to one or more etchants formulated to remove the base material 602 along the {100} plane or the {110} plane, which may form trenches 608 in the base material 602. In some embodiments, the trenches 608 may be patterned with i-line photolithography. In some embodiments, a mask is formed over the base material 602 and slits are formed through the mask to expose portions of the base material 602. The base material 602 is exposed to the one or more etchants through the openings in the mask material. The one or more etchants may include one or both of potassium hydroxide and tetramethylammonium hydroxide.
In some embodiments, the etchant comprises potassium hydroxide.
After forming the trenches 608, the remaining portions of the base material 602 may be removed by exposing the base material 602 to one or more etchants formulated to selectively remove the base material 602 without substantially removing materials underlying the base material 602 (e.g., the etch stop material 104 (FIG. 1C), the doped material 204 (FIG. 2C)). The remaining portions of the base material 602 may be selectively removed with respect to materials underlying the base material 602 as described above.
Removing the base material 602 by forming the trenches 608 may facilitate improved removal of the base material 602 relative to other methods of removal of the base material. Since the trenches 608 are formed by i-line photolithography, the trenches 608 may be formed with relatively low cost methods. In addition, since the base material 602 is removed based on the orientation of the base material 602, the removal thereof may be at a relatively faster rate than other removal processes. Further, since the base material 602 is not removed by grinding, the microelectronic device structure assembly 650 may not be exposed to particles generated from the grinding process.
After removal of the base material 602, a source structure (e.g., the source structure 580) may be formed over the first microelectronic device structure 600, as described above with reference to FIG. 5C.
Although FIG. 4C, FIG. 5B, and FIG. 6B have been described and illustrated as removing the base materials 102, 202, 302, 602 with particular methods, the disclosure is not so limited. In other embodiments, the base materials 102, 202, 302, 602 may be formed to include hydrogen atoms at a desired depth prior to formation of the memory array region 404 and attachment of the first microelectronic device structures 400, 500, 600 to the second microelectronic device structures 460, 660.
After attachment of the first microelectronic device structures 400, 500, 600 to the respective second microelectronic device structures 460, 660, the respective base material 102, 202, 302, 602 may be removed by fracturing the base material 102, 202, 302, 602 at locations corresponding to the implanted hydrogen atoms.
Forming the microelectronic device structure assemblies 450, 550, 650 according to the methods described herein may facilitate improved fabrication of microelectronic devices. For example, forming the first microelectronic device structures 400, 500, 600 to include the base structures 100, 200, 300 prior to attaching the first microelectronic device structures 400, 500, 600 to the second microelectronic device structure 460 may facilitate improved fabrication of the microelectronic device structure assemblies 450, 550, 650. Formation of the base structures 100, 200, 300 prior to attaching the first microelectronic device structures 400, 500, 600 to the second microelectronic device structure 460 facilitates formation of the material of the source structures (e.g., the source structures 480, 580) prior to attaching the first microelectronic device structures 400, 500, 600 to the second microelectronic device structure 460. In addition, the base structures 100, 200, 300 may be fabricated with various materials to facilitate selective removal of the base materials 102, 202, 302 after attaching the first microelectronic device structures 400, 500, 600 to the second microelectronic device structure 460 and without damaging other components or structures of the respective microelectronic device structure assemblies 450, 550, 650.
Further, the methods described above facilitate fabrication of the second microelectronic device structure 460 (e.g., a CMOS wafer including control logic circuitry for one or more components of the first microelectronic device structures 400, 500, 600) separate from the fabrication of the first microelectronic device structures 400, 500, 600 (e.g., prior to attaching the first microelectronic device structures 400, 500, 600 to the second microelectronic device structure 460).
Thus, in accordance with some embodiments of the disclosure, a method of forming a microelectronic device comprises forming a source material around substantially an entire periphery of a base material, and removing the source material from lateral sides of the base material while maintaining the source material over an upper surface and a lower surface of the base material.
Furthermore, in accordance with additional embodiments of the disclosure, a method of forming a microelectronic device comprises forming a doped semiconductive material over a base material, forming an insulative material over the doped semiconductive material, forming openings in the insulative material and exposing the doped semiconductive material through the openings, and epitaxially growing additional semiconductive material from the doped semiconductive material to fill the openings and cover the insulative material.
Moreover, in accordance with further embodiments of the disclosure, a base structure for a microelectronic device comprises a base material comprising one or more of a semiconductive material, a ceramic material, and a glass material, and a doped semiconductive material overlying an upper surface of the base material and underlying a lower surface of the base material, side surfaces of the base material interposed between the upper surface and the lower surface of the base material being substantially free of the doped semiconductive material.
In addition, a base structure for a microelectronic device structure
according to embodiments of the disclosure comprises a base material comprising one or more of semiconductive material, ceramic material, and glass material, a doped semiconductive material on the base material, a dielectric material on the doped semiconductive material, filled openings extending through the dielectric material to the doped semiconductive material, and an epitaxial semiconductive material substantially filling the filled openings and covering surfaces of the dielectric material outside of the filled openings.
In further embodiments, a base structure for a microelectronic device comprises a base material comprising one or more of a semiconductive material, a ceramic material, and a glass material, doped polysilicon on a first side of the base material and on a second, opposite side of the base material, and a dielectric material adjacent side surfaces of the doped polysilicon on one of the first side and the second side of the base material.
Microelectronic devices including microelectronic device structures (e.g., the first microelectronic device structures 400, 500, 600) and microelectronic device structure assemblies (e.g., the microelectronic device structure assemblies 450, 550, 650) including the base structures (e.g., the base structures 100, 200, 300) may be used in embodiments of electronic systems of the disclosure. For example, FIG. 7 is a block diagram of an electronic system 703, in accordance with embodiments of the disclosure. The electronic system 703 may comprise, for example, a computer or computer hardware component, a server or other networking hardware component, a cellular telephone, a digital camera, a personal digital assistant (PDA), a portable media (e.g., music) player, a Wi-Fi or cellular-enabled tablet such as, for example, an iPAD® or SURFACE® tablet, an electronic book, a navigation device, etc. The electronic system 703 includes at least one memory device 705.
The memory device 705 may include, for example, an embodiment of a microelectronic device structure previously described herein (e.g., the first microelectronic device structures 400, 500, 600) or a microelectronic device (e.g., the microelectronic device structure assemblies 450, 550, 650 previously described with reference to FIG. 4A through FIG. 6B) including the base structures 100, 200, 300. The electronic system 703 may further include at least one electronic signal processor device 707 (often referred to as a “microprocessor”). The electronic signal processor device 707 may, optionally, include an embodiment of a microelectronic device or a microelectronic device structure previously described herein (e.g., one or more of the first microelectronic device structures 400, 500, 600 or the microelectronic device structure assemblies 450, 550, 650 previously described with reference to FIG. 4A through FIG. 6B). The electronic system 703 may further include one or more input devices 709 for inputting information into the electronic system 703 by a user, such as, for example, a mouse or other pointing device, a keyboard, a touchpad, a button, or a control panel. The electronic system 703 may further include one or more output devices 711 for outputting information (e.g., visual or audio output) to a user such as, for example, a monitor, a display, a printer, an audio output jack, a speaker, etc. In some embodiments, the input device 709 and the output device 711 may comprise a single touchscreen device that can be used both to input information to the electronic system 703 and to output visual information to a user. The input device 709 and the output device 711 may communicate electrically with one or more of the memory device 705 and the electronic signal processor device 707. With reference to FIG. 8, depicted is a processor-based system 800.
The processor-based system 800 may include microelectronic device structures (e.g., the first microelectronic device structures 400, 500, 600) and microelectronic device structure assemblies (e.g., the microelectronic device structure assemblies 450, 550, 650) manufactured in accordance with embodiments of the present disclosure. The processor-based system 800 may be any of a variety of types such as a computer, pager, cellular phone, personal organizer, control circuit, or other electronic device. The processor-based system 800 may include one or more processors 802, such as a microprocessor, to control the processing of system functions and requests in the processor-based system 800. The processor 802 and other subcomponents of the processor-based system 800 may include microelectronic devices and microelectronic device structures (e.g., microelectronic devices and microelectronic device structures including one or more of the first microelectronic device structures 400, 500, 600 or the microelectronic device structure assemblies 450, 550, 650) manufactured in accordance with embodiments of the present disclosure. The processor-based system 800 may include a power supply 804 in operable communication with the processor 802. For example, if the processor-based system 800 is a portable system, the power supply 804 may include one or more of a fuel cell, a power scavenging device, permanent batteries, replaceable batteries, and rechargeable batteries. The power supply 804 may also include an AC adapter; therefore, the processor-based system 800 may be plugged into a wall outlet, for example. The power supply 804 may also include a DC adapter such that the processor-based system 800 may be plugged into a vehicle cigarette lighter or a vehicle power port, for example. Various other devices may be coupled to the processor 802 depending on the functions that the processor-based system 800 performs. For example, a user interface 806 may be coupled to the processor 802.
The user interface 806 may include input devices such as buttons, switches, a keyboard, a light pen, a mouse, a digitizer and stylus, a touch screen, a voice recognition system, a microphone, or a combination thereof. A display 808 may also be coupled to the processor 802. The display 808 may include an LCD display, an SED display, a CRT display, a DLP display, a plasma display, an OLED display, an LED display, a three-dimensional projection, an audio display, or a combination thereof. Furthermore, an RF sub-system/baseband processor 810 may also be coupled to the processor 802. The RF sub-system/baseband processor 810 may include an antenna that is coupled to an RF receiver and to an RF transmitter (not shown). A communication port 812, or more than one communication port 812, may also be coupled to the processor 802. The communication port 812 may be adapted to be coupled to one or more peripheral devices 814, such as a modem, a printer, a computer, a scanner, or a camera, or to a network, such as a local area network, remote area network, intranet, or the Internet, for example. The processor 802 may control the processor-based system 800 by implementing software programs stored in the memory. The software programs may include an operating system, database software, drafting software, word processing software, media editing software, or media playing software, for example. The memory is operably coupled to the processor 802 to store and facilitate execution of various programs. For example, the processor 802 may be coupled to system memory, which may include one or more of spin torque transfer magnetic random access memory (STT-MRAM), magnetic random access memory (MRAM), dynamic random access memory (DRAM), static random access memory (SRAM), racetrack memory, and other known memory types. The system memory 816 may include volatile memory, non-volatile memory, or a combination thereof.
The system memory 816 is typically large so that it can store dynamically loaded applications and data. In some embodiments, the system memory 816 may include semiconductor devices, such as the microelectronic devices and microelectronic device structures (e.g., the first microelectronic device structures 400, 500, 600 and the microelectronic device structure assemblies 450, 550, 650) described above, or a combination thereof. The processor 802 may also be coupled to non-volatile memory 818, which is not to suggest that system memory 816 is necessarily volatile. The non-volatile memory 818 may include one or more of STT-MRAM, MRAM, read-only memory (ROM) such as an EPROM, resistive read-only memory (RROM), and flash memory to be used in conjunction with the system memory. The size of the non-volatile memory 818 is typically selected to be just large enough to store any necessary operating system, application programs, and fixed data. Additionally, the non-volatile memory 818 may include a high-capacity memory such as disk drive memory, such as a hybrid-drive including resistive memory or other types of non-volatile solid-state memory, for example. The non-volatile memory 818 may include microelectronic devices, such as the microelectronic devices and microelectronic device structures (e.g., the first microelectronic device structures 400, 500, 600 and the microelectronic device structure assemblies 450, 550, 650) described above, or a combination thereof. Accordingly, in at least some embodiments, an electronic system comprises an input device, an output device, a processor device operably coupled to the input device and the output device, and a memory device operably coupled to the processor device and comprising at least one microelectronic device structure assembly.
The at least one microelectronic device structure assembly comprises a first microelectronic device structure comprising a back end of the line structure comprising metallization materials in electrical communication with a source structure, a memory array region comprising strings of memory cells extending through a stack structure comprising alternating levels of insulative structures and conductive structures, and an interconnect region including bond pad structures in electrical communication with the memory array region. The electronic system further comprises a second microelectronic device structure comprising CMOS circuitry in electrical communication with the bond pad structures. Additional non-limiting example embodiments of the disclosure are set forth below. Embodiment 1: A method of forming a microelectronic device, the method comprising: forming a source material around substantially an entire periphery of a base material; and removing the source material from lateral sides of the base material while maintaining the source material over an upper surface and a lower surface of the base material. Embodiment 2: The method of Embodiment 1, further comprising forming an etch stop material over the base material prior to forming the source material around substantially the entire periphery of the base material. Embodiment 3: The method of Embodiment 2, further comprising: selecting the base material to comprise a semiconductive material; and selecting the etch stop material to comprise a dielectric material. Embodiment 4: The method of any one of Embodiments 1 through 3, further comprising: selecting the base material to comprise silicon; and selecting the etch stop material to comprise silicon dioxide.
Embodiment 5: The method of any one of Embodiments 1 through 4, further comprising forming a protective material on lateral sides of remaining portions of the source material after removing the source material from the lateral sides of the base material. Embodiment 6: The method of any one of Embodiments 1 through 5, further comprising selecting the base material to comprise one or more of monocrystalline silicon, polycrystalline silicon, silicon-germanium, germanium, gallium arsenide, gallium nitride, gallium phosphide, indium phosphide, indium gallium nitride, and aluminum gallium nitride. Embodiment 7: The method of any one of Embodiments 1 through 6, further comprising selecting the source material to comprise doped polysilicon. Embodiment 8: The method of any one of Embodiments 1 through 7, further comprising selecting the base material to comprise a ceramic material. Embodiment 9: The method of Embodiment 8, wherein selecting the base material to comprise a ceramic material comprises selecting the base material to comprise silicon on poly-aluminum nitride. Embodiment 10: The method of any one of Embodiments 1 through 7, further comprising selecting the base material to comprise a glass material. Embodiment 11: The method of Embodiment 10, wherein selecting the base material to comprise a glass material comprises selecting the base material to comprise one or more of borosilicate glass, phosphosilicate glass, fluorosilicate glass, borophosphosilicate glass, aluminosilicate glass, an alkaline earth boro-aluminosilicate glass, quartz, titania silicate glass, and soda-lime glass. Embodiment 12: The method of any one of Embodiments 1 through 11, further comprising: forming a stack structure comprising a vertically alternating series of conductive structures and insulative structures over the source material; forming vertically extending strings of memory cells within the stack structure to form a first microelectronic device structure; attaching
the first microelectronic device structure to a second microelectronic device structure comprising control logic circuitry to form a microelectronic device structure assembly; removing the base material after forming the microelectronic device structure assembly; and forming circuitry in electrical communication with the source material after removing the base material. Embodiment 13: The method of Embodiment 12, wherein removing the base material comprises one or more of grinding and wet etching the base material. Embodiment 14: A method of forming a microelectronic device, the method comprising: forming a doped semiconductive material over a base material; forming an insulative material over the doped semiconductive material; forming openings in the insulative material and exposing the doped semiconductive material through the openings; and epitaxially growing additional semiconductive material from the doped semiconductive material to fill the openings and cover the insulative material. Embodiment 15: The method of Embodiment 14, wherein forming a doped semiconductive material over a base material comprises forming the doped semiconductive material to comprise a semiconductive material of the base material and one or more dopants dispersed within the semiconductive material. Embodiment 16: The method of Embodiment 14 or Embodiment 15, further comprising: forming a stack structure comprising a vertically alternating series of conductive structures and insulative structures over the additional semiconductive material; forming vertically extending strings of memory cells within the stack structure to form a first microelectronic device structure; coupling the first microelectronic device structure to a second microelectronic device structure comprising control logic circuitry to form a microelectronic device structure assembly; and removing the base material after forming the microelectronic device structure assembly. Embodiment 17: The method of Embodiment 16,
wherein removing the base material comprises removing the base material without substantially removing the doped semiconductive material. Embodiment 18: The method of Embodiment 16 or Embodiment 17, wherein removing the base material comprises forming trenches in the base material along a {100} plane or a {110} plane of the base material. Embodiment 19: The method of any one of Embodiments 16 through 18, further comprising forming a source structure over the additional semiconductive material after removing the base material. Embodiment 20: A base structure for a microelectronic device, comprising: a base material comprising one or more of a semiconductive material, a ceramic material, and a glass material; and a doped semiconductive material overlying an upper surface of the base material and underlying a lower surface of the base material, side surfaces of the base material interposed between the upper surface and the lower surface of the base material substantially free of the doped semiconductive material.
Embodiment 21: The base structure of Embodiment 20, further comprising a dielectric material interposed between the upper surface of the base material and the doped semiconductive material. Embodiment 22: The base structure of Embodiment 20 or Embodiment 21, wherein the doped semiconductive material is positioned directly adjacent the lower surface of the base material. Embodiment 23: The base structure of Embodiment 21 or Embodiment 22, further comprising additional dielectric material directly adjacent side surfaces of the doped semiconductive material, an uppermost surface of the doped semiconductive material substantially free of the additional dielectric material. Embodiment 24: The base structure of any one of Embodiments 20 through 23, wherein the base material comprises a substantially undoped semiconductive material. Embodiment 25: The base structure of any one of Embodiments 20 through 24, wherein the base material comprises one or more of borosilicate glass, phosphosilicate glass, fluorosilicate glass, borophosphosilicate glass, aluminosilicate glass, an alkaline earth boro-aluminosilicate glass, quartz, titania silicate glass, and soda-lime glass. Embodiment 26: The base structure of any one of Embodiments 20 through 24, wherein the base material comprises one or more of poly-aluminum nitride, silicon on poly-aluminum nitride, aluminum nitride, aluminum oxide, and silicon carbide. Embodiment 27: A base structure for a microelectronic device, comprising: a base material comprising one or more of semiconductive material, ceramic material, and glass material; a doped semiconductive material on the base material; a dielectric material on the doped semiconductive material; filled openings extending through the dielectric material to the doped semiconductive material; and an epitaxial semiconductive material substantially filling the filled openings and covering surfaces of the dielectric material outside of the filled openings. Embodiment 28: The base structure of
Embodiment 27, wherein: the base material comprises silicon; the doped semiconductive material comprises conductively doped silicon; the dielectric material comprises silicon oxide; and the epitaxial semiconductive material comprises epitaxial silicon. Embodiment 29: The base structure of Embodiment 27, wherein the base material comprises the ceramic material or the glass material. Embodiment 30: A base structure for a microelectronic device, the base structure comprising: a base material comprising one or more of a semiconductive material, a ceramic material, and a glass material; doped polysilicon on a first side of the base material and on a second, opposite side of the base material; and a dielectric material adjacent side surfaces of the doped polysilicon on one of the first side and the second, opposite side of the base material. Embodiment 31: The base structure of Embodiment 30, wherein a thickness of the doped polysilicon on the first side of the base material is substantially the same as a thickness of the doped polysilicon on the second, opposite side of the base material. While certain illustrative embodiments have been described in connection with the figures, those of ordinary skill in the art will recognize and appreciate that embodiments encompassed by the disclosure are not limited to those embodiments explicitly shown and described herein. Rather, many additions, deletions, and modifications to the embodiments described herein may be made without departing from the scope of embodiments encompassed by the disclosure, such as those hereinafter claimed, including legal equivalents. In addition, features from one disclosed embodiment may be combined with features of another disclosed embodiment while still being encompassed within the scope of the disclosure.
A method of producing a FinFET device with a fin pitch of less than 20 nm is presented. In accordance with some embodiments, fins are deposited on sidewall spacers, which themselves are deposited on mandrels. The mandrels can be formed by lithographic processes, while the fins and sidewall spacers are formed by deposition technologies.
1. A method of forming a fin of a dual fin FinFET device, comprising: forming a mandrel using a photolithographic etching process; forming sidewall spacers on the mandrel; and forming fins on the sidewall spacers, wherein a dual fin FinFET device is formed on each of the sidewall spacers.
2. The method of claim 1, wherein a pitch of the dual fin FinFET device is less than 20 nm.
3. The method of claim 1, wherein a spacing between a pair of the dual fin FinFET devices is determined by a width of the mandrel.
4. The method of claim 1, wherein a distance between mandrels separates an nMOS FinFET device from a pMOS FinFET device.
5. The method of claim 1, wherein the pitch of the dual fin FinFET device is determined by a sidewall spacer width and a fin width.
6. The method of claim 1, further comprising removing the mandrel and the sidewall spacers.
7. A method of forming a multi-fin device, comprising: forming one or more mandrels having a first pitch and a first width; forming sidewall spacers on each side of the one or more mandrels, the sidewall spacers each having a second width; and forming fins on sides of the sidewall spacers, wherein the fins have a pitch of less than 20 nm.
8. The method of claim 7, wherein the mandrels are formed using a lithographic exposure and etching process.
9. The method of claim 7, wherein the sidewall spacers and the fins are deposited using a material deposition technique.
10. The method of claim 9, wherein the material deposition technique is a 7 nm technique.
11. The method of claim 7, further comprising removing the mandrels and the sidewall spacers to leave the fins.
12. The method of claim 7, wherein the multi-fin device is a dual fin device formed on one of the sidewall spacers.
13. The method of claim 12, wherein adjacent dual fin devices are separated according to the width of the mandrel.
14. The method of claim 7, wherein the multi-fin device comprises more than two fins formed on adjacent sidewall spacers.
15. A multi-fin device, comprising: a plurality of
fins formed on sidewall spacers by deposition, the sidewall spacers being separated by a mandrel, wherein a pitch of the plurality of fins is less than 20 nm.
16. The multi-fin device of claim 15, wherein the plurality of fins comprises two fins formed on opposite sides of one sidewall spacer.
17. The multi-fin device of claim 16, wherein the two fins are separated from another dual fin device.
18. The multi-fin device of claim 15, wherein the plurality of fins comprises more than two fins formed on sides of adjacent sidewall spacers.
19. The multi-fin device of claim 15, wherein the sidewall spacers are removed.
20. A multi-fin device, comprising: means for providing a plurality of fins with a pitch of less than 20 nm.
21. The multi-fin device of claim 20, wherein the means for providing a plurality of fins comprises: means for depositing a mandrel; and means for depositing sidewall spacers on the mandrel.
New self-aligned quadruple patterning process for fin spacings less than 20 nm

Cross-Reference to Related Applications

The present application claims priority to U.S. Application Serial No. 15/271,043, filed on Sep.

Technical Field

This application relates to the fabrication of FinFET structures having a pitch of less than twenty (20) nanometers (nm).

Background

FinFETs are increasingly used to effectively scale integrated circuits. A FinFET, having a vertical fin structure functioning as a channel, occupies less horizontal space on a semiconductor substrate and can be formed in logic regions and storage regions by general semiconductor patterning processes. However, the continued pressure of further scaling integrated circuits has created a need for processes that form smaller and smaller fin structures. Limitations of optical resolution in current lithography processes do not allow for the formation of structures with sufficiently small features to further scale the integrated circuit.
As the demand for smaller feature sizes in these devices continues, new processes for achieving target sizes need to be developed.

Summary of the Invention

In accordance with some embodiments, a method of forming a fin of a dual fin FinFET device includes forming a mandrel using a photolithographic etching process; forming sidewall spacers on the mandrel; and forming fins on the sidewall spacers, wherein a dual fin FinFET device is formed on each of the sidewall spacers. A method of forming a multi-fin device can include forming one or more mandrels having a first pitch and a first width; forming sidewall spacers on each side of the one or more mandrels, the sidewall spacers each having a second width; and forming fins on sides of the sidewall spacers, wherein the fins have a pitch of less than 20 nm. These and other embodiments are discussed more fully below in conjunction with the following figures.

Brief Description of the Drawings

FIG. 1A shows a plan view of a multi-fin FinFET device. FIG. 1B shows a cross-sectional view of a multi-fin FinFET device. FIG. 2 illustrates an example process for fabricating a FinFET device. FIG. 3 illustrates another example process for fabricating a FinFET device. FIG. 4 illustrates an example process for fabricating a FinFET device in accordance with some embodiments of the present invention. FIG. 5 illustrates another example process for fabricating a FinFET device in accordance with some embodiments of the present invention. Embodiments of the present disclosure and their advantages are best understood by referring to the following detailed description. It should be understood that the same reference numerals are used to identify the same elements in one or more of the drawings.

Detailed Description

In the following description, specific details describing some embodiments are set forth. It will be apparent to those skilled in the art, however, that some embodiments may be practiced without some or all of these specific details.
The specific embodiments disclosed herein are illustrative and not restrictive. Those skilled in the art will recognize that other elements are within the scope and spirit of the disclosure, even if not specifically described herein. The description and drawings of the aspects and embodiments of the present invention are not to be construed as limiting. Various changes may be made without departing from the spirit and scope of the invention. In some instances, well-known structures and techniques are not shown or described in detail to avoid obscuring the disclosure. FIGS. 1A and 1B illustrate a FinFET structure 100. As shown in the plan view of FIG. 1A, FinFET structure 100 includes one or more parallel fins 104-1 through 104-N formed on substrate 102. A gate structure 106 is deposited over the fins 104-1 through 104-N. Such structures may include two or more fins 104 that are evenly separated by a pitch P. The FinFET device can be an nMOS device or a pMOS device depending on the formation of the fins 104-1 through 104-N. FIG. 1B illustrates a cross-sectional view of the structure 100 shown in FIG. 1A. The pitch P is defined by the space between two fins plus the width of a fin, as shown in FIG. 1A. Although the FinFET structure 100 illustrated in FIGS. 1A and 1B greatly increases device density, increasing device density further creates a need for smaller feature sizes and smaller pitches in the FinFET structures used. However, fabricating FinFET structures with smaller pitches has gone beyond the limitations of current lithographic techniques. In particular, it is desirable to scale the fin pitch below 20 nm to scale the logic cell height and thus the overall chip size. Modern lithography is wavelength limited when manufacturing devices with small features. Currently, 193 nm lithography is limited to feature sizes of about 80 nm.
In other words, the 193 nm lithography process can produce features with a minimum pitch of about 80 nm using a single lithographic exposure and etch process, the minimum pitch being defined by the minimum feature width plus the minimum feature spacing. In order to achieve smaller pitch sizes, multiple patterning lithography (MPL) has been developed. Two forms of MPL have been tried, one using a repeated photolithography process (litho-etch-litho-etch, or LELE) and the other based on a self-aligned spacer process. Self-aligned spacer processing is advantageous when fabricating the fins of FinFET structures. However, due to process limitations, it has proven difficult to achieve pitches of less than 20 nm. Self-aligned spacer processing is commonly referred to as self-aligned double patterning (SADP). In SADP, a set of mandrels is formed photolithographically by patterning and etching a mandrel material. A sidewall spacer can then be formed on each sidewall of the mandrels. The formation of the sidewall spacers can be accomplished by depositing spacer material over the mandrel material, removing the material deposited on horizontal surfaces, and removing the mandrel material, leaving the sidewall spacers. The deposition of the sidewall spacers can result in a spacer width that is much smaller than the feature width achievable by photolithographically forming the mandrels. The sidewall spacers and mandrels can then be polished to expose the mandrels, and the spacers are used as an etch mask while the remaining mandrel material is removed. Thus, the SADP process involves forming a spacer film layer on the sidewalls of the pre-patterned mandrels, removing the spacer layer from horizontal surfaces, and removing the initially patterned mandrel material, leaving the spacers themselves. Since each mandrel has two sidewall spacers, the line density is now doubled. Therefore, SADP is suitable for defining a narrow gate with half of the initial lithographic pitch.
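The pitch arithmetic behind this doubling can be sketched numerically. The following is an illustrative aid, not part of the patent; it assumes only what the text states: each SADP pass halves the pitch, so n passes require an initially printed mandrel pitch of target × 2^n, which must stay at or above the roughly 80 nm limit of 193 nm lithography cited above.

```python
# Illustrative sketch (not from the patent): each self-aligned double
# patterning (SADP) pass places a spacer on both sidewalls of every
# line, doubling line density and halving the pitch.

def sadp_pitch(initial_pitch_nm: int, passes: int) -> int:
    """Pitch after a number of SADP passes, each halving the pitch."""
    pitch = initial_pitch_nm
    for _ in range(passes):
        pitch //= 2  # one SADP pass halves the pitch
    return pitch

def required_mandrel_pitch(target_pitch_nm: int, passes: int) -> int:
    """Initially printed mandrel pitch needed to reach a target pitch."""
    return target_pitch_nm * 2 ** passes

print(sadp_pitch(80, 2))              # SAQP from the 80 nm limit -> 20
print(required_mandrel_pitch(16, 2))  # 16 nm via SAQP would need 64 nm mandrels
print(required_mandrel_pitch(16, 3))  # 16 nm via SAOP needs 128 nm -> printable
```

These are the same numbers worked through for FIG. 2 (P1 = 80 nm giving a 20 nm fin pitch after SAQP) and FIG. 3 (P2 = 128 nm giving 16 nm after SAOP).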
In theory, this spacer method can be repeated to halve the spacing between the spacers again. For example, a second SADP process, referred to as self-aligned quadruple patterning (SAQP), can result in a quarter of the pitch of the initially formed mandrels. FIG. 2 illustrates a SAQP process 200. As shown in FIG. 2, mandrels 202-1 through 202-4 are deposited at a pitch of P1. FIG. 2 illustrates mandrels 202-1 through 202-4, but any number of mandrels 202 can be formed. As discussed above, the mandrels 202 are patterned and etched using a photolithographic process. Then, in a SADP process, sidewall material is deposited on the mandrels 202, the sidewall material on horizontal surfaces is removed, and an etch is performed to remove the mandrels 202, forming sidewall spacers 204-1 through 204-8 on the mandrels 202-1 through 202-4. As shown in FIG. 2, sidewall spacers 204-1 and 204-2 are formed on opposite sides of the mandrel 202-1; sidewall spacers 204-3 and 204-4 are formed on opposite sides of the mandrel 202-2; sidewall spacers 204-5 and 204-6 are formed on opposite sides of the mandrel 202-3; and sidewall spacers 204-7 and 204-8 are formed on opposite sides of the mandrel 202-4. In a second SADP process on the sidewall spacers 204, sidewall spacers 206-1 through 206-8 and fins 208-1 through 208-8 are formed on the sidewalls of the sidewall spacers 204. As shown in FIG. 2, sidewall spacer 206-1 and fin 208-1 are formed on opposite sides of sidewall spacer 204-1; fin 208-2 and sidewall spacer 206-2 are formed on opposite sides of sidewall spacer 204-2; sidewall spacer 206-3 and fin 208-3 are formed on opposite sides of sidewall spacer 204-3; fin 208-4 and sidewall spacer 206-4 are formed on opposite sides of sidewall spacer 204-4; sidewall spacer 206-5 and fin 208-5 are formed on opposite sides of sidewall spacer 204-5; fin 208-6 and sidewall spacer 206-6 are formed on opposite sides of sidewall spacer 204-6; sidewall spacer 206-7 and fin 208-7 are formed on opposite sides of sidewall spacer 204-7; and fin 208-8 and sidewall spacer 206-8 are formed on opposite sides of sidewall spacer 204-8. Then, spacers 204-1 through 204-8 and spacers 206-1 through 206-8 are removed, leaving fins 208-1 through 208-8. Thus, fins 208-1 and 208-2 form part of one dual fin FinFET device; fins 208-3 and 208-4 form part of a second; fins 208-5 and 208-6 form part of a third; and fins 208-7 and 208-8 form part of a fourth. As shown in FIG. 2, only one fin 208 is formed on each sidewall spacer 206. As further shown in FIG. 2, the spacing between the deposited features is halved in each successive SADP process. Thus, if the spacing between the mandrels 202 is P1, the spacing between the spacers 204 is P1/2 and the final spacing between the fins 208 in a single device is P1/4. In addition, the spacing between the devices is P1. At the limit of the 193 nm lithography process, P1 is 80 nm and the pitch between the fins is 20 nm. Therefore, the SAQP process shown in FIG. 2 cannot produce a pitch of less than 20 nm between the fins. FIG. 3 illustrates a self-aligned octuple patterning (SAOP) process that can achieve a pitch of less than 20 nm using a 193 nm lithography process. The SAOP is performed by three consecutive SADP processes, resulting in a pitch of 1/8 of the spacing between the mandrels. As shown in FIG.
3, a lithography process is used to pattern the mandrel 302-1 and the mandrel 302-2. The mandrel 302 is deposited at a pitch P2. As shown in FIG. 3, sidewall spacers 304 are then deposited on the mandrel 302 in a first SADP process. Thus, sidewall spacers 304-1 and 304-2 are formed on opposite sides of the mandrel 302-1, and sidewall spacers 304-3 and 304-4 are formed on opposite sides of the mandrel 302-2. The mandrel 302 is then removed leaving the sidewall spacers 304. As shown in Figure 3, the sidewall spacers 304 have a pitch of P2/2. In the second SADP process, sidewall spacers 306 are formed on sidewall spacers 304 and sidewall spacers 304 are removed. As shown in Figure 3, sidewall spacers 306-1 and 306-2 are formed on opposite sides of sidewall spacers 304-1; sidewall spacers 306-3 and 306-4 are formed in sidewall spacers On opposite sides of 304-2; sidewall spacers 306-5 and 306-6 are formed on opposite sides of sidewall spacer 304-3; and sidewall spacers 306-7 and 306-8 are formed at sidewall spacing On the opposite side of piece 304-4. The spacing between the sidewall spacers 306 is now P2/4.In yet another third SADP process, sidewall spacers 307 and fins 308 are formed on sidewall spacers 306, after which spacers 307 and spacers 306 are removed leaving fins 308. As shown in FIG. 3, sidewall spacers 307-1 and fins 308-1 are formed on opposite sides of the sidewall spacers 306-1; fins 308-2 and sidewall spacers 307-2 are formed on the sidewalls. 
on opposite sides of sidewall spacer 306-2; sidewall spacer 307-3 and fin 308-3 are formed on opposite sides of sidewall spacer 306-3; fin 308-4 and sidewall spacer 307-4 are formed on opposite sides of sidewall spacer 306-4; sidewall spacer 307-5 and fin 308-5 are formed on opposite sides of sidewall spacer 306-5; fin 308-6 and sidewall spacer 307-6 are formed on opposite sides of sidewall spacer 306-6; sidewall spacer 307-7 and fin 308-7 are formed on opposite sides of sidewall spacer 306-7; and fin 308-8 and sidewall spacer 307-8 are formed on opposite sides of sidewall spacer 306-8. The resulting pitch between the fins 308 and the sidewall spacers 307 is P2/8. Again, only one fin 308 is formed on each of the sidewall spacers 306.

If P2 is, for example, 128 nm, then P2/2 is 64 nm, P2/4 is 32 nm, and P2/8 is 16 nm. A fin pitch of 16 nm can therefore be achieved, with a 32 nm device separation once the dummy fins (the sidewall spacers 307) are removed, using the SAOP process. However, the required third SADP process adds too many process steps, adds cost, complicates the process flow, and is difficult to achieve within the constraints of the material deposition process.

FIG. 4 illustrates an example of an SAQP process for producing a dual fin device having a fin pitch of less than 20 nm, in accordance with some embodiments of the present invention. As shown in FIG. 4, mandrels 402 are deposited in a photolithography process. Mandrel 402-1 and mandrel 402-2 are illustrated in FIG. 4. The mandrels 402-1 and 402-2 are deposited at a pitch P3 and a width W1. Sidewall spacers 404 are deposited on the sidewalls of the mandrels 402. Thus, sidewall spacers 404-1 and 404-2 are formed on opposite sides of mandrel 402-1, and sidewall spacers 404-3 and 404-4 are formed on opposite sides of mandrel 402-2.
However, instead of setting the width W2 of the sidewall spacers 404 such that the pitch between the sidewall spacers 404 is P3/2, the width W2 is set so as to determine the final pitch of the fins 406.

As shown in FIG. 4, fins 406 are formed on the sidewalls of the sidewall spacers 404. As illustrated, fins 406-1 and 406-2 are formed on opposite sides of sidewall spacer 404-1; fins 406-3 and 406-4 are formed on opposite sides of sidewall spacer 404-2; fins 406-5 and 406-6 are formed on opposite sides of sidewall spacer 404-3; and fins 406-7 and 406-8 are formed on opposite sides of sidewall spacer 404-4. In some embodiments, the width W2 of the sidewall spacers 404 and the width W3 of the fins 406 are the same.

As further illustrated in FIG. 4, the mandrels 402 can be formed at a pitch of P3. The fins 406 in each device have a pitch P, and the devices are separated by a spacing D. As an example, if the sum of the width W2 of the sidewall spacer 404 and the width W3 of the fin 406 is 16 nm, the pitch P is 16 nm. For instance, if both W2 and W3 are 8 nm (an achievable size for depositing sidewall material using a 7 nm process technique), the pitch P is 16 nm. By varying the width W1 of the mandrels 402 and the spacer width W2, the pitch PS of the sidewall spacers 404 can be set to P3/2. The spacing D between the resulting dual fin devices is given by the sum of W1 and W2. However, the spacing between the mandrels 402 may not result in a uniform distance between each sidewall spacer 404.

Therefore, as shown in FIG. 4, fins having a small pitch (a pitch of less than 20 nm) are formed in a process that includes forming mandrels 402 having a width W1 and a pitch P3 by a photolithography and etching process. The sidewall spacers 404 are formed by depositing material on the sides of the mandrels 402, and the mandrel material is removed to leave the sidewall spacers 404.
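The pitch relations described above can be checked with simple arithmetic. The following is a minimal sketch, not part of the specification; the 80 nm, 128 nm, and 8 nm values come from the examples in the text, while the 24 nm mandrel width used for the device spacing is an assumed illustrative value.

```python
def sadp_pitch(mandrel_pitch_nm: float, n_sadp_passes: int) -> float:
    """Each self-aligned double patterning (SADP) pass halves the pitch,
    so SAQP (two passes) yields P/4 and SAOP (three passes) yields P/8."""
    return mandrel_pitch_nm / (2 ** n_sadp_passes)

def fin_pitch(spacer_width_w2_nm: float, fin_width_w3_nm: float) -> float:
    """Fin pitch P in the FIG. 4 embodiment: the sum of the sidewall
    spacer width W2 and the fin width W3."""
    return spacer_width_w2_nm + fin_width_w3_nm

def device_spacing(mandrel_width_w1_nm: float, spacer_width_w2_nm: float) -> float:
    """Spacing D between adjacent dual fin devices: the sum of W1 and W2."""
    return mandrel_width_w1_nm + spacer_width_w2_nm

saqp = sadp_pitch(80, 2)    # SAQP of FIG. 2: P1 = 80 nm gives a 20 nm fin pitch
saop = sadp_pitch(128, 3)   # SAOP of FIG. 3: P2 = 128 nm gives a 16 nm fin pitch
p = fin_pitch(8, 8)         # FIG. 4: W2 = W3 = 8 nm gives 16 nm without a third SADP
d = device_spacing(24, 8)   # assumed W1 = 24 nm, for illustration only
```

The comparison makes the trade-off concrete: the FIG. 4 embodiment reaches the 16 nm fin pitch of SAOP while performing only the two deposition steps of SAQP.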
The sidewall spacers 404 each have a width W2, and the sidewall spacers have a pitch PS. Fins 406 are then formed on the sidewalls of the spacers 404, with each spacer 404 being used for the formation of a single dual fin device. As a result, when the spacers 404 are removed, the fins 406 deposited on their sidewalls are not removed (i.e., there are no dummy spacers to remove). In some embodiments, the mandrel pitch P3 can be used to separate NMOS FinFET devices from PMOS FinFET devices.

The example embodiment of the invention illustrated in FIG. 4 can produce a fin pitch of less than 20 nm, primarily because the fin pitch depends only on the ability to deposit the sidewall spacers 404 and the fins 406 at a particular width. In 7 nm technology, those deposition widths can be as low as 7 nm, and a width of 8 nm or more is readily available. The spacing D between the devices still depends on the process limitations involved in forming the mandrels 402.

FIG. 5 illustrates an example of a process for fabricating a multi-fin FinFET device in which the number of fins is greater than two, in accordance with some embodiments of the present invention. Although a fin pitch of less than 20 nm may not be easy to achieve due to limitations in process technology, the process illustrated in FIG. 5 can be used to fabricate a multi-fin device with a fin pitch greater than 20 nm.

As shown in FIG. 5, mandrels 502 are formed by a photolithography and etching process. The mandrels 502 (illustrated as mandrel 502-1 through mandrel 502-4) have a pitch P4 and a width W1 (within the resolution limitations of the lithography process). As further shown, sidewall spacers 504 are formed on the mandrels 502.
Specifically, sidewall spacers 504-1 and 504-2 are formed on opposite sides of mandrel 502-1; sidewall spacers 504-3 and 504-4 are formed on opposite sides of mandrel 502-2; sidewall spacers 504-5 and 504-6 are formed on opposite sides of mandrel 502-3; and sidewall spacers 504-7 and 504-8 are formed on opposite sides of mandrel 502-4.

As further shown, fins 506 and sacrificial sidewall spacers 507 are formed on the sidewall spacers 504. FIG. 5 illustrates the fabrication of a three fin device; however, the sidewall spacers 504 on a single mandrel 502 can also be used to fabricate a four fin device. Devices having more than four fins can be fabricated using sidewall spacers 504 from adjacent mandrels 502.

In the example three fin device illustrated in FIG. 5, the fins of each device may span adjacent mandrels 502. As shown in FIG. 5, fins 506-1 and 506-2 are formed on opposite sides of sidewall spacer 504-1. During formation of the fins 506, sacrificial spacers 507-1 and 507-2 are formed on opposite sides of sidewall spacer 504-2 and are removed. Fins 506-3 and 506-4 are formed on opposite sides of sidewall spacer 504-3, and fin 506-5 (forming the third fin of the device that includes fins 506-3, 506-4, and 506-5) is formed on a first side of sidewall spacer 504-4. A sacrificial spacer 507-3 is formed on a second side of sidewall spacer 504-4. As further illustrated in FIG. 5, sacrificial sidewall spacer 507-4 and fin 506-6 are formed on opposite sides of sidewall spacer 504-5; fins 506-7 and 506-8 are formed on opposite sides of sidewall spacer 504-6; sacrificial sidewall spacers 507-5 and 507-6 are formed on opposite sides of sidewall spacer 504-7; and fins 506-9 and 506-10 are formed on opposite sides of sidewall spacer 504-8.

Thus, as shown in FIG. 5, assuming that a larger fin pitch can be tolerated, a device having more than two fins can be formed in accordance with some embodiments of the present invention. As shown in FIG.
5, the spacing between the fins 506 is determined by the width W2 of the sidewall spacers 504. Therefore, the fin pitch (shown as P4/4 by way of example in FIG. 5) is given by the sum of the width W2 of the sidewall spacers 504 and the width W3 of the fins 506. The width W1 of the mandrels 502 can be adjusted to produce a uniform pitch of P4/4 across the multi-fin device. In some embodiments, the width W2 of the spacers 504 is the same as the width W3 of the fins 506.

In the foregoing specification, various embodiments have been described with reference to the drawings. It is apparent, however, that various modifications and changes can be made without departing from the scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The invention relates to sequential prefetching through a linking array. A pre-fetch manager may detect that a set of tags occupying a queue of a memory subsystem corresponds to a single read descriptor indicating a sequential read pattern. The pre-fetch manager may determine that a number of the set of tags occupying the queue is below a queue threshold and, based on the detection and the determination, store data associated with at least one tag of the set of tags in an internal performance memory of the memory subsystem. In such cases, the pre-fetch manager may pre-fetch the data from a memory manager and store it in the internal performance memory.
1. A method for memory operation, comprising: detecting that a set of tags occupying a queue of a memory subsystem corresponds to a single read descriptor indicating a sequential read mode; determining that a number of the set of tags occupying the queue is below a queue threshold; and storing, based at least in part on detecting that the set of tags occupying the queue of the memory subsystem corresponds to the sequential read mode and determining that the number of the set of tags occupying the queue is below the queue threshold, data associated with at least one tag of the set of tags in an internal performance memory of the memory subsystem.

2. The method of claim 1, further comprising: transmitting, to a memory manager, a read request for the data corresponding to the at least one tag of the set of tags based at least in part on determining that the number of the set of tags occupying the queue is below the queue threshold; and receiving a read response associated with the data corresponding to the at least one tag of the set of tags based at least in part on transmitting the read request, wherein the data is stored based at least in part on receiving the read response.

3. The method of claim 1, wherein detecting that the set of tags occupying the queue of the memory subsystem corresponds to the sequential read mode comprises: determining that an amount of data in the memory subsystem is above a data threshold.

4. The method of claim 1, wherein detecting that the set of tags occupying the queue of the memory subsystem corresponds to the sequential read mode comprises: determining that a sequential read of the sequential read mode is above a sequential read threshold.

5.
The method of claim 1, wherein determining that the number of the set of tags occupying the queue is below the queue threshold comprises: determining that a number of unoccupied queue slots in the queue is above the queue threshold; and determining a number of outstanding sequential reads for the sequential read mode based at least in part on determining the number of unoccupied queue slots.

6. The method of claim 1, further comprising: receiving a command to retrieve data from a memory manager of a memory device; and allocating resources of the internal performance memory based at least in part on the receiving, wherein the data associated with the at least one tag of the set of tags is stored at the allocated resources of the internal performance memory.

7. The method of claim 1, further comprising: detecting that a second set of tags occupying the queue of the memory subsystem corresponds to a write mode; and refraining from storing the data corresponding to the at least one tag of the set of tags in the internal performance memory of the memory subsystem.

8. The method of claim 7, further comprising: flushing, from the internal performance memory, the stored data corresponding to the at least one tag of the set of tags based at least in part on the detecting.

9. The method of claim 1, further comprising: determining that the number of the set of tags occupying the queue is above the queue threshold; and removing data associated with a least recently used read stream from the internal performance memory based at least in part on the determining, wherein the data corresponding to the at least one tag is stored based at least in part on the removing.

10.
The method of claim 1, further comprising: detecting that a second set of tags occupying the queue of the memory subsystem corresponds to a non-sequential read mode; and removing data associated with a least recently used read stream from the internal performance memory based at least in part on the detecting, wherein the data corresponding to the at least one tag is stored based at least in part on the removing.

11. The method of claim 1, wherein each tag of the set of tags is linked to the at least one tag of the set of tags.

12. The method of claim 1, further comprising: assigning a sequential read of the sequential read mode to a queue slot of the queue based at least in part on determining that the number of the set of tags occupying the queue is below the queue threshold.

13. A memory system comprising: a plurality of memory components; and a processing device operably coupled to the plurality of memory components and configured to cause the memory system to: detect that a set of tags occupying a queue of a memory subsystem corresponds to a single read descriptor indicating a sequential read mode; determine that a number of the set of tags occupying the queue is below a queue threshold; and store, based at least in part on detecting that the set of tags occupying the queue of the memory subsystem corresponds to the sequential read mode and determining that the number of the set of tags occupying the queue is below the queue threshold, data corresponding to at least one tag of the set of tags in an internal performance memory of the memory subsystem.

14.
The memory system of claim 13, wherein the processing device is further configured to cause the memory system to: transmit, to a memory manager, a read request for the data corresponding to the at least one tag of the set of tags based at least in part on determining that the number of the set of tags occupying the queue is below the queue threshold; and receive a read response associated with the data corresponding to the at least one tag of the set of tags based at least in part on transmitting the read request, wherein the data is stored based at least in part on receiving the read response.

15. The memory system of claim 13, wherein the processing device is further configured to cause the memory system to: determine that an amount of data of the memory subsystem is above a data threshold, wherein detecting that the set of tags occupying the queue of the memory subsystem corresponds to the sequential read mode is based at least in part on determining that the amount of data of the memory subsystem is above the data threshold.

16. The memory system of claim 13, wherein the processing device is further configured to cause the memory system to: determine that a sequential read of the sequential read mode is above a sequential read threshold, wherein detecting that the set of tags occupying the queue of the memory subsystem corresponds to the sequential read mode is based at least in part on determining that the sequential read of the sequential read mode is above the sequential read threshold.

17.
The memory system of claim 13, wherein the processing device is further configured to cause the memory system to: determine that a number of unoccupied queue slots in the queue is above the queue threshold; and determine a number of outstanding sequential reads for the sequential read mode based at least in part on determining the number of unoccupied queue slots.

18. The memory system of claim 13, wherein the processing device is further configured to cause the memory system to: receive a command to retrieve data from a memory manager of a memory device; and allocate resources of the internal performance memory based at least in part on the receiving, wherein the data associated with the at least one tag of the set of tags is stored at the allocated resources of the internal performance memory.

19. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to: detect that a set of tags occupying a queue of a memory subsystem corresponds to a sequential read mode such that a single read descriptor is associated with the set of tags; determine that a number of the set of tags occupying the queue is below a queue threshold; and store, based at least in part on detecting that the set of tags occupying the queue of the memory subsystem corresponds to the sequential read mode and determining that the number of the set of tags occupying the queue is below the queue threshold, data corresponding to at least one tag of the set of tags in an internal performance memory of the memory subsystem.

20.
The non-transitory computer-readable storage medium of claim 19, further comprising instructions that, when executed by the processing device, cause the processing device to: transmit, to a memory manager, a command requesting the data corresponding to the at least one tag of the set of tags based at least in part on determining that the number of the set of tags occupying the queue is below the queue threshold; and receive a read response associated with the data corresponding to the at least one tag of the set of tags based at least in part on transmitting the command, wherein the data is stored based at least in part on receiving the read response.
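The three steps of claim 1 can be sketched in a few lines of Python. This is a hedged illustration only: the `Tag` structure, the `QUEUE_THRESHOLD` value, and the `fetch` callable are assumptions standing in for the SysTag queue, the queue threshold, and the memory manager read path, none of which are specified at this level of detail.

```python
from collections import namedtuple

# Illustrative stand-in for a tag occupying the queue.
Tag = namedtuple("Tag", ["address", "descriptor"])

QUEUE_THRESHOLD = 8  # assumed bound on occupied queue slots

def maybe_prefetch(queue_tags, read_descriptor, performance_memory, fetch):
    """Sketch of claim 1: detect, determine, then store."""
    # Detect that the set of tags corresponds to a single read descriptor
    # indicating a sequential read mode.
    sequential = bool(queue_tags) and all(
        tag.descriptor is read_descriptor for tag in queue_tags)
    # Determine that the number of tags occupying the queue is below the
    # queue threshold.
    low_depth = len(queue_tags) < QUEUE_THRESHOLD
    # Based on both conditions, store data for the tags in the internal
    # performance memory.
    if sequential and low_depth:
        for tag in queue_tags:
            performance_memory[tag.address] = fetch(tag.address)
        return True
    return False

descriptor = object()  # one read descriptor shared by the whole set of tags
tags = [Tag(address, descriptor) for address in (0, 1, 2)]
sram = {}
maybe_prefetch(tags, descriptor, sram, fetch=lambda address: f"data@{address}")
```

Both conditions gate the store, mirroring the "based at least in part on detecting ... and determining ..." language of the claim.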
SEQUENTIAL PREFETCHING THROUGH A LINKING ARRAY

CROSS REFERENCE

This patent application claims priority to U.S. patent application No. 16/833,306 by Virani et al., entitled "SEQUENTIAL PREFETCHING THROUGH A LINKING ARRAY," filed March 27, 2020, which is assigned to the assignee hereof and is expressly incorporated herein by reference in its entirety.

TECHNICAL FIELD

The following relates to sequential prefetching through a linking array for a memory subsystem.

BACKGROUND

A memory subsystem may include one or more memory devices that store data. The memory devices may be, for example, non-volatile memory devices and volatile memory devices. In general, a host system may utilize the memory subsystem to store data at the memory devices and retrieve data from the memory devices.

SUMMARY

A method is described. The method may include detecting that a set of tags occupying a queue of a memory subsystem corresponds to a single read descriptor indicating a sequential read mode, determining that a number of the set of tags occupying the queue is below a queue threshold, and storing data associated with at least one tag of the set of tags in an internal performance memory of the memory subsystem based at least in part on the detection and determination.

A system is described. The system may include a plurality of memory components and a processing device operably coupled to the plurality of memory components to detect that a set of tags occupying a queue of a memory subsystem corresponds to a single read descriptor indicating a sequential read mode, determine that a number of the set of tags occupying the queue is below a queue threshold, and store data associated with at least one tag of the set of tags in an internal performance memory of the memory subsystem based at least in part on the detection and determination.

A non-transitory computer-readable medium storing code is described.
The code may include instructions executable by a processor to detect that a set of tags occupying a queue of a memory subsystem corresponds to a single read descriptor indicating a sequential read mode, determine that a number of the set of tags occupying the queue is below a queue threshold, and store data associated with at least one tag of the set of tags in an internal performance memory of the memory subsystem based at least in part on the detection and determination.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the present disclosure. The drawings, however, should not be taken to limit the present disclosure to the specific embodiments, but are for explanation and understanding only.

FIG. 1 illustrates an example of a computing system including a memory subsystem in accordance with some embodiments of the present disclosure.

FIG. 2 is a flow chart of an example method for sequential prefetching through a linking array in accordance with some embodiments of the present disclosure.

FIG. 3 is a block diagram of an example system that supports sequential prefetching through a linking array in accordance with some embodiments of the present disclosure.

FIG. 4 illustrates an example of a computer system in which embodiments of the present disclosure may operate.

DETAILED DESCRIPTION

Aspects of the present disclosure are directed to sequential prefetching through a linking array. A memory subsystem may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more memory components (hereinafter also referred to as "memory devices").
The host system may provide data to be stored at the memory subsystem and may request retrieval of data from the memory subsystem.

The memory device may be a non-volatile memory device. A non-volatile memory device is a package of one or more dies. Each die may be composed of one or more planes. For some types of non-volatile memory devices (e.g., NAND devices), each plane is composed of a set of physical transfer units, such as blocks. Each block is composed of a set of pages. Each page is composed of a set of memory cells that store bits of data.

Data operations may be performed by the memory subsystem. Data operations may be host-initiated operations. For example, the host system may initiate data operations (e.g., write, read, erase) on the memory subsystem. The host system may send access requests (e.g., write commands, read commands) to the memory subsystem in order to store data on a memory device at the memory subsystem and to read data from a memory device on the memory subsystem.

The memory subsystem may utilize tags (e.g., SysTags), where each tag contains information such as a logical block address (LBA), a translation unit address (TUA), and an internal buffer address, as well as HTags. A TUA may describe user data from the host system's perspective, and an internal buffer address may track the location of data during a transfer process. An HTag may be a command applied to the LBAs contained in the tags. Tags (e.g., SysTags) may be individual entities that cannot convey information about other tags. For example, a tag may be an internal data description and control block that may be used to convey user data information, and the data itself, between various hardware and firmware components.

In some embodiments, tags may be chained by using a "Next SysTag" pointer contained within each tag, thereby grouping tags together for processing commands. Chaining of tags may allow one tag to point to another (e.g., the next) tag.
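The "Next SysTag" chaining described above behaves like a singly linked list. The sketch below is an assumption-laden illustration: the field names (`lba`, `buffer_addr`, `next_tag`) are hypothetical stand-ins, since the actual SysTag layout is not given here.

```python
class SysTag:
    """Minimal stand-in for a tag: addressing information plus a pointer
    to the next tag, so a chain can be processed with a single command."""
    def __init__(self, lba, buffer_addr):
        self.lba = lba                  # logical block address
        self.buffer_addr = buffer_addr  # internal buffer address
        self.next_tag = None            # the "Next SysTag" pointer

def chain(tags):
    """Link each tag to the next, grouping the set under one command."""
    for prev, cur in zip(tags, tags[1:]):
        prev.next_tag = cur
    return tags[0] if tags else None

def walk(head):
    """Traverse a chain from its head tag, yielding each tag's LBA."""
    while head is not None:
        yield head.lba
        head = head.next_tag

head = chain([SysTag(lba, buf) for lba, buf in [(100, 0), (101, 1), (102, 2)]])
lbas = list(walk(head))  # the whole chain is reachable from one head tag
```

The point of the structure is visible in `walk`: a single handle (the head tag) is enough to process the whole group, which is what allows one read descriptor to cover many tags.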
In some systems, individual tags may be placed into a queue within the memory subsystem and processed individually. With chaining, multiple tags may be processed with a single command because one tag may be linked to (e.g., point to) another tag. In such systems, however, chaining of tags may be underutilized, which may cause the memory subsystem to experience performance losses, increased signaling overhead, and increased processing overhead for performing operations. In such cases, underutilized chaining of tags may reduce performance of the memory subsystem, increase power consumption, or the like.

Aspects of the present disclosure address the above and other deficiencies by having a memory subsystem that pre-fetches data sequentially. In low queue depth workloads, the number of tags allocated for host commands may be low. In some instances, when a sequential read mode is recognized (i.e., when a single descriptor, such as a single read descriptor, is generated for multiple tags), the memory subsystem may pre-fetch (i.e., read) data from the NAND and store it in a static random access memory (SRAM). In such cases, when a tag is processed, the memory manager (e.g., back-end) program has already been executed (e.g., the data from the NAND has been read (i.e., pre-fetched) before the tag corresponding to the data is processed), and the read data may be accessible via the SRAM. For example, when a host command targeting the pre-fetched data is processed, the data may be passed to the host system. Because the pre-fetched data may be stored in the SRAM, the host system may access the pre-fetched data more quickly than if the back-end program read the data from the NAND each time a command was issued.
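The fast path described above, serving a host read from SRAM when the data was already pre-fetched and falling back to the back-end NAND read otherwise, can be sketched as follows. This is an illustrative assumption, not the specification's implementation; `read_nand` is a hypothetical back-end callable.

```python
def serve_host_read(address, sram_cache, read_nand):
    """Return data for a host read, preferring pre-fetched SRAM contents."""
    if address in sram_cache:       # pre-fetch already ran for this tag
        return sram_cache[address]  # fast path: no back-end NAND access
    data = read_nand(address)       # slow path: back-end program runs now
    sram_cache[address] = data      # keep it for later sequential reads
    return data

sram = {0x10: b"prefetched"}  # assume the pre-fetch manager filled this entry
nand_reads = []

def read_nand(address):
    """Hypothetical back-end read; records each NAND access it performs."""
    nand_reads.append(address)
    return b"from-nand"

hit = serve_host_read(0x10, sram, read_nand)   # served from SRAM
miss = serve_host_read(0x20, sram, read_nand)  # back-end read performed
```

The hit case never touches `read_nand`, which is exactly the latency saving the passage describes.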
Such techniques enhance the performance of the memory subsystem, which may experience improved read speeds, reduced power consumption, reduced processing complexity, and improved processing times.

Features of the present disclosure are first described in the context of a computing environment as described with reference to FIG. 1. Features of the present disclosure are then described in the context of the method and block diagram described with reference to FIGS. 2 and 3. These and other features of the present disclosure are further illustrated by and described with reference to a computer system that involves sequential prefetching through a linking array as described with reference to FIG. 4.

FIG. 1 illustrates an example of a computing system 100 including a memory subsystem 110 in accordance with some embodiments of the present disclosure. The memory subsystem 110 may include media, such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of these.

The memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices include solid-state drives (SSDs), flash drives, universal serial bus (USB) flash drives, secure digital (SD) cards, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, and hard disk drives (HDDs).
Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and various types of non-volatile DIMMs (NVDIMMs).

The computing system 100 may be a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., an airplane, drone, train, car, or other transportation vehicle), a device with Internet of Things (IoT) capabilities, an embedded computer (e.g., an embedded computer included in a vehicle, industrial equipment, or a networked commercial device), or such a computing device that includes a memory and a processing device.

The computing system 100 may include a host system 105 coupled to one or more memory subsystems 110. In some embodiments, the host system 105 is coupled to different types of memory subsystems 110. FIG. 1 illustrates one example of a host system 105 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which may be an indirect communication connection or a direct communication connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical connections, optical connections, magnetic connections, etc.

The host system 105 may include a processor chipset and a software stack executed by the processor chipset. The processor chipset may include one or more cores, one or more caches, a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a PCIe controller, a Serial Advanced Technology Attachment (SATA) controller). The host system 105 uses the memory subsystem 110, for example, to write data to the memory subsystem 110 and read data from the memory subsystem 110.

The host system 105 may be coupled to the memory subsystem 110 using a physical host interface.
Examples of the physical host interface include, but are not limited to, a SATA interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, a double data rate (DDR) memory bus, a small computer system interface (SCSI), serial attached SCSI (SAS), a DIMM interface (e.g., a DIMM socket interface that supports DDR), etc. The physical host interface may be used to transfer data between the host system 105 and the memory subsystem 110. When the memory subsystem 110 is coupled to the host system 105 through a PCIe interface, the host system 105 may further utilize a non-volatile memory express (NVMe) interface to access memory components (e.g., memory device 130). The physical host interface may provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 105. FIG. 1 illustrates the memory subsystem 110 as an example. In general, the host system 105 may access multiple memory subsystems via the same communication connection, multiple separate communication connections, and/or a combination of communication connections.

Memory devices 130, 140 may include any combination of different types of non-volatile memory devices and/or volatile memory devices. Volatile memory devices (e.g., memory device 140) may be, but are not limited to, random access memory (RAM), such as dynamic RAM (DRAM) and synchronous DRAM (SDRAM).

Some examples of non-volatile memory devices (e.g., memory device 130) include NAND-type flash memory and write-in-place memory, such as a three-dimensional cross-point ("3D cross-point") memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change in bulk resistance, in conjunction with a stackable cross-gridded data access array.
In addition, compared to many flash-based memories, cross-point non-volatile memory can perform write-in-place operations, where non-volatile memory cells can be programmed without pre-erasing the non-volatile memory cells. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).

Each of the memory devices 130 may include one or more arrays of memory cells. One type of memory cell, such as a single-level cell (SLC), may store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple-level cells (TLCs), and quad-level cells (QLCs), may store multiple bits per cell. In some embodiments, each of the memory devices 130 may include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of these. In some embodiments, a particular memory device may include an SLC portion of memory cells as well as an MLC portion, a TLC portion, or a QLC portion. The memory cells of the memory devices 130 may be grouped into pages, which may refer to a logical unit of a memory device used to store data.
For some types of memory (e.g., NAND), pages may be grouped to form blocks.

Although non-volatile memory devices such as a 3D cross-point array of non-volatile memory cells and NAND-type flash memory (e.g., 2D NAND, 3D NAND) are described, the memory device 130 may be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), NOR flash memory, and electrically erasable programmable ROM (EEPROM).

The memory subsystem controller 115 (or simply controller 115) can communicate with the memory device 130 to perform operations such as reading data, writing data, or erasing data at the memory device 130, and other such operations. The memory subsystem controller 115 may include hardware such as one or more integrated circuits and/or discrete components, buffer memory, or a combination of these. The hardware may include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory subsystem controller 115 may be a microcontroller, dedicated logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a digital signal processor (DSP)), or another suitable processor.

The memory subsystem controller 115 may include a processor 120 (e.g., a processing device) configured to execute instructions stored in a local memory 125.
In the illustrated example, the local memory 125 of the memory subsystem controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control the operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 105.

In some embodiments, the local memory 125 may include memory registers that store memory pointers, fetched data, etc. The local memory 125 may also include a read-only memory (ROM) for storing microcode. Although the example memory subsystem 110 in FIG. 1 has been illustrated as including a memory subsystem controller 115, in another example of the present disclosure, the memory subsystem 110 does not include a memory subsystem controller 115 and may instead rely on external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, the memory subsystem controller 115 may receive commands or operations from the host system 105 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory device 130 and/or the memory device 140. The memory subsystem controller 115 may be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error correction code (ECC) operations, encryption operations, cache operations, and address conversion between logical addresses (e.g., logical block addresses (LBA), namespaces) and physical addresses (e.g., physical block addresses) associated with the memory device 130. The memory subsystem controller 115 may further include host interface circuitry to communicate with the host system 105 via a physical host interface.
The host interface circuitry may convert commands received from the host system into command instructions to access the memory device 130 and/or the memory device 140, and convert responses associated with the memory device 130 and/or the memory device 140 into information for the host system 105.

The memory subsystem 110 may also include additional circuits or components not illustrated. In some examples, the memory subsystem 110 may include a cache or buffer (e.g., DRAM) and address circuits (e.g., row decoders and column decoders) that may receive addresses from the memory subsystem controller 115 and decode the addresses to access the memory device 130.

In some examples, the memory device 130 includes a local media controller 135 that operates in conjunction with the memory subsystem controller 115 to perform operations on one or more memory cells of the memory device 130. An external controller (e.g., the memory subsystem controller 115) may externally manage the memory device 130 (e.g., perform media management operations on the memory device 130). In some embodiments, the memory device 130 is a managed memory device, which is a raw memory device combined within the same memory device package with a local controller (e.g., the local media controller 135) that performs media management. An example of a managed memory device is a managed NAND (MNAND) device.

The memory subsystem 110 includes a pre-fetch manager 150 that can detect sequential reads, detect low queue depths, and pre-fetch data from the NAND for storage in an internal performance memory 155 (e.g., SRAM). In some cases, the pre-fetch manager 150 can determine a sequential read based on determining a size of data associated with one or more tags in a queue for processing at the memory subsystem 110, based on determining a number of sequential read commands, or both.
In some examples, the pre-fetch manager 150 can detect a low queue depth by determining a number of outstanding commands (e.g., read commands) in the memory subsystem 110.

In some examples, the memory subsystem controller 115 includes at least a portion of the pre-fetch manager 150. For example, the memory subsystem controller 115 may include a processor 120 (e.g., a processing device) configured to execute instructions stored in the local memory 125 for performing the operations described herein. In some examples, the pre-fetch manager 150 is part of, or in communication with, the host system 105, an application, or an operating system.

The pre-fetch manager 150 may detect a command to pre-fetch data and allocate resources for the command. In some cases, the pre-fetch manager 150 may detect a write pattern in the memory subsystem 110, or in a queue of the memory subsystem 110, and refrain from pre-fetching data based on detecting the write pattern. In such cases, the pre-fetch manager 150 may flush the pre-fetched data (e.g., erase the pre-fetched data stored at the internal performance memory 155) based on detecting the write pattern. Additional details regarding the operation of the pre-fetch manager 150 are described below.

FIG. 2 is a flow chart of an example method 200 of sequential pre-fetching through a linked array, according to some embodiments of the present disclosure. The method 200 may be performed by processing logic, which may include hardware (e.g., a processing device, a circuit, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions running or executed on a processing device), or a combination thereof. In some examples, the method 200 is performed by the pre-fetch manager 150 of FIG. 1. Although shown in a particular order or sequence, the order of the processes may be modified unless otherwise specified.
Therefore, the illustrated examples should be understood as examples only; the illustrated processes may be performed in different orders, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in each example. Other process flows are possible.

At operation 205, the processing device may detect a sequential read. For example, the processing device may detect a group of tags occupying a queue of a memory subsystem (e.g., the memory subsystem 110 of FIG. 1). In some cases, the group of tags is associated with a single read descriptor for the group of tags indicating a sequential read mode. The single read descriptor may be an instance of an HTag (e.g., a read command) or another command applicable to multiple tags of the group of tags. For example, a single read descriptor may be used for each of the tags within the group of tags, and each tag of the group of tags may be linked to at least one other tag of the group of tags.

A processing device (such as the processor 120 of FIG. 1) may determine that the amount of data of a memory subsystem is above a threshold value (e.g., that the amount of data in a queue of the memory subsystem is above a data threshold value). For example, the processing device may determine (e.g., detect) a sequential read based on determining the size of data in the memory subsystem or in a queue of the memory subsystem. In other examples, the processing device may determine that a sequential read count of a sequential read pattern is above a sequential read threshold value. For example, the processing device may determine (e.g., detect) a sequential read based on determining the number of sequential read commands received at the memory subsystem or in a queue of the memory subsystem.

In some cases, the processing device may determine that the number of the set of tags occupying the queue is above a queue threshold.
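The two detection conditions described at operation 205 — the amount of queued data exceeding a data threshold, or a run of sequential read commands exceeding a sequential read threshold — can be sketched as follows. This is an illustrative model only: the class name, the threshold values, and the LBA-contiguity check are assumptions, since the disclosure does not specify an implementation.

```python
# Illustrative sketch of sequential-read detection (hypothetical names and
# threshold values; the disclosure does not fix an implementation).

class SequentialReadDetector:
    def __init__(self, data_threshold=1 << 20, run_threshold=4):
        self.data_threshold = data_threshold  # bytes queued before detection triggers
        self.run_threshold = run_threshold    # consecutive contiguous reads required
        self.queued_bytes = 0
        self.run_length = 0
        self.next_lba = None

    def on_read(self, lba, length_bytes, block_size=4096):
        """Track a read command; return True once a sequential pattern is detected."""
        self.queued_bytes += length_bytes
        if self.next_lba is None or lba == self.next_lba:
            self.run_length += 1
        else:
            self.run_length = 1  # pattern broken; restart the run count
        self.next_lba = lba + length_bytes // block_size
        return (self.queued_bytes >= self.data_threshold
                or self.run_length >= self.run_threshold)
```

Either condition alone suffices in this sketch, mirroring the "or both" phrasing above: a burst of contiguous LBAs trips the run threshold, while a large amount of queued read data trips the data threshold even without contiguity.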
In such cases, the processing device may remove data associated with the least recently used read stream from the internal performance memory (e.g., the internal performance memory 155 of FIG. 1). For example, if a queue of the memory subsystem is occupied, the processing device may pop (e.g., remove or erase) data from the least recently used stream. Data corresponding to one of the tags may be stored based on popping the data.

The processing device may detect that a second set of tags occupying a queue of the memory subsystem corresponds to a non-sequential read mode. In such a case, the processing device may remove data associated with the least recently used read stream from the internal performance memory. For example, after receiving a number of commands not related to sequential reads, the processing device may pop data from the least recently used stream. Data corresponding to one of the tags may be stored based on the removal (e.g., popping the data). In some examples, aspects of operation 205 may be performed by the pre-fetch manager 150 as described with reference to FIG. 1.

At operation 210, the processing device may detect a low queue depth. For example, the processing device may determine that the number of the group of tags occupying the queue is below a queue threshold. In such a case, the processing device may assign a sequential read of the sequential read mode to a queue slot of the queue. For example, if a queue of the memory subsystem is not occupied, the processing device may assign a sequential read to the queue slot.

The processing device may transmit a read request for data corresponding to at least one of the tags of the set of tags to a memory manager (e.g., a backend manager).
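The least-recently-used eviction described above — popping the oldest stream's data when the internal performance memory must make room — can be sketched with an ordered mapping. The class and method names, and the fixed stream capacity, are illustrative assumptions rather than details from the disclosure.

```python
from collections import OrderedDict

# Hypothetical sketch of least-recently-used stream eviction from an
# internal performance memory; names and the capacity are illustrative.

class PrefetchBuffer:
    def __init__(self, max_streams=4):
        self.max_streams = max_streams
        self.streams = OrderedDict()  # stream_id -> prefetched data, oldest first

    def store(self, stream_id, data):
        """Store prefetched data, popping the least recently used stream if full."""
        if stream_id in self.streams:
            self.streams.move_to_end(stream_id)
        elif len(self.streams) >= self.max_streams:
            self.streams.popitem(last=False)  # pop the LRU stream
        self.streams[stream_id] = data

    def touch(self, stream_id):
        """Mark a stream as recently used and return its data, if present."""
        if stream_id in self.streams:
            self.streams.move_to_end(stream_id)
            return self.streams[stream_id]
        return None
```

In this sketch, storing data for a new tag once the buffer is full first pops the least recently used stream, matching the "pop, then store" ordering described above.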
The processing device may determine that a number of unoccupied queue slots in a queue is above a queue threshold and determine a number of outstanding sequential reads for a sequential read mode (e.g., based on the number of unoccupied slots of a queue of the memory subsystem). For example, the processing device may determine a number of outstanding commands in the memory subsystem to detect a low queue depth.

In some examples, the processing device may detect that the second set of tags occupying the queue of the memory subsystem corresponds to a write pattern. In such a case, the processing device may refrain from storing data corresponding to at least one of the tags of the set of tags in the internal performance memory of the memory subsystem. For example, the processing device may detect the write pattern and refrain from pre-fetching data based on detecting the write pattern in the memory subsystem. If the write pattern is detected in the memory subsystem, the processing device may flush the stored data corresponding to at least one of the tags of the set of tags from the internal performance memory. In some examples, aspects of operation 210 may be performed by the pre-fetch manager 150 as described with reference to FIG. 1.

At operation 215, the processing device may pre-fetch data from the NAND to store in an internal performance memory such as an SRAM. For example, the processing device may store data associated with at least one tag of the set of tags in the internal performance memory of the memory subsystem. The processing device may transmit a read request (e.g., to the NAND or a backend manager of the NAND) and receive a read response associated with data corresponding to at least one tag of the set of tags.
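The gating logic of operation 210 — pre-fetch only when the queue depth is low, and refrain (and flush) when a write pattern is detected — can be summarized in a small decision function. The function name, return values, and threshold are assumptions for illustration; the disclosure describes the conditions but not a concrete interface.

```python
# Illustrative gating logic for operation 210: pre-fetch only at low queue
# depth, and flush pre-fetched data when a write pattern is detected.
# Names and the threshold value are assumptions, not from the disclosure.

def prefetch_decision(occupied_slots, total_slots, has_write_pattern,
                      low_depth_threshold=2):
    """Return 'prefetch', 'flush', or 'skip' for the current queue state."""
    if has_write_pattern:
        return "flush"        # refrain from pre-fetching and flush stored data
    unoccupied = total_slots - occupied_slots
    if occupied_slots < low_depth_threshold and unoccupied > 0:
        return "prefetch"     # low queue depth: safe to pre-fetch into SRAM
    return "skip"
```

The write-pattern check is deliberately placed first: per the text above, detecting a write pattern overrides any queue-depth condition and triggers a flush of the stored pre-fetch data.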
In such a case, storing the data may be based on receiving the read response, such that the read response indicates the read data and the read data is stored in the internal performance memory.

In some cases, the processing device may receive a command to retrieve data from a back end of a memory device (e.g., a memory manager) and allocate resources of an internal performance memory. In such cases, data associated with at least one tag of the set of tags may be stored at the allocated resources of the internal performance memory. For example, the processing device may detect a command to pre-fetch data and allocate resources for the command. In some examples, aspects of operation 215 may be performed by a pre-fetch manager 150 as described with reference to FIG. 1.

FIG. 3 is a block diagram of an example system 300 that supports sequential pre-fetching through a linked array, according to some embodiments of the present disclosure. The system 300 may include a memory subsystem 305. The memory subsystem 305 may include a front-end manager 310, a pre-fetch manager 315, an internal performance memory 330, and a queue 320. The system 300 may also include a host system 335 and a memory manager 325.

The memory subsystem 305 may receive a command for a sequential read operation from the host system 335. The pre-fetch manager 315 may transmit a read request for data corresponding to a tag of the group of tags to the memory manager 325. The memory manager 325 may be included in the memory subsystem 305 or may be separate from the memory subsystem 305. Based on the transmitted read request, the pre-fetch manager 315 may receive a read response associated with the data corresponding to the tag of the group of tags. The pre-fetch manager 315 may store the data in the internal performance memory 330 of the memory subsystem.
Storing the data in the internal performance memory 330 may enable the pre-fetch manager 315 to pre-fetch the data when the depth of the queue 320 is low and a sequential read is detected. The pre-fetch manager 315 may pre-fetch the data for sequential reading instead of using resources to fetch the data each time a read request is received, thereby reducing processing time, overhead, and power consumption.

The tags included in the set of tags may be examples of internal data descriptors and control blocks that may be used to transfer user data information and data between hardware and firmware components. A processing core configured to receive tags may be included in the pre-fetch manager 315 or the memory manager 325. A tag may include a link to another tag in the set of tags. For example, the front-end manager 310 may send, from one processing core to a different processing core, a single tag including a link to another tag (e.g., a next tag identifier included in a field of the tag), where information associated with the tag may be stored in the internal performance memory 330. When the memory manager 325 receives the tag, the memory manager 325 may retrieve the data associated with the tag of the set of tags. If the internal performance memory 330 includes data associated with the tag, the data may be retrieved from the internal performance memory 330.

The linking of tags may allow the pre-fetch manager 315 to pre-fetch data without delaying or adversely affecting other components (e.g., the host system 335, the front-end manager 310, and the memory manager 325). For example, the pre-fetch manager 315 may link together tags that have not yet been associated with a command from the host system 335. In this case, if the memory subsystem includes a single-queue-depth environment with sequential reads, the pre-fetch manager 315 may identify (e.g., predict) a read command for the next tag (or LBA) and pre-fetch data from the memory manager 325.
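The linked-tag idea above — each tag carrying a next-tag identifier so a chain of tags can be walked to predict the next LBA — can be sketched as a small data structure. The field names and the walk helper are hypothetical; the disclosure only describes a next tag identifier held in a field of a tag.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of linked tags: each tag carries the identifier of the next
# tag in its group, so a pre-fetcher can walk the chain and predict the next
# LBA to fetch. Field names are hypothetical assumptions.

@dataclass
class Tag:
    tag_id: int
    lba: int
    next_tag_id: Optional[int] = None  # link to the next tag in the group

def walk_chain(tags_by_id, start_id):
    """Follow tag links from start_id and return the LBAs in chain order."""
    lbas, tag_id = [], start_id
    while tag_id is not None:
        tag = tags_by_id[tag_id]
        lbas.append(tag.lba)
        tag_id = tag.next_tag_id
    return lbas
```

In this sketch, the last walked LBA plus the read length would be the predicted address for the next pre-fetch; the chain ends at a tag whose link field is empty.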
If the pre-fetch manager 315 detects a low queue depth and sequential reads, the pre-fetch manager 315 may pre-fetch data from the NAND (e.g., via the memory manager 325) and store it in the SRAM (e.g., the internal performance memory 330). In such a case, when the host system 335 issues the next read request, the data is already stored in the internal performance memory and the next read request is not processed by the memory manager 325, thereby reducing processing time and improving efficiency.

The pre-fetch manager 315 may detect that a set of tags occupying the queue 320 of the memory subsystem 305 corresponds to a single read descriptor (e.g., an HTag) indicating a sequential read. For example, the pre-fetch manager 315 may detect the number of outstanding commands in the memory subsystem 305. The pre-fetch manager 315 may store the number of outstanding commands in a table to track the condition of the queue 320. For example, the pre-fetch manager 315 may identify, via the table, that a previous command is a read command. In such a case, the pre-fetch manager 315 may detect the sequential read mode based on determining that the sequential read count of the sequential read mode is above a sequential read threshold (e.g., determining that the number of consecutive read commands exceeds a threshold). In other examples, the pre-fetch manager 315 may detect the sequential read when the amount of data in the memory subsystem is above (e.g., exceeds) a data threshold.

The pre-fetch manager 315 may detect a condition (e.g., a low queue depth) of the queue 320. In some cases, the pre-fetch manager 315 may determine that the number of the set of tags occupying the queue 320 is below a queue threshold (e.g., is low). The depth of the queue 320 may be low when the number of tags allocated for host commands from the host system 335 is low. For example, the pre-fetch manager 315 may determine the number of unoccupied slots in the queue 320.
The pre-fetch manager 315 may detect the low queue depth based on determining that the number of unoccupied slots is above a threshold (e.g., exceeds the queue threshold). In some cases, the pre-fetch manager 315 may determine the number of outstanding sequential reads of the read mode after determining the number of unoccupied slots. In some examples, the pre-fetch manager 315 may assign a sequential read of the sequential read mode to a slot within the queue 320 based on determining that the queue depth is low.

In some cases, the pre-fetch manager 315 may refrain from storing data in the internal performance memory 330. For example, the pre-fetch manager 315 may detect whether a write command (e.g., a write mode) is in the memory subsystem 305 or in the queue 320. In such cases, the pre-fetch manager 315 may detect that a second set of tags occupying the queue 320 corresponds to the write mode. The write mode and the read mode may utilize separate buffer pools (e.g., in the internal performance memory 330), so the pre-fetch manager 315 would need a separate empty buffer pool to store data for the write mode. In some cases, the queue 320 may already be occupied by the read mode. Therefore, the pre-fetch manager 315 may refrain from storing data corresponding to the tags in the internal performance memory 330 of the memory subsystem 305. In other examples, the pre-fetch manager 315 may flush the stored data corresponding to the tags from the internal performance memory 330 based on detecting the write mode.

In some cases, the pre-fetch manager 315 may determine that the number of tags occupying the queue 320 is above a queue threshold (e.g., occupies each slot of the queue). In such cases, the pre-fetch manager 315 may remove (e.g., pop) data associated with the least recently used stream from the internal performance memory 330.
In other examples, the pre-fetch manager 315 may detect that a second set of tags occupying the queue 320 corresponds to a non-sequential read mode. For example, after the pre-fetch manager 315 detects a number of commands that are not related to sequential reads (e.g., are related to a write mode, an erase mode, or the like), the pre-fetch manager 315 may pop data from the read data stream (e.g., the least recently used stream).

The pre-fetch manager 315 may detect a host command (e.g., a command received from the host system 335) requesting pre-fetched data. For example, the pre-fetch manager 315 may include a coherency checker that detects host commands. For example, when a read command or a write command is received at the memory subsystem, the command may be passed through the coherency checker to verify whether outstanding data in the memory subsystem is related to the received host command. The pre-fetched data may be placed in the coherency checker with an indicator that identifies the data as pre-fetched data. The pre-fetched data may be placed in a buffer. For example, data associated with a tag may be stored in the internal performance memory 330 of the memory subsystem 305. When a read command is received from the host system 335, the command may be processed by the coherency checker. The coherency checker may identify that the data is already in the internal performance memory 330, thereby preventing the pre-fetch manager 315 from transmitting the read request to the memory manager 325 (e.g., the NAND). In such a case, the data may be provided directly from the internal performance memory 330.

In some cases, the pre-fetch manager 315 detects a command to retrieve pre-fetched data from the internal performance memory 330. To retrieve the pre-fetched data, the pre-fetch manager 315 may allocate a tag and a buffer for a single read descriptor (e.g., an HTag).
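The coherency-check path described above can be reduced to a simple dispatch: serve a host read from the internal performance memory when pre-fetched data for its address is present, and forward it to the memory manager otherwise. The function name, the cache-as-dict model, and the source labels are illustrative assumptions.

```python
# Hypothetical sketch of the coherency-check path: a host read is served
# from the internal performance memory when pre-fetched data for its LBA is
# present, and forwarded to the memory manager (NAND) otherwise.

def serve_read(lba, prefetch_cache, read_from_nand):
    """Return (data, source), where source records which path served the read."""
    if lba in prefetch_cache:
        # Hit: the read never reaches the backend memory manager.
        return prefetch_cache[lba], "internal_performance_memory"
    data = read_from_nand(lba)  # miss: fall back to the backend memory manager
    return data, "memory_manager"
```

The hit path is what yields the latency benefit described above: the request is answered from SRAM without ever being transmitted to the memory manager.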
For each command received by the memory subsystem 305, resources of the internal performance memory 330 may be allocated. In such cases, data associated with at least one tag may be stored at the allocated resources of the internal performance memory 330.

FIG. 4 illustrates an example machine of a computer system 400 in which examples of the present disclosure may operate. The computer system 400 may include an instruction set for causing the machine to perform any one or more of the techniques described herein. In some examples, the computer system 400 may correspond to a host system (e.g., the host system 105 described with reference to FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., the memory subsystem 110 described with reference to FIG. 1), or it may be used to perform operations of a controller (e.g., executing an operating system to perform operations corresponding to the pre-fetch manager 150 described with reference to FIG. 1). In some examples, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or client machine in a client-server network environment, as a peer machine in a peer-to-peer (or decentralized) network environment, or as a server or client machine in a cloud computing infrastructure or environment.

The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular phone, a network appliance, a server, a network router, a switch or a bridge, or any machine capable of executing (sequentially or otherwise) a set of instructions that specify actions to be taken by the machine.
In addition, while a single machine is described, the term "machine" may also include any collection of machines that individually or collectively execute one (or more) sets of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 400 may include a processing device 405, a main memory 410 (e.g., ROM, flash memory, DRAM such as SDRAM or Rambus DRAM (RDRAM)), a static memory 415 (e.g., flash memory, SRAM, etc.), and a data storage system 425, which communicate with each other via a bus 445.

The processing device 405 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like. More specifically, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 405 may also be one or more special-purpose processing devices, such as an ASIC, an FPGA, a DSP, a network processor, or the like. The processing device 405 is configured to execute instructions 435 for performing the operations and steps discussed herein. The computer system 400 may further include a network interface device 420 to communicate via a network 440.

The data storage system 425 may include a machine-readable storage medium 430 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 435 or software embodying any one or more of the methods or functions described herein. The instructions 435 may also reside, completely or at least partially, within the main memory 410 and/or within the processing device 405 during execution thereof by the computer system 400, the main memory 410 and the processing device 405 also constituting machine-readable storage media.
The machine-readable storage medium 430, the data storage system 425, and/or the main memory 410 may correspond to a memory subsystem.

In one example, the instructions 435 include instructions that implement functionality corresponding to a pre-fetch manager 450 (e.g., the pre-fetch manager 150 described with reference to FIG. 1). Although the machine-readable storage medium 430 is shown as a single medium, the term "machine-readable storage medium" may include a single medium or multiple media storing the one or more sets of instructions. The term "machine-readable storage medium" may also include any medium capable of storing or encoding a set of instructions for execution by a machine and causing the machine to perform any one or more of the methods of the present disclosure. Accordingly, the term "machine-readable storage medium" may include, but is not limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means by which those skilled in the data processing arts most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Typically, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated.
At times, it has proven convenient, primarily for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may be directed to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations described herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs, and magneto-optical disks), ROM, RAM, EPROM, EEPROM, magnetic or optical cards, or any type of medium suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the methods. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is described without reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.

The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having instructions stored thereon, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some examples, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium, such as ROM, RAM, magnetic disk storage media, optical storage media, flash memory components, and so forth.

In the foregoing description, examples of the present disclosure have been described with reference to specific exemplary examples thereof. It will be apparent that various modifications may be made to the present disclosure without departing from the broader scope of the examples of the present disclosure as set forth in the appended claims. Accordingly, the description and drawings are to be regarded in an illustrative sense rather than a restrictive sense.
As part of a communication session, a wireless source device can transmit audio and video data to a wireless sink device, and the wireless sink device can transmit user input data received at the wireless sink device back to the wireless source device. In this manner, a user of the wireless sink device can control the wireless source device and control the content that is being transmitted from the wireless source device to the wireless sink device. The input data received at the wireless sink device can have associated coordinate information that is scaled or normalized by either the wireless sink device or the wireless source device.
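The coordinate scaling described above — normalizing sink-side input coordinates by the ratio of the source display's resolution to the sink's display-window resolution, as in the claims that follow — can be sketched as a small helper. The function name and parameter names are illustrative assumptions; only the scaling-by-ratio idea comes from the text.

```python
# Illustrative sketch of sink-to-source coordinate normalization: input
# coordinates captured in the sink's display window are scaled by the ratio
# of the source display resolution to the window resolution. Names are
# hypothetical assumptions.

def normalize_coordinates(x, y, window_w, window_h, source_w, source_h):
    """Scale sink window coordinates into the source display's coordinate space."""
    return x * source_w / window_w, y * source_h / window_h
```

For example, a touch at the center of a 1280x720 sink window maps to the center of a 1920x1080 source display, so the source can apply the input at the correct on-screen location.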
CLAIMS: 1. A method of transmitting user data from a wireless sink device to a wireless source device, the method comprising: obtaining user input data at the wireless sink device, wherein the user input data has associated coordinate data; normalizing the associated coordinate data to generate normalized coordinate data; generating a data packet comprising the normalized coordinate data; transmitting the data packet to a wireless source device. 2. The method of claim 1, further comprising: determining if the associated coordinate data is within a display window for content being received from the wireless source device. 3. The method of claim 1, further comprising: determining a resolution of a display window for content being received from the wireless source device; receiving from the source device, an indication of the resolution of a display of the source device. 4. The method of claim 3, wherein normalizing the coordinate data comprises scaling the associated coordinate data based on a ratio of the resolution of the display window and the resolution of the display of the source. 5. The method of claim 1, wherein the associated coordinate data corresponds to a location of a mouse click event. 6. The method of claim 1, wherein the associated coordinate data corresponds to a location of a touch event. 7. A wireless sink device for transmitting user data to a wireless source device, the wireless sink device comprising: a memory storing instructions; one or more processors configured to execute the instructions, wherein upon execution of the instructions the one or more processors cause: obtaining user input data at the wireless sink device, wherein the user input data has associated coordinate data; normalizing the associated coordinate data to generate normalized coordinate data; generating a data packet comprising the normalized coordinate data; a transport unit to transmit the data packet to a wireless source device. 8. 
The wireless sink device of claim 7, wherein upon execution of the instructions the one or more processors further cause: determining if the associated coordinate data is within a display window for content being received from the wireless source device. 9. The wireless sink device of claim 7, wherein upon execution of the instructions the one or more processors further cause: determining a resolution of a display window for content being received from the wireless source device; receiving from the source device, an indication of the resolution of a display of the source device. 10. The wireless sink device of claim 9, wherein normalizing the coordinate data comprises scaling the associated coordinate data based on a ratio of the resolution of the display window and the resolution of the display of the source. 11. The wireless sink device of claim 7, wherein the associated coordinate data corresponds to a location of a mouse click event. 12. The wireless sink device of claim 7, wherein the associated coordinate data corresponds to a location of a touch event. 13. A computer-readable storage medium storing instructions that upon execution by one or more processors cause the one or more processors to perform a method of transmitting user data from a wireless sink device to a wireless source device, the method comprising: obtaining user input data at the wireless sink device, wherein the user input data has associated coordinate data; normalizing the associated coordinate data to generate normalized coordinate data; generating a data packet comprising the normalized coordinate data; transmitting the data packet to a wireless source device. 14. 
A wireless sink device for transmitting user data to a wireless source device, the wireless sink device comprising: means for obtaining user input data at the wireless sink device, wherein the user input data has associated coordinate data; means for normalizing the associated coordinate data to generate normalized coordinate data; means for generating a data packet comprising the normalized coordinate data; means for transmitting the data packet to a wireless source device. 15. A method of receiving user data from a wireless sink device at a wireless source device, the method comprising: receiving a data packet at the wireless source device, wherein the data packet comprises user input data with associated coordinate data; normalizing the associated coordinate data to generate normalized coordinate data; processing the data packet based on the normalized coordinate data. 16. The method of claim 15, further comprising: receiving from the wireless sink device a resolution of a display window for content being received from the wireless source device and location information for the display window; determining a resolution of a display of the source device. 17. The method of claim 16, wherein normalizing the coordinate data comprises scaling the associated coordinate data based on a ratio of the resolution of the display window and the resolution of the display of the source. 18. The method of claim 15, wherein the associated coordinate data corresponds to a location of a mouse click event. 19. The method of claim 15, wherein the associated coordinate data corresponds to a location of a touch event. 20. 
A wireless source device for receiving user data from a wireless sink device, the wireless source device comprising: a transport unit to receive a data packet at the wireless source device, wherein the data packet comprises user input data with associated coordinate data; a memory storing instructions; one or more processors configured to execute the instructions, wherein upon execution of the instructions the one or more processors cause: normalizing the associated coordinate data to generate normalized coordinate data; processing the data packet based on the normalized coordinate data. 21. The wireless source device of claim 20, wherein upon execution of the instructions the one or more processors further cause: receiving from the wireless sink device a resolution of a display window for content being received from the wireless source device and location information for the display window; determining a resolution of a display of the source device. 22. The wireless source device of claim 21, wherein normalizing the coordinate data comprises scaling the associated coordinate data based on a ratio of the resolution of the display window and the resolution of the display of the source. 23. The wireless source device of claim 20, wherein the associated coordinate data corresponds to a location of a mouse click event. 24. The wireless source device of claim 20, wherein the associated coordinate data corresponds to a location of a touch event. 25. 
A computer-readable storage medium storing instructions that upon execution by one or more processors cause the one or more processors to perform a method of receiving user data from a wireless sink device at a wireless source device, the method comprising: receiving a data packet at the wireless source device, wherein the data packet comprises user input data with associated coordinate data; normalizing the associated coordinate data to generate normalized coordinate data; processing the data packet based on the normalized coordinate data. 26. A wireless source device for receiving user data from a wireless sink device, the wireless source device comprising: means for receiving a data packet at the wireless source device, wherein the data packet comprises user input data with associated coordinate data; means for normalizing the associated coordinate data to generate normalized coordinate data; means for processing the data packet based on the normalized coordinate data.
USER INPUT BACK CHANNEL FOR WIRELESS DISPLAYS [0001] This application claims the benefit of U.S. Provisional Application No. 61/435,194, filed 21 January 2011; U.S. Provisional Application No. 61/447,592, filed 28 February 2011; U.S. Provisional Application No. 61/448,312, filed 2 March 2011; U.S. Provisional Application No. 61/450,101, filed 7 March 2011; U.S. Provisional Application No. 61/467,535, filed 25 March 2011; U.S. Provisional Application No. 61/467,543, filed 25 March 2011; U.S. Provisional Application No. 61/514,863, filed 3 August 2011; and U.S. Provisional Application No. 61/544,440, filed 7 October 2011; the entire contents of each of which are incorporated herein by reference. TECHNICAL FIELD [0002] This disclosure relates to techniques for transmitting data between a wireless source device and a wireless sink device. BACKGROUND [0003] Wireless display (WD) or Wi-Fi Display (WFD) systems include a wireless source device and one or more wireless sink devices. The source device and each of the sink devices may be either mobile devices or wired devices with wireless communication capabilities. One or more of the source device and the sink devices may, for example, include mobile telephones, portable computers with wireless communication cards, personal digital assistants (PDAs), portable media players, or other such devices with wireless communication capabilities, including so-called "smart" phones and "smart" pads or tablets, e-readers, or any type of wireless display, video gaming devices, or other types of wireless communication devices. One or more of the source device and the sink devices may also include wired devices such as televisions, desktop computers, monitors, projectors, and the like, that include communication capabilities. [0004] The source device sends media data, such as audio video (AV) data, to one or more of the sink devices participating in a particular media share session.
The media data may be played back at both a local display of the source device and at each of the displays of the sink devices. More specifically, each of the participating sink devices renders the received media data on its screen and audio equipment. SUMMARY [0005] This disclosure generally describes a system where a wireless source device can communicate with a wireless sink device. As part of a communication session, a wireless source device can transmit audio and video data to the wireless sink device, and the wireless sink device can transmit user inputs received at the wireless sink device back to the wireless source device. In this manner, a user of the wireless sink device can control the wireless source device and control the content that is being transmitted from the wireless source device to the wireless sink device. [0006] In one example, a method of transmitting user data from a wireless sink device to a wireless source device includes obtaining user input data at the wireless sink device, wherein the user input data has associated coordinate data; normalizing the associated coordinate data to generate normalized coordinate data; generating a data packet comprising the normalized coordinate data; transmitting the data packet to a wireless source device. [0007] In another example, a wireless sink device for transmitting user data to a wireless source device includes a memory storing instructions; one or more processors configured to execute the instructions, wherein upon execution of the instructions the one or more processors cause obtaining user input data at the wireless sink device, wherein the user input data has associated coordinate data, normalizing the associated coordinate data to generate normalized coordinate data, generating a data packet comprising the normalized coordinate data; and a transport unit to transmit the data packet to a wireless source device.
[0008] In another example, a computer-readable storage medium stores instructions that upon execution by one or more processors cause the one or more processors to perform a method of transmitting user data from a wireless sink device to a wireless source device. The method includes obtaining user input data at the wireless sink device, wherein the user input data has associated coordinate data; normalizing the associated coordinate data to generate normalized coordinate data; generating a data packet comprising the normalized coordinate data; transmitting the data packet to a wireless source device. [0009] In another example, a wireless sink device for transmitting user data to a wireless source device includes means for obtaining user input data at the wireless sink device, wherein the user input data has associated coordinate data; means for normalizing the associated coordinate data to generate normalized coordinate data; means for generating a data packet comprising the normalized coordinate data; means for transmitting the data packet to a wireless source device. [0010] In another example, a method of receiving user data from a wireless sink device at a wireless source device includes receiving a data packet at the wireless source device, wherein the data packet comprises user input data with associated coordinate data; normalizing the associated coordinate data to generate normalized coordinate data; processing the data packet based on the normalized coordinate data. 
[0011] In another example, a wireless source device for receiving user data from a wireless sink device includes a transport unit to receive a data packet at the wireless source device, wherein the data packet comprises user input data with associated coordinate data; a memory storing instructions; one or more processors configured to execute the instructions, wherein upon execution of the instructions the one or more processors cause normalizing the associated coordinate data to generate normalized coordinate data and processing the data packet based on the normalized coordinate data. [0012] In another example, a computer-readable storage medium stores instructions that upon execution by one or more processors cause the one or more processors to perform a method of receiving user data from a wireless sink device at a wireless source device. The method includes receiving a data packet at the wireless source device, wherein the data packet comprises user input data with associated coordinate data; normalizing the associated coordinate data to generate normalized coordinate data; processing the data packet based on the normalized coordinate data. [0013] In another example, a wireless source device for receiving user data from a wireless sink device includes means for receiving a data packet at the wireless source device, wherein the data packet comprises user input data with associated coordinate data; means for normalizing the associated coordinate data to generate normalized coordinate data; means for processing the data packet based on the normalized coordinate data. BRIEF DESCRIPTION OF DRAWINGS [0014] FIG. 1A is a block diagram illustrating an example of a source/sink system that may implement techniques of this disclosure. [0015] FIG. 1B is a block diagram illustrating an example of a source/sink system with two sink devices. [0016] FIG. 2 shows an example of a source device that may implement techniques of this disclosure. [0017] FIG.
3 shows an example of a sink device that may implement techniques of this disclosure. [0018] FIG. 4 shows a block diagram of a transmitter system and a receiver system that may implement techniques of this disclosure. [0019] FIGS. 5A and 5B show example message transfer sequences for performing capability negotiations according to techniques of this disclosure. [0020] FIG. 6 shows an example data packet that may be used for delivering user input data obtained at a sink device to a source device. [0021] FIGS. 7A and 7B are flow charts illustrating techniques of this disclosure that may be used for capability negotiation between a source device and a sink device. [0022] FIGS. 8A and 8B are flow charts illustrating techniques of this disclosure that may be used for transmitting and receiving data packets with user input data. [0023] FIGS. 9A and 9B are flow charts illustrating techniques of this disclosure that may be used for transmitting and receiving data packets with user input data. [0024] FIGS. 10A and 10B are flow charts illustrating techniques of this disclosure that may be used for transmitting and receiving data packets with timestamp information and user input data. [0025] FIGS. 11A and 11B are flow charts illustrating techniques of this disclosure that may be used for transmitting and receiving data packets with timestamp information and user input data. [0026] FIGS. 12A and 12B are flow charts illustrating techniques of this disclosure that may be used for transmitting and receiving data packets that include voice commands. [0027] FIGS. 13A and 13B are flow charts illustrating techniques of this disclosure that may be used for transmitting and receiving data packets with multi-touch user input commands. [0028] FIGS. 14A and 14B are flow charts illustrating techniques of this disclosure that may be used for transmitting and receiving data packets with user input data forwarded from a third party device. [0029] FIGS.
15A and 15B are flow charts illustrating techniques of this disclosure that may be used for transmitting and receiving data packets. DETAILED DESCRIPTION [0030] This disclosure generally describes a system where a wireless source device can communicate with a wireless sink device. As part of a communication session, a wireless source device can transmit audio and video data to the wireless sink device, and the wireless sink device can transmit user inputs received at the wireless sink device back to the wireless source device. In this manner, a user of the wireless sink device can control the wireless source device and control the content that is being transmitted from the wireless source device to the wireless sink device. [0031] FIG. 1A is a block diagram illustrating an exemplary source/sink system 100 that may implement one or more of the techniques of this disclosure. As shown in FIG. 1A, system 100 includes source device 120 that communicates with sink device 160 via communication channel 150. Source device 120 may include a memory that stores audio/video (A/V) data 121, display 122, speaker 123, audio/video encoder 124 (also referred to as encoder 124), audio/video control module 125, and transmitter/receiver (TX/RX) unit 126. Sink device 160 may include display 162, speaker 163, audio/video decoder 164 (also referred to as decoder 164), transmitter/receiver unit 166, user input (UI) device 167, and user input processing module (UIPM) 168. The illustrated components constitute merely one example configuration for source/sink system 100. Other configurations may include fewer components than those illustrated or may include additional components beyond those illustrated. [0032] In the example of FIG. 1A, source device 120 can display the video portion of audio/video data 121 on display 122 and can output the audio portion of audio/video data 121 on speaker 123.
Audio/video data 121 may be stored locally on source device 120, accessed from an external storage medium such as a file server, hard drive, external memory, Blu-ray disc, DVD, or other physical storage medium, or may be streamed to source device 120 via a network connection such as the Internet. In some instances audio/video data 121 may be captured in real-time via a camera and microphone of source device 120. Audio/video data 121 may include multimedia content such as movies, television shows, or music, but may also include real-time content generated by source device 120. Such real-time content may for example be produced by applications running on source device 120, or video data captured, e.g., as part of a video telephony session. As will be described in more detail, such real-time content may in some instances include a video frame of user input options available for a user to select. In some instances, audio/video data 121 may include video frames that are a combination of different types of content, such as a video frame of a movie or TV program that has user input options overlaid on the frame of video. [0033] In addition to rendering audio/video data 121 locally via display 122 and speaker 123, audio/video encoder 124 of source device 120 can encode audio/video data 121, and transmitter/receiver unit 126 can transmit the encoded data over communication channel 150 to sink device 160. Transmitter/receiver unit 166 of sink device 160 receives the encoded data, and audio/video decoder 164 decodes the encoded data and outputs the decoded data via display 162 and speaker 163. In this manner, the audio and video data being rendered by display 122 and speaker 123 can be simultaneously rendered by display 162 and speaker 163. The audio data and video data may be arranged in frames, and the audio frames may be time-synchronized with the video frames when rendered.
[0034] Audio/video encoder 124 and audio/video decoder 164 may implement any number of audio and video compression standards, such as the ITU-T H.264 standard, alternatively referred to as MPEG-4, Part 10, Advanced Video Coding (AVC), or the newly emerging high efficiency video coding (HEVC) standard, sometimes called the H.265 standard. Many other types of proprietary or standardized compression techniques may also be used. Generally speaking, audio/video decoder 164 is configured to perform the reciprocal coding operations of audio/video encoder 124. Although not shown in FIG. 1A, in some aspects, A/V encoder 124 and A/V decoder 164 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. [0035] As will be described in more detail below, A/V encoder 124 may also perform other encoding functions in addition to implementing a video compression standard as described above. For example, A/V encoder 124 may add various types of metadata to A/V data 121 prior to A/V data 121 being transmitted to sink device 160. In some instances, A/V data 121 may be stored on or received at source device 120 in an encoded form and thus not require further compression by A/V encoder 124. [0036] Although FIG. 1A shows communication channel 150 carrying audio payload data and video payload data separately, it is to be understood that in some instances video payload data and audio payload data may be part of a common data stream. If applicable, MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).
Audio/video encoder 124 and audio/video decoder 164 each may be implemented as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. Each of audio/video encoder 124 and audio/video decoder 164 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC). Thus, each of source device 120 and sink device 160 may comprise specialized machines configured to execute one or more of the techniques of this disclosure. [0037] Display 122 and display 162 may comprise any of a variety of video output devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, an organic light emitting diode (OLED) display, or another type of display device. In these or other examples, the displays 122 and 162 may each be emissive displays or transmissive displays. Display 122 and display 162 may also be touch displays such that they are simultaneously both input devices and display devices. Such touch displays may be capacitive, resistive, or other type of touch panel that allows a user to provide user input to the respective device. [0038] Speaker 123 may comprise any of a variety of audio output devices such as headphones, a single-speaker system, a multi-speaker system, or a surround sound system. Additionally, although display 122 and speaker 123 are shown as part of source device 120 and display 162 and speaker 163 are shown as part of sink device 160, source device 120 and sink device 160 may in fact be a system of devices. As one example, display 162 may be a television, speaker 163 may be a surround sound system, and decoder 164 may be part of an external box connected, either wired or wirelessly, to display 162 and speaker 163. 
In other instances, sink device 160 may be a single device, such as a tablet computer or smartphone. In still other cases, source device 120 and sink device 160 are similar devices, e.g., both being smartphones, tablet computers, or the like. In this case, one device may operate as the source and the other may operate as the sink. These roles may even be reversed in subsequent communication sessions. In still other cases, the source device may comprise a mobile device, such as a smartphone, laptop or tablet computer, and the sink device may comprise a more stationary device (e.g., with an AC power cord), in which case the source device may deliver audio and video data for presentation to a large crowd via the sink device. [0039] Transmitter/receiver unit 126 and transmitter/receiver unit 166 may each include various mixers, filters, amplifiers and other components designed for signal modulation, as well as one or more antennas and other components designed for transmitting and receiving data. Communication channel 150 generally represents any suitable communication medium, or collection of different communication media, for transmitting video data from source device 120 to sink device 160. Communication channel 150 is usually a relatively short-range communication channel, similar to Wi-Fi, Bluetooth, or the like. However, communication channel 150 is not necessarily limited in this respect, and may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines, or any combination of wireless and wired media. In other examples, communication channel 150 may even form part of a packet-based network, such as a wired or wireless local area network, a wide-area network, or a global network such as the Internet. Additionally, communication channel 150 may be used by source device 120 and sink device 160 to create a peer-to-peer link.
Source device 120 and sink device 160 may communicate over communication channel 150 using a communications protocol such as a standard from the IEEE 802.11 family of standards. Source device 120 and sink device 160 may, for example, communicate according to the Wi-Fi Direct standard, such that source device 120 and sink device 160 communicate directly with one another without the use of an intermediary such as a wireless access point or so-called hotspot. Source device 120 and sink device 160 may also establish a tunneled direct link setup (TDLS) to avoid or reduce network congestion. The techniques of this disclosure may at times be described with respect to Wi-Fi, but it is contemplated that aspects of these techniques may also be compatible with other communication protocols. By way of example and not limitation, the wireless communication between source device 120 and sink device 160 may utilize orthogonal frequency division multiplexing (OFDM) techniques. A wide variety of other wireless communication techniques may also be used, including but not limited to time division multiple access (TDMA), frequency division multiple access (FDMA), code division multiple access (CDMA), or any combination of OFDM, FDMA, TDMA and/or CDMA. Wi-Fi Direct and TDLS are intended to set up relatively short-distance communication sessions. Relatively short distance in this context may refer to, for example, less than 70 meters, although in a noisy or obstructed environment the distance between devices may be even shorter, such as less than 35 meters. [0040] In addition to decoding and rendering data received from source device 120, sink device 160 can also receive user inputs from user input device 167. User input device 167 may, for example, be a keyboard, mouse, trackball or track pad, touch screen, voice command recognition module, or any other such user input device.
UIPM 168 formats user input commands received by user input device 167 into a data packet structure that source device 120 is capable of interpreting. Such data packets are transmitted by transmitter/receiver 166 to source device 120 over communication channel 150. Transmitter/receiver unit 126 receives the data packets, and A/V control module 125 parses the data packets to interpret the user input command that was received by user input device 167. Based on the command received in the data packet, A/V control module 125 can change the content being encoded and transmitted. In this manner, a user of sink device 160 can control the audio payload data and video payload data being transmitted by source device 120 remotely and without directly interacting with source device 120. Examples of the types of commands a user of sink device 160 may transmit to source device 120 include commands for rewinding, fast forwarding, pausing, and playing audio and video data, as well as commands for zooming, rotating, scrolling, and so on. Users may also make selections, from a menu of options for example, and transmit the selection back to source device 120. [0041] Additionally, users of sink device 160 may be able to launch and control applications on source device 120. For example, a user of sink device 160 may be able to launch a photo editing application stored on source device 120 and use the application to edit a photo that is stored locally on source device 120. Sink device 160 may present a user with a user experience that looks and feels like the photo is being edited locally on sink device 160 while in fact the photo is being edited on source device 120. Using such a configuration, a device user may be able to leverage the capabilities of one device for use with several devices. For example, source device 120 may be a smartphone with a large amount of memory and high-end processing capabilities.
A user of source device 120 may use the smartphone in all the settings and situations smartphones are typically used. When watching a movie, however, the user may wish to watch the movie on a device with a bigger display screen, in which case sink device 160 may be a tablet computer or even larger display device or television. When wanting to send or respond to email, the user may wish to use a device with a keyboard, in which case sink device 160 may be a laptop. In both instances, the bulk of the processing may still be performed by source device 120 (a smartphone in this example) even though the user is interacting with a sink device. In this particular operating context, due to the bulk of the processing being performed by source device 120, sink device 160 may be a lower cost device with fewer resources than if sink device 160 were being asked to do the processing being done by source device 120. Both the source device and the sink device may be capable of receiving user input (such as touch screen commands) in some examples, and the techniques of this disclosure may facilitate two-way interaction by negotiating and/or identifying the capabilities of the devices in any given session. [0042] In some configurations, A/V control module 125 may be an operating system process being executed by the operating system of source device 120. In other configurations, however, A/V control module 125 may be a software process of an application running on source device 120. In such a configuration, the user input command may be interpreted by the software process, such that a user of sink device 160 is interacting directly with the application running on source device 120, as opposed to the operating system running on source device 120. By interacting directly with an application as opposed to an operating system, a user of sink device 160 may have access to a library of commands that are not native to the operating system of source device 120.
Additionally, interacting directly with an application may enable commands to be more easily transmitted and processed by devices running on different platforms. [0043] Source device 120 can respond to user inputs applied at wireless sink device 160. In such an interactive application setting, the user inputs applied at wireless sink device 160 may be sent back to the wireless display source over communication channel 150. In one example, a reverse channel architecture, also referred to as a user interface back channel (UIBC), may be implemented to enable sink device 160 to transmit the user inputs applied at sink device 160 to source device 120. The reverse channel architecture may include upper layer messages for transporting user inputs and lower layer frames for negotiating user interface capabilities at sink device 160 and source device 120. The UIBC may reside over the Internet Protocol (IP) transport layer between sink device 160 and source device 120. In this manner, the UIBC may be above the transport layer in the Open System Interconnection (OSI) communication model. In one example, the OSI communication model includes seven layers (1 - physical, 2 - data link, 3 - network, 4 - transport, 5 - session, 6 - presentation, and 7 - application). In this example, being above the transport layer refers to layers 5, 6, and 7. To promote reliable transmission and in-sequence delivery of data packets containing user input data, UIBC may be configured to run on top of other packet-based communication protocols such as the transmission control protocol/internet protocol (TCP/IP) or the user datagram protocol (UDP). UDP and TCP can operate in parallel in the OSI layer architecture. TCP/IP can enable sink device 160 and source device 120 to implement retransmission techniques in the event of packet loss. [0044] In some cases, there may be a mismatch between the user input interfaces located at source device 120 and sink device 160.
To resolve the potential problems created by such a mismatch and to promote a good user experience under such circumstances, user input interface capability negotiation may occur between source device 120 and sink device 160 prior to establishing a communication session or at various times throughout a communication session. As part of this negotiation process, source device 120 and sink device 160 can agree on a negotiated screen resolution. When sink device 160 transmits coordinate data associated with a user input, sink device 160 can scale coordinate data obtained from display 162 to match the negotiated screen resolution. In one example, if sink device 160 has a 1280x720 resolution and source device 120 has a 1600x900 resolution, the devices may, for example, use 1280x720 as their negotiated resolution. The negotiated resolution may be chosen based on a resolution of sink device 160, although a resolution of source device 120 or some other resolution may also be used. In the example where the sink device resolution of 1280x720 is used, sink device 160 can scale obtained x-coordinates by a factor of 1600/1280 prior to transmitting the coordinates to source device 120, and likewise, sink device 160 can scale obtained y-coordinates by 900/720 prior to transmitting the coordinates to source device 120. In other configurations, source device 120 can scale the obtained coordinates to the negotiated resolution. The scaling may either increase or decrease a coordinate range based on whether sink device 160 uses a higher resolution display than source device 120, or vice versa. [0045] Additionally, in some instances, the resolution at sink device 160 may vary during a communication session, potentially creating a mismatch between display 122 and display 162.
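The coordinate scaling in the 1280x720 sink / 1600x900 source example above can be sketched as follows. This is an illustrative sketch only and not part of the disclosure; the function name and default resolution parameters are assumptions introduced for illustration.

```python
def scale_to_source(x_sink, y_sink,
                    sink_res=(1280, 720), src_res=(1600, 900)):
    """Scale coordinates obtained at the sink display to the source
    resolution, per the example above: x by 1600/1280, y by 900/720."""
    x_scaled = x_sink * src_res[0] / sink_res[0]
    y_scaled = y_sink * src_res[1] / sink_res[1]
    return (x_scaled, y_scaled)
```

For instance, a touch at (640, 360) on the 1280x720 sink display would be transmitted as (800.0, 450.0).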
In order to improve the user experience and to ensure proper functionality, source/sink system 100 may implement techniques for reducing or preventing user interaction mismatch by implementing techniques for screen normalization. Display 122 of source device 120 and display 162 of sink device 160 may have different resolutions and/or different aspect ratios. Additionally, in some settings, a user of sink device 160 may have the ability to resize a display window for the video data received from source device 120 such that the video data received from source device 120 is rendered in a window that covers less than all of display 162 of sink device 160. In another example setting, a user of sink device 160 may have the option of viewing content in either a landscape mode or a portrait mode, each of which has unique coordinates and different aspect ratios. In such situations, coordinates associated with a user input received at sink device 160, such as the coordinates of where a mouse click or touch event occurs, may not be able to be processed by source device 120 without modification to the coordinates. Accordingly, techniques of this disclosure may include mapping the coordinates of the user input received at sink device 160 to coordinates associated with source device 120. This mapping is also referred to as normalization herein, and as will be explained in greater detail below, this mapping can be either sink-based or source-based. [0046] User inputs received by sink device 160 can be received by UI module 167, at the driver level for example, and passed to the operating system of sink device 160. The operating system on sink device 160 can receive coordinates (xSINK, ySINK) associated with where on a display surface a user input occurred. In this example, (xSINK, ySINK) can be coordinates of display 162 where a mouse click or a touch event occurred.
The display window being rendered on display 162 can have an x-coordinate length (LDW) and a y-coordinate width (WDW) that describe the size of the display window. The display window can also have an upper left corner coordinate (aDW, bDW) that describes the location of the display window. Based on LDW, WDW, and the upper left coordinate (aDW, bDW), the portion of display 162 covered by the display window can be determined. For example, an upper right corner of the display window can be located at coordinate (aDW + LDW, bDW), a lower left corner of the display window can be located at coordinate (aDW, bDW + WDW), and a lower right corner of the display window can be located at coordinate (aDW + LDW, bDW + WDW). Sink device 160 can process an input as a UIBC input if the input is received at a coordinate within the display window. In other words, an input with associated coordinates (xSINK, ySINK) can be processed as a UIBC input if the following conditions are met: aDW ≤ xSINK ≤ aDW + LDW (1) bDW ≤ ySINK ≤ bDW + WDW (2) [0047] After determining that a user input is a UIBC input, coordinates associated with the input can be normalized by UIPM 168 prior to being transmitted to source device 120. Inputs that are determined to be outside the display window can be processed locally by sink device 160 as non-UIBC inputs. [0048] As mentioned above, the normalization of input coordinates can be either source-based or sink-based. When implementing sink-based normalization, source device 120 can send a supported display resolution (LSRC, WSRC) for display 122, either with video data or independently of video data, to sink device 160. The supported display resolution may, for example, be transmitted as part of a capability negotiation session or may be transmitted at another time during a communication session.
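Conditions (1) and (2) above amount to a simple bounds check on the input coordinate. A minimal sketch, with hypothetical parameter names for the window origin and size:

```python
def is_uibc_input(x_sink, y_sink, a_dw, b_dw, l_dw, w_dw):
    """Return True if a user input at (x_sink, y_sink) falls inside the
    display window per conditions (1) and (2):
    aDW <= xSINK <= aDW + LDW and bDW <= ySINK <= bDW + WDW."""
    return (a_dw <= x_sink <= a_dw + l_dw and
            b_dw <= y_sink <= b_dw + w_dw)

# Window with upper left corner (100, 50), length 640 and width 360:
# an input inside the window is processed as a UIBC input, while an
# input outside the window is processed locally as a non-UIBC input.
print(is_uibc_input(300, 200, 100, 50, 640, 360))  # True
print(is_uibc_input(50, 200, 100, 50, 640, 360))   # False
```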
Sink device 160 can determine a display resolution (LSINK, WSINK) for display 162, the display window resolution (LDW, WDW) for the window displaying the content received from source device 120, and the upper left corner coordinate (aDW, bDW) for the display window. As described above, when a coordinate (xSINK, ySINK) corresponding to a user input is determined to be within the display window, the operating system of sink device 160 can map the coordinate (xSINK, ySINK) to source coordinates (xSRC, ySRC) using conversion functions. Example conversion functions for converting (xSINK, ySINK) to (xSRC, ySRC) can be as follows: xSRC = (xSINK - aDW) * (LSRC/LDW) (3) ySRC = (ySINK - bDW) * (WSRC/WDW) (4) [0049] Thus, when transmitting a coordinate corresponding to a received user input, sink device 160 can transmit the coordinate (xSRC, ySRC) for a user input received at (xSINK, ySINK). As will be described in more detail below, coordinate (xSRC, ySRC) may, for example, be transmitted as part of a data packet used for transmitting user input received at sink device 160 to source device 120 over the UIBC. Throughout other portions of this disclosure, where input coordinates are described as being included in a data packet, those coordinates can be converted to source coordinates as described above in instances where source/sink system 100 implements sink-based normalization. [0050] When source/sink system 100 implements source-based normalization, for user inputs determined to be UIBC inputs as opposed to local inputs (i.e. within a display window as opposed to outside a display window), the calculations above can be performed at source device 120 instead of sink device 160. To facilitate such calculations, sink device 160 can transmit to source device 120 values for LDW, WDW, and location information for the display window (e.g.
aDW, bDW), as well as coordinates for (xSINK, ySINK). Using these transmitted values, source device 120 can determine values for (xSRC, ySRC) according to equations 3 and 4 above. [0051] In other implementations of source-based normalization, sink device 160 can transmit coordinates (xDW, yDW) for a user input that describe where within the display window a user input event occurs as opposed to where on display 162 the user input event occurs. In such an implementation, coordinates (xDW, yDW) can be transmitted to source device 120 along with values for (LDW, WDW). Based on these received values, source device 120 can determine (xSRC, ySRC) according to the following conversion functions: xSRC = xDW * (LSRC/LDW) (5) ySRC = yDW * (WSRC/WDW) (6) Sink device 160 can determine xDW and yDW based on the following functions: xDW = xSINK - aDW (7) yDW = ySINK - bDW (8) [0052] When this disclosure describes transmitting coordinates associated with a user input, in a data packet for example, the transmission of these coordinates may include sink-based or source-based normalization as described above, and/or may include any additional information necessary for performing the sink-based or source-based normalization. [0053] The UIBC may be designed to transport various types of user input data, including cross-platform user input data. For example, source device 120 may run the iOS® operating system, while sink device 160 runs another operating system such as Android® or Windows®. Regardless of platform, UIPM 168 can encapsulate received user input in a form understandable to A/V control module 125. A number of different types of user input formats may be supported by the UIBC so as to allow many different types of source and sink devices to exploit the protocol regardless of whether the source and sink devices operate on different platforms.
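Conversion functions (3) through (8) above can be sketched as follows. This is a minimal illustration; the function names and the numeric values in the usage example are assumptions, not part of the disclosure:

```python
def sink_based_normalize(x_sink, y_sink, a_dw, b_dw,
                         l_dw, w_dw, l_src, w_src):
    """Equations (3) and (4): map a sink display coordinate directly
    to source coordinates (sink-based normalization)."""
    x_src = (x_sink - a_dw) * (l_src / l_dw)
    y_src = (y_sink - b_dw) * (w_src / w_dw)
    return x_src, y_src

def window_relative(x_sink, y_sink, a_dw, b_dw):
    """Equations (7) and (8): coordinates relative to the display
    window, computed at the sink device."""
    return x_sink - a_dw, y_sink - b_dw

def source_side_convert(x_dw, y_dw, l_dw, w_dw, l_src, w_src):
    """Equations (5) and (6): conversion performed at the source
    device from window-relative coordinates."""
    return x_dw * (l_src / l_dw), y_dw * (w_src / w_dw)

# Both paths yield the same source coordinates.  Window at (100, 50),
# sized 640x360; source display 1280x720; input at (420, 230):
x_dw, y_dw = window_relative(420, 230, 100, 50)             # (320, 180)
print(source_side_convert(x_dw, y_dw, 640, 360, 1280, 720))  # (640.0, 360.0)
print(sink_based_normalize(420, 230, 100, 50, 640, 360, 1280, 720))
```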
Both generic input formats and platform-specific input formats may be supported, thus providing flexibility in the manner in which user input can be communicated between source device 120 and sink device 160 by the UIBC. [0054] In the example of FIG. 1A, source device 120 may comprise a smartphone, tablet computer, laptop computer, desktop computer, Wi-Fi enabled television, or any other device capable of transmitting audio and video data. Sink device 160 may likewise comprise a smartphone, tablet computer, laptop computer, desktop computer, Wi-Fi enabled television, or any other device capable of receiving audio and video data and receiving user input data. In some instances, sink device 160 may include a system of devices, such that display 162, speaker 163, UI device 167, and A/V encoder 164 are all parts of separate but interoperative devices. Source device 120 may likewise be a system of devices rather than a single device. [0055] In this disclosure, the term source device is generally used to refer to the device that is transmitting audio/video data, and the term sink device is generally used to refer to the device that is receiving the audio/video data from the source device. In many cases, source device 120 and sink device 160 may be similar or identical devices, with one device operating as the source and the other operating as the sink. Moreover, these roles may be reversed in different communication sessions. Thus, a sink device in one communication session may become a source device in a subsequent communication session, or vice versa. [0056] FIG. 1B is a block diagram illustrating an exemplary source/sink system 101 that may implement techniques of this disclosure. Source/sink system 101 includes source device 120 and sink device 160, each of which may function and operate in the manner described above for FIG. 1A. Source/sink system 101 further includes sink device 180.
In a similar manner to sink device 160 described above, sink device 180 may receive audio and video data from source device 120 and transmit user commands to source device 120 over an established UIBC. In some configurations, sink device 160 and sink device 180 may operate independently of one another, and audio and video data output at source device 120 may be simultaneously output at sink device 160 and sink device 180. In alternate configurations, sink device 160 may be a primary sink device and sink device 180 may be a secondary sink device. In such an example configuration, sink device 160 and sink device 180 may be coupled, and sink device 160 may display video data while sink device 180 outputs corresponding audio data. Additionally, in some configurations, sink device 160 may output transmitted video data only while sink device 180 outputs transmitted audio data only. [0057] FIG. 2 is a block diagram showing one example of a source device 220. Source device 220 may be a device similar to source device 120 in FIG. 1A and may operate in the same manner as source device 120. Source device 220 includes local display 222, local speaker 223, processors 231, memory 232, transport unit 233, and wireless modem 234. As shown in FIG. 2, source device 220 may include one or more processors (i.e. processor 231) that encode and/or decode A/V data for transport, storage, and display. The A/V data may, for example, be stored at memory 232. Memory 232 may store an entire A/V file, or may comprise a smaller buffer that simply stores a portion of an A/V file, e.g., streamed from another device or source. Transport unit 233 may process encoded A/V data for network transport. For example, encoded A/V data may be processed by processor 231 and encapsulated by transport unit 233 into Network Abstraction Layer (NAL) units for communication across a network. The NAL units may be sent by wireless modem 234 to a wireless sink device via a network connection.
Wireless modem 234 may, for example, be a Wi-Fi modem configured to implement one of the IEEE 802.11 family of standards. [0058] Source device 220 may also locally process and display A/V data. In particular, display processor 235 may process video data to be displayed on local display 222, and audio processor 236 may process audio data for output on speaker 223. [0059] As described above with reference to source device 120 of FIG. 1A, source device 220 may also receive user input commands from a sink device. In this manner, wireless modem 234 of source device 220 receives encapsulated data packets, such as NAL units, and sends the encapsulated data units to transport unit 233 for decapsulation. For instance, transport unit 233 may extract data packets from the NAL units, and processor 231 can parse the data packets to extract the user input commands. Based on the user input commands, processor 231 can adjust the encoded A/V data being transmitted by source device 220 to a sink device. In this manner, the functionality described above in reference to A/V control module 125 of FIG. 1A may be implemented, either fully or partially, by processor 231. [0060] Processor 231 of FIG. 2 generally represents any of a wide variety of processors, including but not limited to one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), other equivalent integrated or discrete logic circuitry, or some combination thereof. Memory 232 of FIG.
2 may comprise any of a wide variety of volatile or non-volatile memory, including but not limited to random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, and the like. Memory 232 may comprise a computer-readable storage medium for storing audio/video data, as well as other kinds of data. Memory 232 may additionally store instructions and program code that are executed by processor 231 as part of performing the various techniques described in this disclosure. [0061] FIG. 3 shows an example of a sink device 360. Sink device 360 may be a device similar to sink device 160 in FIG. 1A and may operate in the same manner as sink device 160. Sink device 360 includes one or more processors (i.e. processor 331), memory 332, transport unit 333, wireless modem 334, display processor 335, local display 362, audio processor 336, speaker 363, and user input interface 376. Sink device 360 receives at wireless modem 334 encapsulated data units sent from a source device. Wireless modem 334 may, for example, be a Wi-Fi modem configured to implement one or more standards from the IEEE 802.11 family of standards. Transport unit 333 can decapsulate the encapsulated data units. For instance, transport unit 333 may extract encoded video data from the encapsulated data units and send the encoded A/V data to processor 331 to be decoded and rendered for output. Display processor 335 may process decoded video data to be displayed on local display 362, and audio processor 336 may process decoded audio data for output on speaker 363. [0062] In addition to rendering audio and video data, wireless sink device 360 can also receive user input data through user input interface 376.
User input interface 376 can represent any of a number of user input devices including but not limited to a touch display interface, a keyboard, a mouse, a voice command module, a gesture capture device (e.g., with camera-based input capturing capabilities), or any other of a number of user input devices. User input received through user input interface 376 can be processed by processor 331. This processing may include generating data packets that include the received user input command in accordance with the techniques described in this disclosure. Once generated, transport unit 333 may process the data packets for network transport to a wireless source device over a UIBC. [0063] Processor 331 of FIG. 3 may comprise one or more of a wide range of processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), other equivalent integrated or discrete logic circuitry, or some combination thereof. Memory 332 of FIG. 3 may comprise any of a wide variety of volatile or non-volatile memory, including but not limited to random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, and the like. Memory 332 may comprise a computer-readable storage medium for storing audio/video data, as well as other kinds of data. Memory 332 may additionally store instructions and program code that are executed by processor 331 as part of performing the various techniques described in this disclosure. [0064] FIG. 4 shows a block diagram of an example transmitter system 410 and receiver system 450, which may be used by transmitter/receiver 126 and transmitter/receiver 166 of FIG. 1A for communicating over communication channel 150.
At transmitter system 410, traffic data for a number of data streams is provided from a data source 412 to a transmit (TX) data processor 414. Each data stream may be transmitted over a respective transmit antenna. TX data processor 414 formats, codes, and interleaves the traffic data for each data stream based on a particular coding scheme selected for that data stream. [0065] The coded data for each data stream may be multiplexed with pilot data using orthogonal frequency division multiplexing (OFDM) techniques. A wide variety of other wireless communication techniques may also be used, including but not limited to time division multiple access (TDMA), frequency division multiple access (FDMA), code division multiple access (CDMA), or any combination of OFDM, FDMA, TDMA and/or CDMA. [0066] Consistent with FIG. 4, the pilot data is typically a known data pattern that is processed in a known manner and may be used at the receiver system to estimate the channel response. The multiplexed pilot and coded data for each data stream is then modulated (e.g., symbol mapped) based on a particular modulation scheme (e.g., Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), M-PSK, or M-QAM (Quadrature Amplitude Modulation), where M may be a power of two) selected for that data stream to provide modulation symbols. The data rate, coding, and modulation for each data stream may be determined by instructions performed by processor 430, which may be coupled with memory 432. [0067] The modulation symbols for the data streams are then provided to a TX MIMO processor 420, which may further process the modulation symbols (e.g., for OFDM). TX MIMO processor 420 can then provide NT modulation symbol streams to NT transmitters (TMTR) 422a through 422t. In certain aspects, TX MIMO processor 420 applies beamforming weights to the symbols of the data streams and to the antenna from which the symbol is being transmitted.
[0068] Each transmitter 422 may receive and process a respective symbol stream to provide one or more analog signals, and further conditions (e.g., amplifies, filters, and upconverts) the analog signals to provide a modulated signal suitable for transmission over the MIMO channel. NT modulated signals from transmitters 422a through 422t are then transmitted from NT antennas 424a through 424t, respectively. [0069] At receiver system 450, the transmitted modulated signals are received by NR antennas 452a through 452r and the received signal from each antenna 452 is provided to a respective receiver (RCVR) 454a through 454r. Receiver 454 conditions (e.g., filters, amplifies, and downconverts) a respective received signal, digitizes the conditioned signal to provide samples, and further processes the samples to provide a corresponding "received" symbol stream. [0070] A receive (RX) data processor 460 then receives and processes the NR received symbol streams from NR receivers 454 based on a particular receiver processing technique to provide NT "detected" symbol streams. The RX data processor 460 then demodulates, deinterleaves and decodes each detected symbol stream to recover the traffic data for the data stream. The processing by RX data processor 460 is complementary to that performed by TX MIMO processor 420 and TX data processor 414 at transmitter system 410. [0071] A processor 470, which may be coupled with a memory 472, periodically determines which pre-coding matrix to use and formulates a reverse link message. The reverse link message may comprise various types of information regarding the communication link and/or the received data stream. The reverse link message is then processed by a TX data processor 438, which also receives traffic data for a number of data streams from a data source 436, modulated by a modulator 480, conditioned by transmitters 454a through 454r, and transmitted back to transmitter system 410.
[0072] At transmitter system 410, the modulated signals from receiver system 450 are received by antennas 424, conditioned by receivers 422, demodulated by a demodulator 440, and processed by a RX data processor 442 to extract the reverse link message transmitted by the receiver system 450. Processor 430 then determines which pre-coding matrix to use for determining the beamforming weights, and then processes the extracted message. [0073] FIG. 5A is a block diagram illustrating an example message transfer sequence between a source device 520 and a sink device 560 as part of a capability negotiation session. Capability negotiation may occur as part of a larger communication session establishment process between source device 520 and sink device 560. This session may, for example, be established with Wi-Fi Direct or TDLS as the underlying connectivity standard. After establishing the Wi-Fi Direct or TDLS session, sink device 560 can initiate a TCP connection with source device 520. As part of establishing the TCP connection, a control port running a real time streaming protocol (RTSP) can be established to manage a communication session between source device 520 and sink device 560. [0074] Source device 520 may generally operate in the same manner described above for source device 120 of FIG. 1A, and sink device 560 may generally operate in the same manner described above for sink device 160 of FIG. 1A. After source device 520 and sink device 560 establish connectivity, source device 520 and sink device 560 may determine the set of parameters to be used for their subsequent communication session as part of a capability negotiation exchange. [0075] Source device 520 and sink device 560 may negotiate capabilities through a sequence of messages. The messages may, for example, be real time streaming protocol (RTSP) messages.
At any stage of the negotiations, the recipient of an RTSP request message may respond with an RTSP response that includes an RTSP status code other than RTSP OK, in which case the message exchange might be retried with a different set of parameters or the capability negotiation session may be ended. [0076] Source device 520 can send a first message (RTSP OPTIONS request message) to sink device 560 in order to determine the set of RTSP methods that sink device 560 supports. On receipt of the first message from source device 520, sink device 560 can respond with a second message (RTSP OPTIONS response message) that lists the RTSP methods supported by sink device 560. The second message may also include an RTSP OK status code. [0077] After sending the second message to source device 520, sink device 560 can send a third message (RTSP OPTIONS request message) in order to determine the set of RTSP methods that source device 520 supports. On receipt of the third message from sink device 560, source device 520 can respond with a fourth message (RTSP OPTIONS response message) that lists the RTSP methods supported by source device 520. The fourth message can also include an RTSP OK status code. [0078] After sending the fourth message, source device 520 can send a fifth message (RTSP GET_PARAMETER request message) to specify a list of capabilities that are of interest to source device 520. Sink device 560 can respond with a sixth message (an RTSP GET_PARAMETER response message). The sixth message may contain an RTSP status code. If the RTSP status code is OK, then the sixth message can also include response parameters to the parameters specified in the fifth message that are supported by sink device 560. Sink device 560 can ignore parameters in the fifth message that sink device 560 does not support.
[0079] Based on the sixth message, source device 520 can determine the optimal set of parameters to be used for the communication session and can send a seventh message (an RTSP SET_PARAMETER request message) to sink device 560. The seventh message can contain the parameter set to be used during the communication session between source device 520 and sink device 560. The seventh message can include the wfd-presentation-url that describes the Universal Resource Identifier (URI) to be used in the RTSP Setup request in order to set up the communication session. The wfd-presentation-url specifies the URI that sink device 560 can use for later messages during a session establishment exchange. The wfd-url0 and wfd-url1 values specified in this parameter can correspond to the values of rtp-port0 and rtp-port1 values in the wfd-client-rtp-ports in the seventh message. RTP in this instance generally refers to the real-time protocol, which can run on top of UDP. [0080] Upon receipt of the seventh message, sink device 560 can respond with an eighth message with an RTSP status code indicating if setting the parameters as specified in the seventh message was successful. As mentioned above, the roles of source device and sink device may reverse or change in different sessions. The order of the messages that set up the communication session may, in some cases, define the device that operates as the source and define the device that operates as the sink. [0081] FIG. 5B is a block diagram illustrating another example message transfer sequence between source device 520 and sink device 560 as part of a capability negotiation session. The message transfer sequence of FIG. 5B is intended to provide a more detailed view of the transfer sequence described above for FIG. 5A. In FIG. 5B, message "1b. GET_PARAMETER RESPONSE" shows an example of a message that identifies a list of supported input categories (e.g. generic and HIDC) and a plurality of lists of supported input types.
Each of the supported input categories of the list of supported input categories has an associated list of supported types (e.g. generic_cap_list and hidc_cap_list). In FIG. 5B, message "2a. SET_PARAMETER REQUEST" is an example of a second message that identifies a second list of supported input categories (e.g. generic and HIDC), and a plurality of second lists of supported types. Each of the supported input categories of the second list of supported input categories has an associated second list of supported types (e.g. generic_cap_list and hidc_cap_list). Message "1b. GET_PARAMETER RESPONSE" identifies the input categories and input types supported by sink device 560. Message "2a. SET_PARAMETER REQUEST" identifies input categories and input types supported by source device 520, but it may not be a comprehensive list of all input categories and input types supported by source device 520. Instead, message "2a. SET_PARAMETER REQUEST" may identify only those input categories and input types identified in message "1b. GET_PARAMETER RESPONSE" as being supported by sink device 560. In this manner, the input categories and input types identified in message "2a. SET_PARAMETER REQUEST" may constitute a subset of the input categories and input types identified in message "1b. GET_PARAMETER RESPONSE." [0082] FIG. 6 is a conceptual diagram illustrating one example of a data packet that may be generated by a sink device and transmitted to a source device. Aspects of data packet 600 will be explained with reference to FIG. 1A, but the techniques discussed may be applicable to additional types of source/sink systems. Data packet 600 may include a data packet header 610 followed by payload data 650. Payload data 650 may additionally include one or more payload headers (e.g. payload header 630). Data packet 600 may, for example, be transmitted from sink device 160 of FIG.
1 A to source device 120, such that a user of sink device 160 can control audio/video data being transmitted by source device 120. In such an instance, payload data 650 may include user input data received at sink device 160. Payload data 650 may, for example, identify one or more user commands. Sink device 160 can receive the one or more user commands, and based on the received commands, can generate data packet header 610 and payload data 650. Based on the content of data packet header 610 of data packet 600, source device 120 can parse payload data 650 to identify the user input data received at sink device 160. Based on the user input data contained in payload data 650, source device 120 may alter in some manner the audio and video data being transmitted from source device 120 to sink device 160. [0083] As used in this disclosure, the terms "parse" and "parsing" generally refer to the process of analyzing a bitstream to extract data from the bitstream. Once extracted, the data can be processed by source device 120, for example. Extracting data may, for example, include identifying how information in the bitstream is formatted. As will be described in more detail below, data packet header 610 may define a standardized format that is known to both source device 120 and sink device 160. Payload data 650, however, may be formatted in one of many possible ways. By parsing data packet header 610, source device 120 can determine how payload data 650 is formatted, and thus, source device 120 can parse payload data 650 to extract from payload data 650 one or more user input commands. This can provide flexibility in terms of the different types of payload data that can be supported in source-sink communication. As will be described in more detail below, payload data 650 may also include one or more payload headers such as payload header 630. 
In such instances, source device 120 may parse data packet header 610 to determine a format for payload header 630, and then parse payload header 630 to determine a format for the remainder of payload data 650. [0084] Diagram 620 is a conceptual depiction of how data packet header 610 may be formatted. The numbers 0-15 in row 615 are intended to identify bit locations within data packet header 610 and are not intended to actually represent information contained within data packet header 610. Data packet header 610 includes version field 621, timestamp flag 622, reserved field 623, input category field 624, length field 625, and optional timestamp field 626. [0085] In the example of FIG. 6, version field 621 is a 3-bit field that may indicate the version of a particular communications protocol being implemented by sink device 160. The value in version field 621 may inform source device 120 how to parse the remainder of data packet header 610 as well as how to parse payload data 650. In the example of FIG. 6, version field 621 is a three-bit field, which would enable a unique identifier for eight different versions. In other examples, more or fewer bits may be dedicated to version field 621. [0086] In the example of FIG. 6, timestamp flag (T) 622 is a 1-bit field that indicates whether or not timestamp field 626 is present in data packet header 610. Timestamp field 626 is a 16-bit field containing a timestamp based on multimedia data that was generated by source device 120 and transmitted to sink device 160. The timestamp may, for example, be a sequential value assigned to frames of video by source device 120 prior to the frames being transmitted to sink device 160. Timestamp flag 622 may, for example, include a "1" to indicate timestamp field 626 is present and may include a "0" to indicate timestamp field 626 is not present. 
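The header layout of diagram 620 (a 3-bit version field, a 1-bit timestamp flag, an 8-bit reserved field, and a 4-bit input category field in the first 16 bits, followed by length field 625 and the optional 16-bit timestamp field 626) can be sketched as a pack/parse pair. The most-significant-bit-first field ordering and the 16-bit width of the length field are assumptions for illustration; the figure as described specifies only the field widths of the first row:

```python
import struct

def build_header(version, has_timestamp, input_category,
                 payload_length, timestamp=0):
    """Pack data packet header 610: version (3 bits), timestamp flag
    (1 bit), reserved (8 bits, zero), input category (4 bits),
    followed by a length field and, if flagged, a 16-bit timestamp.
    Bit ordering and the 16-bit length field are assumptions."""
    first_word = ((version & 0x7) << 13 |
                  (1 if has_timestamp else 0) << 12 |
                  0x00 << 4 |          # reserved field 623
                  (input_category & 0xF))
    if has_timestamp:
        return struct.pack('>HHH', first_word, payload_length, timestamp)
    return struct.pack('>HH', first_word, payload_length)

def parse_header(data):
    """Recover (version, timestamp flag, input category, length,
    timestamp or None) from a packed header."""
    first_word, length = struct.unpack('>HH', data[:4])
    version = first_word >> 13
    t_flag = (first_word >> 12) & 0x1
    category = first_word & 0xF
    timestamp = struct.unpack('>H', data[4:6])[0] if t_flag else None
    return version, t_flag, category, length, timestamp

pkt = build_header(1, True, 2, 100, timestamp=7)
print(parse_header(pkt))  # (1, 1, 2, 100, 7)
```

When the timestamp flag is 0, the parser stops after the length field, mirroring the parsing behavior described in the following paragraph.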
Upon parsing data packet header 610 and determining that timestamp field 626 is present, source device 120 can process the timestamp included in timestamp field 626. Upon parsing data packet header 610 and determining that timestamp field 626 is not present, source device 120 may begin parsing payload data 650 after parsing length field 625, as no timestamp field is present in data packet header 610. [0087] If present, timestamp field 626 can include a timestamp to identify a frame of video data that was being displayed at wireless sink device 160 when the user input data of payload data 650 was obtained. The timestamp may, for example, have been added to the frame of video by source device 120 prior to source device 120 transmitting the frame of video to sink device 160. Accordingly, source device 120 may generate a frame of video and embed in the video data of the frame, as metadata for example, a timestamp. Source device 120 can transmit the video frame, with the timestamp, to sink device 160, and sink device 160 can display the frame of video. While the frame of video is being displayed by sink device 160, sink device 160 can receive a user command from a user. When sink device 160 generates a data packet to transfer the user command to source device 120, sink device 160 can include in timestamp field 626 the timestamp of the frame that was being displayed by sink device 160 when the user command was received. [0088] Upon receiving data packet 600 with timestamp field 626 present in the header, wireless source device 120 may identify the frame of video being displayed at sink device 160 at the time the user input data of payload data 650 was obtained and process the user input data based on the content of the frame identified by the timestamp.
For example, if the user input data is a touch command applied to a touch display or a click of a mouse pointer, source device 120 can determine the content of the frame being displayed at the time the user applied the touch command to the display or clicked the mouse. In some instances, the content of the frame may be needed to properly process the payload data. For example, a user input based on a user touch or a mouse click can be dependent on what was being shown on the display at the time of the touch or the click. The touch or click may, for example, correspond to an icon or menu option. In instances where the content of the display is changing, a timestamp present in timestamp field 626 can be used by source device 120 to match the touch or click to the correct icon or menu option. [0089] Source device 120 may, additionally or alternatively, compare the timestamp in timestamp field 626 to a timestamp being applied to a currently rendered frame of video. By comparing the timestamp of timestamp field 626 to a current timestamp, source device 120 can determine a round trip time. The round trip time generally corresponds to the amount of time that lapses from the point when a frame is transmitted by source device 120 to the point when a user input based on that frame is received back at source device 120 from sink device 160. The round trip time can provide source device 120 with an indication of system latency, and if the round trip time is greater than a threshold value, then source device 120 may ignore the user input data contained in payload data 650 under the assumption the input command was applied to an outdated display frame. When the round trip time is less than the threshold, source device 120 may process the user input data and adjust the audio/video content being transmitted in response to the user input data.
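The round-trip-time gating just described can be sketched as follows. Python is used for illustration only; the 16-bit wrap-around modulus and the treatment of timestamps as sequential counters are assumptions, since the disclosure leaves the exact timestamp representation open:

```python
def should_process_input(frame_timestamp, current_timestamp, threshold, modulus=1 << 16):
    """Return True when the round trip time is below the threshold.

    The round trip time approximates the lapse between source device 120
    transmitting a frame and receiving back user input based on that frame.
    Timestamps are assumed to be sequential counters that wrap at `modulus`
    (a 16-bit field in the example of FIG. 6).
    """
    round_trip_time = (current_timestamp - frame_timestamp) % modulus
    # Input whose round trip time exceeds the threshold is assumed to have
    # been applied to an outdated display frame and is ignored.
    return round_trip_time < threshold
```

A source device might, for example, drop a touch event when `should_process_input(ts, now, threshold=5)` returns False, and process it otherwise; the threshold value itself is programmable, as noted below.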
Thresholds may be programmable, and different types of devices (or different source-sink combinations) may be configured to define different thresholds for round trip times that are acceptable. [0090] In the example of FIG. 6, reserved field 623 is an 8-bit field that does not include information used by source device 120 in parsing data packet header 610 and payload data 650. Future versions of a particular protocol (as identified in version field 621), however, may make use of reserved field 623, in which case source device 120 may use information in reserved field 623 for parsing data packet header 610 and/or for parsing payload data 650. Reserved field 623, in conjunction with version field 621, potentially provides capabilities for expanding and adding features to the data packet format without fundamentally altering the format and features already in use. [0091] In the example of FIG. 6, input category field 624 is a 4-bit field to identify an input category for the user input data contained in payload data 650. Sink device 160 may categorize the user input data to determine an input category. Categorizing user input data may, for example, be based on the device from which a command is received or based on properties of the command itself. The value of input category field 624, possibly in conjunction with other information of data packet header 610, identifies to source device 120 how payload data 650 is formatted. Based on this formatting, source device 120 can parse payload data 650 to determine the user input that was received at sink device 160. [0092] As input category 624, in the example of FIG. 6, is 4 bits, sixteen different input categories could possibly be identified. One such input category may be a generic input format to indicate that the user input data of payload data 650 is formatted using generic information elements defined in a protocol being executed by both source device 120 and sink device 160.
A generic input format, as will be described in more detail below, may utilize generic information elements that allow for a user of sink device 160 to interact with source device 120 at the application level. [0093] Another such input category may be a human interface device command (HIDC) format to indicate that the user input data of payload data 650 is formatted based on the type of input device used to receive the input data. Examples of types of devices include a keyboard, mouse, touch input device, joystick, camera, gesture capturing device (such as a camera-based input device), and remote control. Other types of input categories that might be identified in input category field 624 include a forwarding input format to indicate user data in payload data 650 did not originate at sink device 160, an operating system specific format, and a voice command format to indicate payload data 650 includes a voice command. [0094] Length field 625 may comprise a 16-bit field to indicate the length of data packet 600. The length may, for example, be indicated in units of 8-bits. As data packet 600 is parsed by source device 120 in words of 16 bits, data packet 600 can be padded up to an integer number of 16-bit words. Based on the length contained in length field 625, source device 120 can identify the end of payload data 650 (i.e. the end of data packet 600) and the beginning of a new, subsequent data packet. [0095] The various sizes of the fields provided in the example of FIG. 6 are merely intended to be explanatory, and it is intended that the fields may be implemented using different numbers of bits than what is shown in FIG. 6. Additionally, it is also contemplated that data packet header 610 may include fewer than all the fields discussed above or may use additional fields not discussed above. Indeed, the techniques of this disclosure may be flexible in terms of the actual format used for the various data fields of the packets.
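The header layout of FIG. 6 and the 16-bit-word padding of paragraph [0094] can be sketched as a parser. This is a sketch under assumptions: the packing of version field 621 (3 bits), timestamp flag 622 (1 bit), reserved field 623 (8 bits), and input category field 624 (4 bits) into the first big-endian 16-bit word is an assumed bit order chosen for illustration, not mandated by the disclosure:

```python
import struct

def parse_data_packet_header(buf):
    """Parse the example data packet header 610 of FIG. 6 from bytes.

    Assumed layout: word 0 packs version (3 bits), timestamp flag (1 bit),
    reserved (8 bits), and input category (4 bits), most significant bit
    first; word 1 is length field 625; word 2 is the optional timestamp.
    """
    (word0,) = struct.unpack_from(">H", buf, 0)
    header = {
        "version": (word0 >> 13) & 0x7,         # version field 621
        "timestamp_flag": (word0 >> 12) & 0x1,  # timestamp flag 622
        "reserved": (word0 >> 4) & 0xFF,        # reserved field 623
        "input_category": word0 & 0xF,          # input category field 624
    }
    (header["length"],) = struct.unpack_from(">H", buf, 2)  # length field 625
    offset = 4
    if header["timestamp_flag"]:
        # Optional 16-bit timestamp field 626 is present only when flagged.
        (header["timestamp"],) = struct.unpack_from(">H", buf, offset)
        offset += 2
    return header, offset  # offset marks the start of payload data 650

def padded_length(num_octets):
    """Pad a length in octets up to an integer number of 16-bit words."""
    return num_octets + (num_octets % 2)
```

Because actual field widths and ordering are stated to be explanatory only, an implementation would adapt the shifts and masks to whatever layout the negotiated protocol version defines.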
[0096] After parsing data packet header 610 to determine a formatting of payload data 650, source device 120 can parse payload data 650 to determine the user input command contained in payload data 650. Payload data 650 may have its own payload header (payload header 630) indicating the contents of payload data 650. In this manner, source device 120 may parse payload header 630 based on the parsing of data packet header 610, and then parse the remainder of payload data 650 based on the parsing of payload header 630. [0097] If, for example, input category field 624 of data packet header 610 indicates a generic input is present in payload data 650, then payload data 650 can have a generic input format. Source device 120 can thus parse payload data 650 according to the generic input format. As part of the generic input format, payload data 650 can include a series of one or more input events, with each input event having its own input event header. Table 1, below, identifies the fields that may be included in an input event header. [0098] The generic input event (IE) identification (ID) field identifies the input type of the input event. The generic IE ID field may, for example, be one octet in length and may include an identification selected from Table 2 below. If, as in this example, the generic IE ID field is 8 bits, then 256 different types of inputs (identified 0-255) may be identifiable, although not all 256 identifications necessarily need an associated input type. Some of the 256 may be reserved for future use with future versions of whatever protocol is being implemented by sink device 160 and source device 120. In Table 2, for instance, generic IE IDs 9-255 do not have associated input types but could be assigned input types in the future. [0099] The length field in the input event header identifies the length of the describe field, while the describe field includes the information elements that describe the user input.
The formatting of the describe field may be dependent on the type of input identified in the generic IE ID field. Thus, source device 120 may parse the contents of the describe field based on the input type identified in the generic IE ID field. Based on the length field of the input event header, source device 120 can determine the end of one input event in payload data 650 and the beginning of a new input event. As will be explained in more detail below, one user command may be described in payload data 650 as one or more input events. [00100] Table 2 provides an example of input types, each with a corresponding generic IE ID that can be used for identifying the input type.

Table 2

Generic IE ID    INPUT TYPE
0                Left Mouse Down/Touch Down
1                Left Mouse Up/Touch Up
2                Mouse Move/Touch Move
3                Key Down
4                Key Up
5                Zoom
6                Vertical Scroll
7                Horizontal Scroll
8                Rotate
9-255            Reserved

[00101] The describe fields associated with each input type may have a different format. The describe fields of a Left Mouse Down/Touch Down event, a Left Mouse Up/Touch Up event, and a Mouse Move/Touch Move event may, for example, include the information elements identified in Table 3 below, although other formats could also be used in other examples. Table 3 [00102] The number of pointers may identify the number of touches or mouse clicks associated with an input event. Each pointer may have a unique pointer ID. If, for example, a multi-touch event includes a three-finger touch, then the input event might have three pointers, each with a unique pointer ID. Each pointer (i.e. each finger touch) may have a corresponding x-coordinate and y-coordinate corresponding to where the touch occurred. [00103] A single user command may be described as a series of input events.
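The input-event framing described above (a generic IE ID, a length, and a describe field) can be sketched as an iterator over payload data 650. The framing sizes are assumptions for illustration: a 1-octet generic IE ID per Table 2, an assumed 2-octet describe length, and, for touch events, an assumed describe layout of a 1-octet pointer count followed by a 1-octet pointer ID and 16-bit x and y coordinates per pointer (the actual Table 1 and Table 3 element widths are not reproduced in this text):

```python
import struct

# Generic IE IDs from Table 2 (9-255 are reserved).
TOUCH_DOWN, TOUCH_UP, TOUCH_MOVE = 0, 1, 2

def parse_input_events(payload):
    """Walk the series of input events carried in payload data 650.

    Assumed framing: 1-octet generic IE ID, 2-octet describe length,
    then the describe field itself. The touch describe layout below
    (pointer count, then id/x/y per pointer) is likewise an assumption.
    """
    events, offset = [], 0
    while offset < len(payload):
        ie_id, length = struct.unpack_from(">BH", payload, offset)
        offset += 3
        describe = payload[offset:offset + length]
        offset += length  # length field delimits one event from the next
        event = {"ie_id": ie_id}
        if ie_id in (TOUCH_DOWN, TOUCH_UP, TOUCH_MOVE):
            count = describe[0]  # number of pointers (touches/clicks)
            pointers, pos = [], 1
            for _ in range(count):
                pid, x, y = struct.unpack_from(">BHH", describe, pos)
                pointers.append({"id": pid, "x": x, "y": y})
                pos += 5
            event["pointers"] = pointers
        events.append(event)
    return events
```

Because the length field delimits each event, a parser can skip describe formats it does not recognize and still find the start of the next input event.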
For example, if a three-finger swipe is a command to close an application, the three-finger swipe may be described in payload data 650 as a touch down event with three pointers, a touch move event with three pointers, and a touch up event with three pointers. The three pointers of the touch down event may have the same pointer IDs as the three pointers of the touch move event and touch up event. Source device 120 can interpret the combination of those three input events as a three-finger swipe. [00104] The describe fields of a Key Down event or a Key Up event may, for example, include the information elements identified in Table 4 below. Table 4 [00105] The describe field of a zoom event may, for example, include the information elements identified in Table 5 below. Table 5 [00106] The describe field of a horizontal scroll event or a vertical scroll event may, for example, include the information elements identified in Table 6 below. Table 6 [00107] The above examples have shown some exemplary ways that the payload data might be formatted for a generic input category. If input category field 624 of data packet header 610 indicates a different input category, such as a forwarded user input, then payload data 650 can have a different input format. With a forwarded user input, sink device 160 may receive the user input data from a third party device and forward the input to source device 120 without interpreting the user input data. Source device 120 can thus parse payload data 650 according to the forwarded user input format. For example, payload header 630 of payload data 650 may include a field to identify the third party device from which the user input was obtained. The field may, for example, include an internet protocol (IP) address of the third party device, a MAC address, a domain name, or some other such identifier. Source device 120 can parse the remainder of the payload data based on the identifier of the third party device.
[00108] Sink device 160 can negotiate capabilities with the third party device via a series of messages. Sink device 160 can then transmit a unique identifier of the third party device to source device 120 as part of establishing a communication session with source device 120 during a capability negotiation process. Alternatively, sink device 160 may transmit information describing the third party device to source device 120, and based on the information, source device 120 can determine a unique identifier for the third party device. The information describing the third party device may, for example, include information to identify the third party device and/or information to identify capabilities of the third party device. Regardless of whether the unique identifier is determined by source device 120 or sink device 160, when sink device 160 transmits data packets with user input obtained from the third party device, sink device 160 can include the unique identifier in the data packet, in a payload header for example, so that source device 120 can identify the origin of the user input. [00109] If input category field 624 of data packet header 610 indicates yet a different input category, such as a voice command, then payload data 650 can have yet a different input format. For a voice command, payload data 650 may include coded audio. The codec for encoding and decoding the audio of the voice command can be negotiated between source device 120 and sink device 160 via a series of messages. For transmitting a voice command, timestamp field 626 may include a speech-sampling time value. In such an instance, timestamp flag 622 may be set to indicate a timestamp is present, but instead of a timestamp as described above, timestamp field 626 may include a speech-sampling time value for the encoded audio of payload data 650.
[00110] In some examples, a voice command may be transmitted as a generic command as described above, in which case input category field 624 may be set to identify the generic command format, and one of the reserved generic IE IDs may be assigned to voice commands. If the voice command is transmitted as a generic command, then a speech sampling rate may be present in timestamp field 626 of data packet header 610 or may be present in payload data 650. [00111] For captured voice command data, the voice data can be encapsulated in multiple ways. For example, the voice command data can be encapsulated using RTP, which can provide the payload type to identify the codec and a timestamp, with the timestamp being used to identify the sampling rate. The RTP data can be encapsulated using the generic user input format described above, either with or without the optional timestamp. Sink device 160 can transmit the generic input data that carries the voice command data to source device 120 using TCP/IP. [00112] As discussed previously, when coordinates are included as part of a data packet such as data packet 600, in payload data 650 for example, the coordinates may correspond to coordinates scaled based on a negotiated resolution, display window coordinates, normalized coordinates, or coordinates associated with a sink display. In some instances, additional information may be included, either in the data packet or transmitted separately, for use by a source device to normalize coordinates received in the data packet. [00113] Regardless of the input category for a particular data packet, the data packet header may be an application layer packet header, and the data packet may be transmitted over TCP/IP. TCP/IP can enable sink device 160 and source device 120 to perform retransmission techniques in the event of packet loss.
The data packet may be sent from sink device 160 to source device 120 to control audio data or video data of source device 120, or for other purposes such as to control an application running on source device 120. [00114] FIG. 7A is a flowchart of an example method of negotiating capabilities between a sink device and a source device. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the illustrated steps in one or more of the flow charts described herein. [00115] The method of FIG. 7A includes sink device 160 receiving from source device 120 a first message (701). The message may, for example, comprise a get parameter request. In response to the first message, sink device 160 may send a second message to source device 120 (703). The second message may, for example, comprise a get parameter response that identifies a first list of supported input categories and a plurality of first lists of supported types, wherein each of the supported input categories of the first list of supported input categories has an associated first list of supported types. The supported input categories may, for example, correspond to the same categories used for input category field 624 of FIG. 6. Table 2 above represents one example of supported types for a particular input category (generic inputs in this example). Sink device 160 may receive from source device 120 a third message (705).
The third message may, for example, comprise a set parameter request, wherein the set parameter request identifies a port for communication, a second list of supported input categories, and a plurality of second lists of supported types, with each of the supported input categories of the second list of supported input categories having an associated second list of supported types, and each of the supported types of the second lists including a subset of the types of the first lists. Sink device 160 can transmit to source device 120 a fourth message (707). The fourth message may, for example, comprise a set parameter response to confirm that the types of the second lists have been enabled. Sink device 160 can receive from source device 120 a fifth message (709). The fifth message may, for example, comprise a second set parameter request that indicates that a communication channel between the source device 120 and sink device 160 has been enabled. The communication channel may, for example, comprise a user input back channel (UIBC). Sink device 160 can transmit to source device 120 a sixth message (711). The sixth message may, for example, comprise a second set parameter response that confirms receipt of the second set parameter request by sink device 160. [00116] FIG. 7B is a flowchart of an example method of negotiating capabilities between a sink device and a source device. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the illustrated steps in the flow chart. [00117] The method of FIG. 7B includes source device 120 transmitting to sink device 160 a first message (702). The first message may, for example, comprise a get parameter request. 
Source device 120 can receive a second message from sink device 160 (704). The second message may, for example, comprise a get parameter response that identifies a first list of supported input categories and a plurality of first lists of supported types, wherein each of the supported input categories of the first list of supported input categories has an associated first list of supported types. Source device 120 may transmit to sink device 160 a third message (706). The third message may, for example, comprise a set parameter request that identifies a port for communication, a second list of supported input categories, and a plurality of second lists of supported types, with each of the supported input categories of the second list of supported input categories having an associated second list of supported types, and each of the supported types of the second lists including a subset of the types of the first lists. Source device 120 can receive from sink device 160 a fourth message (708). The fourth message may, for example, comprise a set parameter response to confirm that the types of the second lists have been enabled. Source device 120 can transmit to sink device 160 a fifth message (710). The fifth message may, for example, comprise a second set parameter request that indicates that a communication channel between the source device 120 and sink device 160 has been enabled. The communication channel may, for example, comprise a user input back channel (UIBC). Source device 120 can receive from sink device 160 a sixth message (712). The sixth message may, for example, comprise a second set parameter response that confirms receipt of the second set parameter request by sink device 160. [00118] FIG. 8A is a flow chart of an example method of transmitting user input data from a wireless sink device to a wireless source device in accordance with this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3).
In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the illustrated steps in the flow chart. [00119] The method of FIG. 8A includes obtaining user input data at a wireless sink device, such as wireless sink device 160 (801). The user input data may be obtained through a user input component of wireless sink device 160 such as, for example, user input interface 376 shown in relation to wireless sink device 360. Additionally, sink device 160 may categorize the user input data as, for example, generic, forwarded, or operating system specific. Sink device 160 may then generate a data packet header based on the user input data (803). The data packet header can be an application layer packet header. The data packet header may comprise, among other fields, a field to identify an input category corresponding to the user input data. The input category may comprise, for example, a generic input format or a human interface device command. Sink device 160 may further generate a data packet (805), where the data packet comprises the generated data packet header and payload data. In one example, payload data may include received user input data and may identify one or more user commands. Sink device 160 may then transmit the generated data packet (807) to the wireless source device (e.g., source device 120 of FIG. 1A or 220 of FIG. 2). Sink device 160 may comprise components that allow transfer of data packets, including transport unit 333 and wireless modem 334 as shown in FIG. 3, for example. Sink device 160 may transfer the data packet over TCP/IP. [00120] FIG. 8B is a flow chart of an example method of receiving user input data from a wireless sink device at a wireless source device in accordance with this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG.
2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the illustrated steps in the flow chart. [00121] The method of FIG. 8B includes receiving a data packet (802), where the data packet may comprise, among other things, a data packet header and payload data. Payload data may include, for example, user input data. Source device 120 may comprise communications components that allow transfer of data packets, including transport unit 233 and wireless modem 234, for example as shown in reference to FIG. 2. Source device 120 may then parse the data packet header (804) included in the data packet to determine an input category associated with the user input data contained in the payload data. Source device 120 may process the payload data based on the determined input category (806). The data packets described with reference to FIGS. 8A and 8B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data and applications at a source device. [00122] FIG. 9A is a flow chart of an example method of transmitting user input data from a wireless sink device to a wireless source device in accordance with this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the illustrated steps in the flow chart. [00123] The method of FIG. 9A includes obtaining user input data at a wireless sink device such as wireless sink device 160 (901).
The user input data may be obtained through a user input component of wireless sink device 160 such as, for example, user input interface 376 shown with reference to FIG. 3. Sink device 160 may then generate payload data (903), where the payload data may describe the user input data. In one example, payload data may include received user input data and may identify one or more user commands. Sink device 160 may further generate a data packet (905), where the data packet comprises a data packet header and the generated payload data. Sink device 160 may then transmit the generated data packet (907) to the wireless source device (e.g., source device 120 of FIG. 1A or 220 of FIG. 2). Sink device 160 may comprise components that allow transfer of data packets, such as transport unit 333 and wireless modem 334, for example. The data packet can be transmitted to a wireless source device over TCP/IP. [00124] FIG. 9B is a flow chart of an example method of receiving user input data from a wireless sink device at a wireless source device in accordance with this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the illustrated steps in the flow chart. [00125] The method of FIG. 9B includes receiving a data packet from sink device 360 (902), where the data packet may comprise, among other things, a data packet header and payload data. In one example, payload data may comprise, for example, data describing details of a user input such as an input type value. Source device 120 may comprise communications components that allow transfer of data packets, including transport unit 233 and wireless modem 234, for example as shown with reference to FIG. 2.
Source device 120 may then parse the data packet (904) to determine an input type value in an input type field in the payload data. Source device 120 may process the data describing details of the user input based on the determined input type value (906). The data packets described with reference to FIGS. 9A and 9B may generally take the form of the data packets described with reference to FIG. 6. [00126] FIG. 10A is a flow chart of an example method of transmitting user input data from a wireless sink device to a wireless source device in accordance with this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the illustrated steps in the flow chart. [00127] The method of FIG. 10A includes obtaining user input data at a wireless sink device, such as wireless sink device 160 (1001). The user input data may be obtained through a user input component of wireless sink device 160 such as, for example, user input interface 376 as shown with reference to FIG. 3. Sink device 160 may then generate a data packet header based on the user input (1003). The data packet header may comprise, among other fields, a timestamp flag (e.g., a 1-bit field) to indicate if a timestamp field is present in the data packet header. The timestamp flag may, for example, include a "1" to indicate the timestamp field is present and may include a "0" to indicate the timestamp field is not present. The timestamp field may be, for example, a 16-bit field containing a timestamp generated by source device 120 and added to video data prior to transmission. Sink device 160 may further generate a data packet (1005), where the data packet comprises the generated data packet header and payload data.
In one example, payload data may include received user input data and may identify one or more user commands. Sink device 160 may then transmit the generated data packet (1007) to the wireless source device (e.g., source device 120 of FIG. 1A or 220 of FIG. 2). Sink device 160 may comprise components that allow transfer of data packets, including transport unit 333 and wireless modem 334, for example as shown in reference to FIG. 3. The data packet can be transmitted to a wireless source device over TCP/IP. [00128] FIG. 10B is a flow chart of an example method of receiving user input data from a wireless sink device at a wireless source device in accordance with this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the illustrated steps in the flow chart. [00129] The method of FIG. 10B includes receiving a data packet from wireless sink device 160 (1002), where the data packet may comprise, among other things, a data packet header and payload data. Payload data may include, for example, user input data. Source device 120 may comprise communications components that allow transfer of data packets, including transport unit 233 and wireless modem 234, for example as shown in reference to FIG. 2. Source device 120 may then parse the data packet header (1004) included in the data packet. Source device 120 may determine if a timestamp field is present in the data packet header (1006). In one example, source device 120 may make the determination based on a timestamp flag value included in the data packet header. If the data packet header includes a timestamp field, source device 120 may process the payload data based on a timestamp that is in the timestamp field (1008).
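The flag-gated branch of FIG. 10B can be sketched as a small dispatcher. The callback split and the dictionary shape of the parsed header are illustrative assumptions, not part of the disclosed protocol:

```python
def handle_data_packet(header, payload, on_timestamped, on_plain):
    """Dispatch parsed payload data per the 1-bit timestamp flag.

    `header` is assumed to be a dict produced by a header parser, holding
    a `timestamp_flag` entry and, when the flag is set, a `timestamp`.
    """
    if header.get("timestamp_flag"):
        # Timestamp field present: process payload relative to the frame
        # that was displayed when the input was captured.
        return on_timestamped(payload, header["timestamp"])
    # No timestamp field: process the payload directly.
    return on_plain(payload)
```

A source device implementation might pass a timestamp-aware handler that performs the round-trip-time comparison described earlier, and a plain handler for packets without a timestamp.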
The data packets described with reference to FIGS. 10A and 10B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data at a source device. [00130] FIG. 11A is a flow chart of an example method of transmitting user input data from a wireless sink device to a wireless source device in accordance with this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the illustrated steps in the flow chart. [00131] The method of FIG. 11A includes obtaining user input data at a wireless sink device, such as wireless sink device 160 (1101). The user input data may be obtained through a user input component of wireless sink device 160 such as, for example, user input interface 376 shown in reference to FIG. 3. Sink device 160 may then generate a data packet header based on the user input (1103). The data packet header may comprise, among other fields, a timestamp field. The timestamp field may comprise, for example, a 16-bit field containing a timestamp based on multimedia data that was generated by wireless source device 120 and transmitted to wireless sink device 160. The timestamp may have been added to the frame of video data by wireless source device 120 prior to being transmitted to the wireless sink device. The timestamp field may, for example, identify a timestamp associated with a frame of video data being displayed at wireless sink device 160 at the time the user input data was captured. Sink device 160 may further generate a data packet (1105), where the data packet comprises the generated data packet header and payload data.
In one example, payload data may include received user input data and may identify one or more user commands. Sink device 160 may then transmit the generated data packet (1107) to the wireless source device (e.g., source device 120 of FIG. 1A or 220 of FIG. 2). Sink device 160 may comprise components that allow transfer of data packets, including transport unit 333 and wireless modem 334, for example as shown in reference to FIG. 3. The data packet can be transmitted to a wireless source device over TCP/IP.

[00132] FIG. 11B is a flow chart of an example method of receiving user input data from a wireless sink device at a wireless source device in accordance with this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the illustrated steps in the flow chart.

[00133] The method of FIG. 11B includes receiving a data packet from a wireless sink device, such as wireless sink device 160 (1102), where the data packet may comprise, among other things, a data packet header and payload data. Payload data may include, for example, user input data. Source device 120 may comprise communications components that allow transfer of data packets, including transport unit 233 and wireless modem 234, for example as shown in reference to FIG. 2. Source device 120 may then identify a timestamp field in the data packet header (1104). Source device 120 may process the payload data based on a timestamp that is in the timestamp field (1106). As part of processing the payload data, based on the timestamp, source device 120 may identify a frame of video data being displayed at the wireless sink device at the time the user input data was obtained and interpret the payload data based on content of the frame.
As part of processing the payload data based on the timestamp, source device 120 may compare the timestamp to a current timestamp for a current frame of video being transmitted by source device 120. Source device 120 may perform a user input command described in the payload data in response to a time difference between the timestamp and the current timestamp being less than a threshold value, or may refrain from performing the command in response to the time difference being greater than the threshold value. The data packets described with reference to FIGS. 11A and 11B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data at a source device.

[00134] FIG. 12A is a flow chart of an example method of transmitting user input data from a wireless sink device to a wireless source device in accordance with this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the illustrated steps in the flow chart.

[00135] The method of FIG. 12A includes obtaining user input data at a wireless sink device, such as wireless sink device 160 (1201). In one example, the user input data may be voice command data, which may be obtained through a user input component of wireless sink device 160 such as, for example, a voice command recognition module included in user input interface 376 in FIG. 3. Sink device 160 may generate a data packet header based on the user input (1203). Sink device 160 may also generate payload data (1205), where the payload data may comprise the voice command data.
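The threshold comparison described above for FIG. 11B can be sketched as follows. Treating the 16-bit timestamps as wrapping modulo 2**16 is an assumption for illustration, based on the 16-bit timestamp field described for FIG. 11A.

```python
# Sketch of the FIG. 11B timestamp check: the user input command is applied
# only when the sink's frame timestamp is close enough to the frame the
# source is currently transmitting. Wraparound handling is an assumption.

def should_perform_command(packet_ts: int, current_ts: int, threshold: int) -> bool:
    """Return True when the time difference is below the threshold."""
    # 16-bit timestamps wrap, so take the difference modulo 2**16.
    diff = (current_ts - packet_ts) & 0xFFFF
    return diff < threshold
```

A command that arrives too long after the frame it referred to is thereby discarded rather than applied to unrelated content.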
In one example, payload data may also include received user input data and may identify one or more user commands. Sink device 160 may further generate a data packet (1207), where the data packet comprises the generated data packet header and payload data. Sink device 160 may then transmit the generated data packet (1209) to the wireless source device (e.g., source device 120 of FIG. 1A or 220 of FIG. 2). Sink device 160 may comprise components that allow transfer of data packets, including transport unit 333 and wireless modem 334, for example as shown in reference to FIG. 3. The data packet can be transmitted to a wireless source device over TCP/IP.

[00136] FIG. 12B is a flow chart of an example method of receiving user input data from a wireless sink device at a wireless source device in accordance with this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the illustrated steps in the flow chart.

[00137] The method of FIG. 12B includes receiving a data packet (1202), where the data packet may comprise, among other things, a data packet header and payload data. Payload data may include, for example, user input data such as voice command data. Source device 120 may comprise communications components that allow transfer of data packets, including transport unit 233 and wireless modem 234, for example as shown in reference to FIG. 2. Source device 120 may then parse the payload data (1204) included in the data packet to determine if the payload data comprises voice command data. The data packets described with reference to FIGS. 12A and 12B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data at a source device.
[00138] FIG. 13A is a flow chart of an example method of transmitting user input data from a wireless sink device to a wireless source device in accordance with this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the illustrated steps in the flow chart.

[00139] The method of FIG. 13A includes obtaining user input data at a wireless sink device, such as wireless sink device 160 (1301). In one example, the user input data may be a multi-touch gesture, which may be obtained through a user input component of wireless sink device 160 such as, for example, UI 167 or user input interface 376 of FIG. 3. In one example, the multi-touch gesture may comprise a first touch input and a second touch input. Sink device 160 may generate a data packet header based on the user input (1303). Sink device 160 may also generate payload data (1305), where the payload data may associate user input data for the first touch input event with a first pointer identification and user input data for the second touch input event with a second pointer identification. Sink device 160 may further generate a data packet (1307), where the data packet comprises the generated data packet header and payload data. Sink device 160 may then transmit the generated data packet (1309) to the wireless source device (e.g., source device 120 of FIG. 1A or 220 of FIG. 2). Sink device 160 may comprise components that allow transfer of data packets, including transport unit 333 and wireless modem 334, for example as shown in reference to FIG. 3. The data packet can be transmitted to a wireless source device over TCP/IP.

[00140] FIG. 13B is a flow chart of an example method of receiving user input data from a wireless sink device at a wireless source device in accordance with this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the illustrated steps in the flow chart.

[00141] The method of FIG. 13B includes receiving a data packet (1302), where the data packet may comprise, among other things, a data packet header and payload data. Payload data may include, for example, user input data such as a multi-touch gesture. Source device 120 may comprise communications components that allow transfer of data packets, including transport unit 233 and wireless modem 234, for example as shown in FIG. 2. Source device 120 may then parse the payload data (1304) included in the data packet to identify user input data included in the payload data. In one example, the identified data may include user input data for a first touch input event with a first pointer identification and user input data for a second touch input event with a second pointer identification. Source device 120 may then interpret the user input data for the first touch input event and the user input data for the second touch input event as a multi-touch gesture (1306). The data packets described with reference to FIGS. 13A and 13B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data at a source device.

[00142] FIG. 14A is a flow chart of an example method of transmitting user input data from a wireless sink device to a wireless source device in accordance with this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3).
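The pointer-identification scheme described for FIGS. 13A and 13B can be sketched as follows. The dict-based payload structure is an assumption for illustration; the sink tags each touch input with a pointer identification, and the source groups the events by pointer identification so they can be interpreted as a single multi-touch gesture.

```python
# Sink side (FIG. 13A): associate each (x, y) touch input with a pointer id.
def build_multitouch_payload(touches):
    """Tag each touch input event with a sequential pointer identification."""
    return [{"pointer_id": i, "x": x, "y": y} for i, (x, y) in enumerate(touches)]

# Source side (FIG. 13B): group events by pointer id into one gesture.
def interpret_multitouch(payload):
    """Return a mapping from pointer identification to touch location."""
    return {event["pointer_id"]: (event["x"], event["y"]) for event in payload}
```

The pointer identification is what lets the source reassemble two independently reported touch events into one gesture, such as a pinch.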
In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the illustrated steps in the flow chart.

[00143] The method of FIG. 14A includes obtaining user input data at wireless sink device 360 from an external device (1401). In one example, the external device may be a third party device connected to the sink device. Sink device 160 may generate a data packet header based on the user input (1403). In one example, the data packet header may identify the user input data as forwarded user input data. Sink device 160 may also generate payload data (1405), where the payload data may comprise the user input data. Sink device 160 may further generate a data packet (1407), where the data packet may comprise the generated data packet header and payload data. Sink device 160 may then transmit the generated data packet (1409) to the wireless source device (e.g., source device 120 of FIG. 1A or 220 of FIG. 2). Sink device 160 may comprise components that allow transfer of data packets, including transport unit 333 and wireless modem 334, for example as shown with reference to FIG. 3. The data packet can be transmitted to a wireless source device over TCP/IP.

[00144] FIG. 14B is a flow chart of an example method of receiving user input data from a wireless sink device at a wireless source device in accordance with this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the illustrated steps in the flow chart.

[00145] The method of FIG. 14B includes receiving a data packet (1402), where the data packet may comprise, among other things, a data packet header and payload data. Payload data may include, for example, user input data such as a forwarded user input command indicating user input data was forwarded from a third party device. Source device 120 may comprise communications components that allow transfer of data packets, including transport unit 233 and wireless modem 234, for example as shown in reference to FIG. 2. Source device 120 may then parse the data packet header and may determine that the payload data comprises a forwarded user input command (1404). Source device 120 may then parse the payload data (1406) included in the data packet to identify an identification associated with the third party device corresponding to the forwarded user input command. Source device 120 may then process the payload data based on the identified identification of the third party device (1408). The data packets described with reference to FIGS. 14A and 14B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data at a source device.

[00146] FIG. 15A is a flow chart of an example method of transmitting user data from a wireless sink device to a wireless source device in accordance with this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the illustrated steps in the flow chart.

[00147] The method of FIG. 15A includes obtaining user input data at the wireless sink device (1501). The user input data can have associated coordinate data. The associated coordinate data may, for example, correspond to a location of a mouse click event or a location of a touch event.
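The forwarded-input parsing described for FIG. 14B can be sketched as follows. The two-byte big-endian identification field is an assumption for illustration only; the real field sizes and ordering come from the packet format described with reference to FIG. 6.

```python
# Sketch of FIG. 14B steps (1406)-(1408): the payload of a forwarded user
# input command carries an identification of the originating third party
# device followed by the forwarded command. The 2-byte id is an assumption.

def parse_forwarded_payload(payload: bytes):
    """Return (third_party_device_id, forwarded command bytes)."""
    device_id = int.from_bytes(payload[:2], "big")
    return device_id, payload[2:]
```

The source device would then process the remaining command bytes according to the identified third party device.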
Sink device 160 may then normalize the associated coordinate data to generate normalized coordinate data (1503). Sink device 160 may then generate a data packet that includes the normalized coordinate data (1505). Normalizing the coordinate data can include scaling the associated coordinate data based on a ratio of the resolution of a display window and a resolution of the display of the source, such as display 22 of source device 120. The resolution of the display window can be determined by sink device 160, and the resolution of the display of the source device can be received from source device 120. Sink device 160 may then transmit the data packet with the normalized coordinates to wireless source device 120 (1507). As part of the method of FIG. 15A, sink device 160 may also determine if the associated coordinate data is within a display window for content being received from the wireless source device, and, for example, process a user input locally if the associated coordinate data is outside the display window, or otherwise normalize the coordinates as described if the input is within the display window.

[00148] FIG. 15B is a flow chart of an example method of receiving user input data from a wireless sink device at a wireless source device in accordance with this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the illustrated steps in the flow chart.

[00149] The method of FIG. 15B includes receiving a data packet at the wireless source device, where the data packet comprises user input data with associated coordinate data (1502). The associated coordinate data may, for example, correspond to a location of a mouse click event or a location of a touch event at a sink device.
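The sink-side normalization described for FIG. 15A can be sketched as follows: coordinates inside the display window are scaled by the ratio of the source display resolution to the display window resolution, while input outside the window is handled locally (signalled here by returning None). Integer scaling is an assumption for illustration.

```python
# Sketch of FIG. 15A coordinate normalization. The bounds check implements
# the "process locally if outside the display window" branch described above.

def normalize_coordinates(x, y, window_w, window_h, source_w, source_h):
    """Map (x, y) in the sink's display window to source display coordinates."""
    if not (0 <= x < window_w and 0 <= y < window_h):
        return None  # outside the display window: process the input locally
    return (x * source_w // window_w, y * source_h // window_h)
```

The source-side normalization of FIG. 15B is the same scaling performed at the other endpoint, with the display window resolution received from the sink.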
Source device 120 may then normalize the associated coordinate data to generate normalized coordinate data (1504). Source device 120 can normalize the coordinate data by scaling the associated coordinate data based on a ratio of the resolution of the display window and a resolution of the display of the source. Source device 120 can determine the resolution of the display of the source device and can receive the resolution of the display window from the wireless sink device. Source device 120 may then process the data packet based on the normalized coordinate data (1506). The data packets described with reference to FIGS. 15A and 15B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data at a source device.

[00150] For simplicity of explanation, aspects of this disclosure have been described separately with reference to FIGS. 7-15. It is contemplated, however, that these various aspects can be combined and used in conjunction with one another and not just separately. Generally, functionality and/or modules described herein may be implemented in either or both of the wireless source device and wireless sink device. In this way, user interface capabilities described in the current example may be used interchangeably between the wireless source device and wireless sink device.

[00151] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (i.e., a chip set). Any components, modules, or units described have been provided to emphasize functional aspects and do not necessarily require realization by different hardware units.

[00152] Accordingly, the techniques described herein may be implemented in hardware, software, firmware, or any combination thereof.
If implemented in hardware, any features described as modules, units, or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable medium comprising instructions that, when executed in a processor, perform one or more of the methods described above. The computer-readable medium may comprise a tangible and non-transitory computer-readable storage medium and may form part of a computer program product, which may include packaging materials. The computer-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer.

[00153] The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured for encoding and decoding, or incorporated in a combined video codec.
Also, the techniques could be fully implemented in one or more circuits or logic elements. [00154] Various aspects of the disclosure have been described. These and other aspects are within the scope of the following claims.
Metal silicides form low resistance contacts on semiconductor devices such as transistors. Rough interfaces are formed between metal silicide contacts, such as NiSi, and the source/drain regions of a transistor, such as doped source/drain regions. Interfaces with a high degree of roughness result in increased spiking and junction leakage. Interface roughness is minimized by deeply doping the source/drain regions of a silicon on insulator substrate.
What is claimed is:

1. A method of manufacturing a semiconductor device, the method comprising: providing a silicon-containing substrate having an upper surface; forming source/drain implants by ion implanting dopant into the substrate at a sufficient concentration and depth below the upper surface of the substrate to substantially reduce interface roughness between subsequently formed nickel silicide contacts and source/drain regions; depositing a layer of nickel over the upper surface of the substrate; and heating to react the nickel layer with underlying silicon to form the nickel silicide contacts.

2. The method according to claim 1, wherein the dopant is arsenic (As).

3. The method according to claim 2, wherein the silicon-containing substrate is a silicon-on-insulator (SOI) substrate comprising a silicon layer on a base layer with an insulating layer therebetween, and the As dopant is implanted into the silicon layer.

4. The method according to claim 3, wherein the silicon layer has a thickness of about 500 Å to about 2000 Å.

5. The method according to claim 3, comprising ion implanting the As at an implantation dosage of about 1×10^15 ions/cm^2 to about 6×10^15 ions/cm^2 and an implantation energy of about 15 keV to about 40 keV.

6. The method according to claim 3, comprising ion implanting the As to a depth of about 450 Å to about 700 Å below the upper surface of the silicon layer.

7. The method according to claim 3, comprising heating at a temperature of about 350° C. to about 700° C. to form the nickel silicide contacts.

8. The method according to claim 1, wherein the interface between the metal silicide contacts and the source/drain regions has a mean peak to valley roughness of less than about 100 Å.

9. A semiconductor device produced by a method comprising: providing a silicon-containing substrate having an upper surface; forming source/drain implants by ion implanting dopant into the substrate at a sufficient concentration and depth below the upper surface of the substrate to substantially reduce interface roughness between subsequently formed nickel silicide contacts and source/drain regions; depositing a layer of nickel over the upper surface of the substrate; and heating to react the nickel layer with underlying silicon to form the nickel silicide contacts.

10. A method of manufacturing a semiconductor device, the method comprising: providing a silicon on insulator substrate comprising a silicon layer on a base layer with an insulating layer therebetween; forming a gate oxide layer on the silicon layer; depositing a conductive gate material layer on the gate oxide layer; patterning the gate material layer and gate oxide layer to form a gate electrode, having an upper surface and opposing side surfaces, with a gate oxide layer thereunder; depositing a layer of an insulating material over the gate electrode and silicon layer; patterning the insulating material to form sidewall spacers on the opposing side surfaces of the gate electrode; forming source/drain implants by implanting a dopant into the silicon layer, such that the dopant concentration and depth significantly reduce the interface roughness between subsequently formed nickel silicide contacts and source/drain regions; heating to activate the source and drain regions; depositing a layer of nickel over the gate electrode and source/drain regions; heating to react the nickel layer with underlying silicon to form metal silicide contacts on the gate electrode and on the source/drain regions; and removing the nickel that did not react to form nickel silicide.

11. The method according to claim 10, wherein the dopant is As.

12. The method according to claim 11, comprising forming the silicon layer at a thickness of about 500 Å to about 2000 Å.

13. The method according to claim 11, comprising ion implanting As to a depth of about 450 Å to about 700 Å.

14. The method according to claim 10, wherein the interface between the nickel silicide contacts and the source/drain regions has a mean peak to valley roughness of less than about 100 Å.
TECHNICAL FIELD

The present invention relates to the field of manufacturing semiconductor devices and, more particularly, to an improved salicide process of forming metal silicide contacts.

BACKGROUND OF THE INVENTION

An important aim of ongoing research in the semiconductor industry is the reduction in the dimensions of the devices used in integrated circuits. Planar transistors, such as metal oxide semiconductor (MOS) transistors, are particularly suited for use in high-density integrated circuits. As the size of the MOS transistors and other active devices decreases, the dimensions of the source/drain regions and gate electrodes, and the channel region of each device, decrease correspondingly.

The design of ever-smaller planar transistors with short channel lengths makes it necessary to provide very shallow source/drain junctions. Shallow junctions are necessary to avoid lateral diffusion of implanted dopants into the channel, since such a diffusion disadvantageously contributes to leakage currents and poor breakdown performance. Shallow source/drain junctions, for example on the order of 1,000 Å or less thick, are generally required for acceptable performance in short channel devices.

Metal silicide contacts are typically used to provide low resistance contacts to source/drain regions and gate electrodes. The metal silicide contacts are conventionally formed by depositing a conductive metal, such as titanium, cobalt, tungsten, or nickel, on the source/drain regions and gate electrodes by physical vapor deposition (PVD), e.g., sputtering or evaporation, or by a chemical vapor deposition (CVD) technique. Subsequently, heating is performed to react the metal with underlying silicon to form a metal silicide layer on the source/drain regions and gate electrodes. The metal silicide has a substantially lower sheet resistance than the silicon to which it is bonded.
Selective etching is then conducted to remove unreacted metal from the non-silicided areas, such as the dielectric sidewall spacers. Thus, the silicide regions are aligned only on the electrically conductive areas. This self-aligned silicide process is generally referred to as the "salicide" process.

A portion of a typical semiconductor device 40 is schematically illustrated in FIG. 1A and comprises a silicon-containing substrate 4 with source/drain regions 30 formed therein. Gate oxide 10 and gate electrode 12 are formed on the silicon-containing substrate 4. Sidewall spacers 14 are formed on opposing side surfaces 13 of gate electrode 12. Sidewall spacers 14 typically comprise silicon based insulators, such as silicon nitride, silicon oxide, or silicon carbide. The sidewall spacers 14 mask the side surfaces 13 of the gate 12 when metal layer 22 is deposited, thereby preventing silicide from forming on the gate electrode side surfaces 13.

After metal layer 22 is deposited, heating is conducted at a temperature sufficient to react the metal with underlying silicon in the gate electrode 12 and substrate surface 5 to form conductive metal silicide contacts 24 (FIG. 1B). After the metal silicide contacts 24 are formed, the unreacted metal 22 is removed by etching, as with a wet etchant, e.g., an aqueous H2O2/NH4OH solution. The sidewall spacer 14, therefore, functions as an electrical insulator separating the silicide contact 24 on the gate electrode 12 from the metal silicide contacts 24 on the source/drain regions 30, as shown in FIG. 1B.

Various metals react with Si to form a silicide; however, titanium (Ti) and cobalt (Co) are currently the most common metals used to create silicides (TiSi2, CoSi2) when manufacturing semiconductor devices utilizing salicide technology.

Use of a TiSi2 layer imposes limitations on the manufacture of semiconductor devices.
A significant limitation is that the sheet resistance for lines narrower than 0.35 micrometers is high, i.e., as TiSi2 is formed in a narrower and narrower line, the resistance increases. Another significant limitation is that TiSi2 initially forms a high resistivity phase (C49), and transformation from C49 to a low resistivity phase (C54) is nucleation limited, i.e., a high temperature is required to effect the phase change.

Cobalt silicide, unlike TiSi2, exhibits less linewidth dependence of sheet resistance. However, CoSi2 consumes significant amounts of Si during formation, which increases the difficulty of forming shallow junctions. Large Si consumption is also a concern where the amount of Si present is limited, for example, with silicon on insulator (SOI) substrates. Without enough Si to react with Co to form CoSi2, a thin layer of CoSi2 results. The thickness of the silicide layer is an important parameter because a thin silicide layer is more resistive than a thicker silicide layer of the same material; thus a thicker silicide layer increases semiconductor device speed, while a thin silicide layer reduces device speed.

Recently, attention has turned towards using nickel to form NiSi utilizing salicide technology. Using NiSi is advantageous over using TiSi2 and CoSi2 because many limitations associated with TiSi2 and CoSi2 are avoided. When forming NiSi, a low resistivity phase is the first phase to form, and does so at a relatively low temperature. Additionally, nickel (Ni), like Co, diffuses through the film into Si, unlike Ti where the Si diffuses into the metal layer. Diffusion of Ni and Co through the film into Si prevents bridging between the silicide layer on the gate electrode and the silicide layer over the source/drain regions. The reaction that forms NiSi requires less Si than when TiSi2 and CoSi2 are formed. Nickel silicide exhibits almost no linewidth dependence of sheet resistance.
Nickel silicide is normally annealed in a one step process, versus a process requiring an anneal, an etch, and a second anneal, as is normal for TiSi2 and CoSi2. Nickel silicide also exhibits low film stress, i.e., causes less wafer distortion.

Although the use of NiSi in salicide technology has certain advantages over utilizing TiSi2 and CoSi2, there are problems using NiSi in certain situations. Forming NiSi on doped, crystallized Si usually produces a smooth interface between the NiSi layer and the doped, crystallized Si layer. However, when crystallized Si is doped with arsenic (As), a rough interface between the NiSi and the doped, crystallized Si forms, which leads to certain problems.

FIG. 2 illustrates the degree of roughness of interface 36 between a conventional nickel silicide (NiSi) contact 24 and arsenic doped source/drain region 30. In this system, the mean peak to valley interface roughness height d is about 300 Å to about 400 Å. This large degree of interface roughness can cause a variety of electrical problems such as spiking and increased junction leakage. The interface roughness could penetrate all the way through the source/drain region in a shallow junction device, causing a local short circuit, thereby resulting in junction leakage. In order to prevent these problems, a thinner metal layer can be deposited, thereby resulting in a thinner silicide layer, or the depth of the source/drain junction can be increased. However, neither of these approaches is satisfactory: the former approach would result in higher sheet resistance and a slower semiconductor device, and the latter approach runs counter to the trend toward smaller device dimensions, both vertically and laterally, in order to increase switching speeds.

Interface roughness becomes more pronounced as the concentration of the dopant increases.
In an As doped device with NiSi contacts, interface roughness is especially a problem where the peak concentration of the doped arsenic is in the vicinity of the upper surface of the source/drain regions. In a typical arsenic doped MOS device the arsenic ions will be implanted with an energy and dose of 10 to 20 keV and 1×10^15 to 6×10^15 ions/cm^2, which results in a peak arsenic concentration at about 200 Å to about 400 Å below the upper surface of the source/drain region. When the peak arsenic concentration is located in this region, an unacceptably high degree of interface roughness results when nickel silicide is formed.

Implanting the arsenic ions deeper into the silicon substrate reduces the interface roughness. However, this has been avoided in conventional practice. A Gaussian type distribution of dopant concentration versus implant depth is obtained when dopants are implanted into bulk silicon substrates. Driving the peak concentration of the dopant deeper into the bulk silicon substrate to overcome the interface roughness effects shifts more of the dopant even deeper into the substrate. In a bulk silicon substrate, deep implantation of dopant to overcome the silicide interface roughness problem results in slower, larger-dimension devices.

The term semiconductor devices, as used herein, is not to be limited to the specifically disclosed embodiments. Semiconductor devices, as used herein, include a wide variety of electronic devices including flip chips, flip chip/package assemblies, transistors, capacitors, microprocessors, random access memories, etc. In general, semiconductor devices refer to any electrical device comprising semiconductors.

SUMMARY OF THE INVENTION

There exists a need in the semiconductor device art to provide silicide contacts for planar transistors which overcome the problem of silicide contact-source/drain region interface roughness.
There exists a need in this art to deeply implant dopant in the source/drain regions to prevent silicide interface roughness while maintaining the desirable dimensional and electrical characteristics of shallow implantation. There exists a need in this art to provide arsenic-doped source/drain regions with nickel silicide contacts without an unacceptably high degree of silicide-source/drain interface roughness.

These and other needs are met by embodiments of the present invention, which provide a method of manufacturing a semiconductor device comprising providing a silicon-containing substrate having an upper surface. The substrate is doped by ion implantation to form source/drain regions such that the concentration of the dopant and the depth of the implant below the upper surface substantially reduce interface roughness between subsequently formed nickel silicide contacts and the source/drain regions. A nickel layer is deposited over the upper surface of the substrate and is heated so that the nickel reacts with the silicon to form nickel silicide contacts.

The earlier stated needs are also met by other embodiments of the instant invention, which provide a semiconductor device comprising a silicon-containing substrate having an upper surface. The silicon-containing substrate contains doped source/drain regions and nickel silicide contacts, wherein the doping concentration and depth below the upper surface of the substrate are such that interface roughness between the nickel silicide contacts and the source/drain regions is substantially reduced with respect to conventional semiconductor devices comprising nickel silicide contacts.

The earlier stated needs are further met by other embodiments of the instant invention that provide a method of manufacturing a semiconductor device comprising providing a silicon on insulator substrate comprising an insulating layer on a substrate base and a silicon layer on the insulating layer.
A gate oxide layer and conductive gate material are, in turn, formed over the silicon layer. The gate material layer and gate oxide layer are then patterned to form a gate electrode having an upper surface and opposing side surfaces. An insulating material is deposited over the gate electrode and silicon layer. The insulating material is patterned to form sidewall spacers on the opposing sides of the gate electrode. Source/drain implants are formed by ion implanting a dopant into the silicon layer, such that the dopant concentration and depth significantly reduce the interface roughness between subsequently formed nickel silicide contacts and source/drain regions. The source/drain implants are subsequently heated to activate the source/drain regions, and then a nickel layer is deposited over the gate electrode and source/drain regions. The nickel layer is heated so that the nickel reacts with the underlying silicon on the gate electrode and source/drain regions to form nickel silicide contacts. The unreacted portions of the nickel layer are removed from the device.

This invention addresses the need for an improved method of forming high conductivity silicide contacts to source/drain regions and gate electrodes with reduced silicide interface roughness and improved electrical characteristics. The present invention reduces the possibility of spiking and junction leakage.

The foregoing and other features, aspects, and advantages of the present invention will become apparent in the following detailed description of the present invention when taken in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A and 1B schematically illustrate a prior art semiconductor device before and after forming silicide contacts.

FIG. 2 is a detailed view of a silicide-source/drain region interface of the prior art.

FIGS.
3A-3G schematically illustrate the formation of metal silicide contacts for semiconductor devices according to an embodiment of the present invention.

FIG. 4 is a detailed view of a silicide-source/drain region interface formed according to the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention enables the production of semiconductor devices with improved performance and reduced failure rates by reducing source/drain junction interface roughness. The present invention provides semiconductor devices with reduced spiking and junction leakage. The present invention provides an improved semiconductor device with nickel silicide contacts. These are achieved by forming source/drain implants by ion implantation into the substrate at a predetermined concentration and depth.

The invention will be described in conjunction with the formation of the semiconductor device in the accompanying drawings. However, this is exemplary only, as the claimed invention is not limited to the formation of the specific device illustrated in the drawings.

A silicon on insulator (SOI) substrate 2 is illustrated in FIG. 3A. The SOI substrate 2 comprises a semiconductor substrate base layer 3 with an insulating layer 6 on the base layer 3 and a silicon layer 8 on the insulating layer 6. The substrate base layer 3 comprises a conventional semiconductor substrate, such as a silicon wafer. The insulating layer 6 comprises a conventional insulating material such as silicon dioxide or silicon nitride. The thickness of silicon layer 8 is about 500 Å to about 2,000 Å. A gate oxide layer 10 and a conductive gate material layer 12, such as polysilicon, are formed on the silicon layer upper surface 9. The gate oxide 10 and gate electrode 12 layers are patterned by conventional photolithographic techniques to form gate electrode 12 and the underlying gate oxide layer 10, as shown in FIG. 3B.
An insulating layer, such as silicon dioxide, silicon nitride, or silicon carbide, is deposited over the substrate 2 and patterned using an anisotropic etch to form sidewall spacers 14 on the opposing sides 13 of the gate electrode 12, as shown in FIG. 3C.

Using the gate electrode 12 and sidewall spacers 14 as masks, dopant 16 is introduced into the silicon layer 8, as shown in FIG. 3D, forming source/drain regions 18. Conventional dopants, such as antimony, arsenic, phosphorus, or boron, can be introduced into the source/drain regions 18. The dopant can be introduced by ion implantation. The dopant ions are implanted to a predetermined depth below the upper surface 9 of the silicon layer 8.

To minimize silicide-source/drain region interface roughness, the ions are implanted to a predetermined depth so that the concentration of dopant ions is greatest at a depth of about 450 Å to about 700 Å below the substrate upper surface 9. In order to implant the ions at this depth, the ions are implanted with an energy of about 15 keV to about 40 keV, at a dose of about 1×10^15 to about 6×10^15 ions/cm^2. When arsenic is implanted into the source/drain regions, a suitable peak arsenic concentration is about 1×10^20 ions/cm^3 to about 4×10^20 ions/cm^3. After ion implantation, the source/drain regions are activated by a first rapid thermal anneal at a temperature greater than 1000° C. for about 5 to about 30 seconds.

By comparison, in the prior art method of FIG. 1A, the source/drain regions 30 are formed in a bulk silicon substrate 4. The ions implanted in the source/drain regions 30 are near the upper surface 5 of the bulk silicon substrate 4.

Referring to FIG. 3E, after the first rapid thermal anneal, a metal layer 22 is deposited over the source/drain regions 18 and the gate electrode 12. Metal layers are deposited by a PVD method, such as sputtering or evaporation, or by a CVD method. The metal is deposited to a layer thickness of about 100 Å to about 500 Å.
The metal layer 22 can comprise Co, Ni, Ti, Mo, Ta, W, Cr, Pt, or Pd. Because it forms silicide by a low temperature, single step anneal, among the other reasons described herein, nickel is a preferred metal.

The deposited nickel layer 22 is subsequently annealed in a second rapid thermal anneal step to form the metal silicide contacts 24, as depicted in FIG. 3F. The nickel layer 22 is annealed for about 15 to about 120 seconds at about 350° C. to about 700° C. to form NiSi. If the annealing temperature is below about 350° C. or greater than about 700° C., the relatively low conductivity phases Ni2Si or NiSi2, respectively, are formed.

Silicide contacts 24 are formed on the gate electrode 12 and source/drain regions 18 as shown in FIG. 3F. As shown in FIG. 3G and FIG. 4, interface 26 is formed between silicide contact 24 and source/drain region 18. The silicide-source/drain region interface 36 formed according to the prior art process, FIG. 2, has a larger peak-to-valley distance d than the peak-to-valley distance d of the silicide interface 26 formed according to the present invention. The prior art mean peak-to-valley distance d is about 300 Å to about 400 Å. In embodiments of the present invention, the mean peak-to-valley distance d of the silicide-source/drain interface is reduced to less than 100 Å.

The methods of the present invention provide reduced silicide/silicon interface roughness by deep doping while maintaining the favorable electrical characteristics of shallow doping. Deeply implanting dopant into a bulk silicon substrate would result in forming source/drain junctions deeper than 1000 Å below the upper surface of the silicon-containing substrate. In certain embodiments of the present invention the source/drain junctions are confined to the silicon layer 8. Oxide layer 6 prevents the source/drain junctions from extending deeper into the substrate.

For example, in one embodiment silicon layer 8 is about 1000 Å thick.
Oxide layer 6 prevents the source/drain regions from extending deeper into the substrate, as they would if they were implanted into a bulk silicon substrate. By confining the source/drain regions to the silicon layer thickness, the present invention provides greater conductivity in the source/drain junctions and prevents the spiking and junction leakage caused by interface roughness. The present invention produces silicide contact-source/drain region interfaces with reduced interface roughness and increased conductivity in a novel, elegant manner.

The embodiments illustrated in the instant disclosure are for illustrative purposes only and should not be construed to limit the scope of the claims. As is clear to one of ordinary skill in the art, the instant disclosure encompasses a wide variety of embodiments not specifically illustrated herein.
A processing device implements a set of instructions to perform a centrifuge operation using vector or general purpose registers. In one embodiment, the centrifuge operation separates bits in a source register to opposing regions of a destination register based on a control mask, where each source register bit with a corresponding control mask value of one is written to one region in a destination register, while source register bits with a corresponding control mask value of zero are written to an opposing region of the destination register.
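The per-element behavior described above can be sketched as a scalar model in Python. This is an illustrative sketch only, not the processor's implementation; the function name and the `width` parameter are our own assumptions:

```python
def centrifuge(src: int, mask: int, width: int = 64) -> int:
    """Scalar model of the centrifuge operation: source bits whose
    control-mask bit is 1 gather in one (low-order) region of the
    result; bits whose mask bit is 0 gather in the opposing region."""
    ones, zeros = [], []            # bits collected low-order first
    for i in range(width):
        bit = (src >> i) & 1
        if (mask >> i) & 1:
            ones.append(bit)        # mask bit 1: goes to first region
        else:
            zeros.append(bit)       # mask bit 0: goes to opposing region
    out = 0
    for pos, bit in enumerate(ones + zeros):  # zeros land above the ones
        out |= bit << pos
    return out
```

For example, `centrifuge(0b11001001, 0b10101010, 8)` yields `0b10011010`: the four source bits under mask-1 positions gather in the low half of the destination, and the bits under mask-0 positions fill the opposing high half.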
CLAIMS

What is claimed is:

1. A processing apparatus comprising:
decode logic to decode a first instruction into a decoded first instruction including a first operand and a second operand; and
an execution unit to execute the first decoded instruction to perform a centrifuge operation to separate bits from a source register specified by the second operand based on a control mask indicated by the first operand.

2. The processing apparatus as in claim 1, further comprising an instruction fetch unit to fetch the first instruction, wherein the instruction is a single machine-level instruction.

3. The processing apparatus as in claim 1, further comprising a register file unit to commit a result of the centrifuge operation to a location specified by a destination operand.

4. The processing apparatus as in claim 3, wherein the register file unit is further to store a set of registers comprising:
a first register to store a first source operand value;
a second register to store a second source operand value; and
a third register to store at least one data element of the result of the centrifuge operation.

5. The processing apparatus as in claim 4, wherein the first register is to store the control mask, each bit of the control mask to indicate a position within the third register to write a value from the second register.

6. The processing apparatus as in claim 5, wherein a control mask bit of one indicates that the value from the second register is to be written to a first region in the third register and a control mask bit of zero indicates that the value is to be written to a second region in the third register.

7. The processing apparatus as in claim 6, wherein the first region of the third register includes low byte-order bits, the second region of the third register includes high byte-order bits, and the first and second regions are in opposition.

8. The processing apparatus as in claim 4, wherein the first or second register is a 32-bit or a 64-bit general-purpose register.

9.
The processing apparatus as in claim 4, wherein the first or second register is a vector register.

10. The processing apparatus as in claim 9, wherein the vector register is a 128-bit, 256-bit, or 512-bit register to store packed data elements.

11. The processing apparatus as in claim 10, wherein the packed data elements include a byte, word, double word, or quad word data element.

12. A processor implemented method comprising:
fetching a single instruction to perform an inverse centrifuge operation, the instruction having two source operands and a destination operand;
decoding the single instruction into a decoded instruction;
fetching source operand values associated with at least one operand; and
executing the decoded instruction to separate bits from opposing regions of a source register specified by a second source operand based on a control mask indicated by a first source operand.

13. The method as in claim 12, wherein the first source operand is an immediate operand.

14. The method as in claim 12, wherein the first source operand specifies a register including the control mask.

15. The method as in claim 12, further including writing a result to a location indicated by the destination operand.

16. The method as in claim 15, wherein the destination operand indicates a vector register.

17. The method as in claim 15, wherein executing the decoded instruction includes performing at least one parallel extract operation to extract a field of bits from arbitrary positions in a source register and write the field to a contiguous region in a destination register.

18. The method as in claim 17, wherein the destination register is a temporary register.

19. The method as in claim 18, further including performing multiple parallel extract operations to multiple temporary registers.

20. The method as in claim 19, further comprising performing an OR operation on the multiple temporary registers before writing the result to the location indicated by the destination operand.

21.
A system comprising means to perform a method as in any one of claims 12-20.

22. A machine-readable medium having stored thereon data which, if performed by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform a method as in any one of claims 12-20.
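Claims 17-20 describe composing the operation from parallel extract operations into temporary registers followed by an OR. A minimal Python sketch of that decomposition follows; the `pext` helper models a parallel-extract operation in pure Python, and all names and the `width` parameter are illustrative assumptions, not part of any ISA:

```python
def pext(src: int, mask: int) -> int:
    """Parallel extract: gather the src bits selected by mask into a
    contiguous field in the low-order bits of the result."""
    out, pos = 0, 0
    while mask:
        low = mask & -mask        # lowest set bit of the mask
        if src & low:
            out |= 1 << pos
        pos += 1
        mask &= mask - 1          # clear that mask bit
    return out

def centrifuge_via_extract(src: int, mask: int, width: int = 64) -> int:
    """Compose the centrifuge from two extracts into temporaries,
    then OR the temporaries together."""
    full = (1 << width) - 1
    tmp_ones = pext(src, mask & full)      # bits under mask-1 positions
    tmp_zeros = pext(src, ~mask & full)    # bits under mask-0 positions
    shift = bin(mask & full).count("1")    # number of mask-1 bits
    return tmp_ones | (tmp_zeros << shift)
```

With this decomposition the mask-1 bits land in the low-order region and the mask-0 bits in the opposing high-order region, matching the destination layout described above.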
INSTRUCTION AND LOGIC TO PERFORM A CENTRIFUGE OPERATION

FIELD OF THE INVENTION

[0001] The present disclosure pertains to the field of processing logic, microprocessors, and associated instruction set architecture that, when executed by the processor or other processing logic, perform logical, mathematical, or other functional operations.

DESCRIPTION OF RELATED ART

[0002] Certain types of applications often require the same operation to be performed on a large number of data items (referred to as "data parallelism"). Single Instruction Multiple Data (SIMD) refers to a type of instruction that causes a processor to perform an operation on multiple data items. SIMD technology is especially suited to processors that can logically divide the bits in a register into a number of fixed-size data elements, each of which represents a separate value. For example, the bits in a 256-bit register may be specified as a source operand to be operated on as four separate 64-bit packed data elements (quad word (Q) size data elements), eight separate 32-bit packed data elements (double word (D) size data elements), sixteen separate 16-bit packed data elements (word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). This type of data is referred to as a "packed" data type or a "vector" data type, and operands of this data type are referred to as packed data operands or vector operands. In other words, a packed data item or vector refers to a sequence of packed data elements, and a packed data operand or a vector operand is a source or destination operand of a SIMD instruction (also known as a packed data instruction or a vector instruction).

DESCRIPTION OF THE FIGURES

[0003] Embodiments are illustrated by way of example and not limitation in the Figures of the accompanying drawings, in which:

[0004] FIG.
1A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments;

[0005] FIG. 1B is a block diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments;

[0006] FIGS. 2A-B are block diagrams of a more specific exemplary in-order core architecture;

[0007] FIG. 3 is a block diagram of a single core processor and a multicore processor with integrated memory controller and special purpose logic;

[0008] FIG. 4 illustrates a block diagram of a system in accordance with an embodiment;

[0009] FIG. 5 illustrates a block diagram of a second system in accordance with an embodiment;

[0010] FIG. 6 illustrates a block diagram of a third system in accordance with an embodiment;

[0011] FIG. 7 illustrates a block diagram of a system on a chip (SoC) in accordance with an embodiment;

[0012] FIG. 8 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments;

[0013] FIGS. 9A-E are block diagrams illustrating bit manipulation operations to perform a centrifuge operation, according to an embodiment;

[0014] FIG. 10 is a block diagram of a processor core in accordance with embodiments described herein;

[0015] FIG. 11 is a block diagram of a processing system including logic to perform a centrifuge operation according to an embodiment;

[0016] FIG. 12 is a flow diagram for logic to process an exemplary centrifuge instruction, according to an embodiment;

[0017] FIGS. 13A-B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments;

[0018] FIGS.
14A-D are block diagrams illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention; and

[0019] FIG. 15 is a block diagram of a scalar and vector register architecture according to an embodiment.

DETAILED DESCRIPTION

[0020] SIMD technology, such as that employed by the Intel® Core™ processors having an instruction set including x86, MMX™, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSE4.1, and SSE4.2 instructions, has enabled a significant improvement in application performance. An additional set of SIMD extensions, referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme, has been released (see, e.g., Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and Intel® Architecture Instruction Set Extensions Programming Reference, September 2014). Architectural extensions are described which extend the Intel Architecture (IA). However, the underlying principles are not limited to any particular ISA.

[0021] In one embodiment, a processing device implements a set of instructions to perform a centrifuge operation using vector or general purpose registers. In the centrifuge operation, also referred to as 'sheep and goats,' bits under a mask bit of 1 are separated to one side (e.g., the right side) and bits under a mask bit of 0 are put on the other side (e.g., the left side) of the destination element. The instructions use a control mask to determine which side of the destination register to write a source bit. The centrifuge instructions may be used to implement basic functionality that is a component of many bit-manipulation routines.

[0022] Described below are processor core architectures, followed by descriptions of exemplary processors and computer architectures according to embodiments described herein. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below.
It will be apparent, however, to one skilled in the art that the embodiments may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the various embodiments.

[0023] Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Processors may be implemented using a single processor core or can include multiple processor cores. The processor cores within the processor may be homogeneous or heterogeneous in terms of architecture instruction set.

[0024] Implementations of different processors include: 1) a central processor including one or more general purpose in-order cores for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific computing (e.g., many integrated core processors).
Such different processors lead to different computer system architectures, including: 1) the coprocessor on a separate chip from the central system processor; 2) the coprocessor on a separate die, but in the same package as the central system processor; 3) the coprocessor on the same die as other processor cores (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described processor (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality.

Exemplary Core Architectures

In-order and out-of-order core block diagram

[0025] Figure 1A is a block diagram illustrating an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline, according to an embodiment. Figure 1B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to an embodiment. The solid lined boxes in Figures 1A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

[0026] In Figure 1A, a processor pipeline 100 includes a fetch stage 102, a length decode stage 104, a decode stage 106, an allocation stage 108, a renaming stage 110, a scheduling (also known as a dispatch or issue) stage 112, a register read/memory read stage 114, an execute stage 116, a write back/memory write stage 118, an exception handling stage 122, and a commit stage 124.

[0027] Figure 1B shows processor core 190 including a front end unit 130 coupled to an execution engine unit 150, and both are coupled to a memory unit 170. The core 190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 190 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

[0028] The front end unit 130 includes a branch prediction unit 132 coupled to an instruction cache unit 134, which is coupled to an instruction translation lookaside buffer (TLB) 136, which is coupled to an instruction fetch unit 138, which is coupled to a decode unit 140. The decode unit 140 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.
In one embodiment, the core 190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 140 or otherwise within the front end unit 130). The decode unit 140 is coupled to a rename/allocator unit 152 in the execution engine unit 150.

[0029] The execution engine unit 150 includes the rename/allocator unit 152 coupled to a retirement unit 154 and a set of one or more scheduler unit(s) 156. The scheduler unit(s) 156 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 156 is coupled to the physical register file(s) unit(s) 158. Each of the physical register file(s) units 158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 158 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) unit(s) 158 is overlapped by the retirement unit 154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 154 and the physical register file(s) unit(s) 158 are coupled to the execution cluster(s) 160. The execution cluster(s) 160 includes a set of one or more execution units 162 and a set of one or more memory access units 164.
The execution units 162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 156, physical register file(s) unit(s) 158, and execution cluster(s) 160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

[0030] The set of memory access units 164 is coupled to the memory unit 170, which includes a data TLB unit 172 coupled to a data cache unit 174 coupled to a level 2 (L2) cache unit 176. In one exemplary embodiment, the memory access units 164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 172 in the memory unit 170. The instruction cache unit 134 is further coupled to the level 2 (L2) cache unit 176 in the memory unit 170.
The L2 cache unit 176 is coupled to one or more other levels of cache and eventually to a main memory.

[0031] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 100 as follows: 1) the instruction fetch unit 138 performs the fetch and length decoding stages 102 and 104; 2) the decode unit 140 performs the decode stage 106; 3) the rename/allocator unit 152 performs the allocation stage 108 and renaming stage 110; 4) the scheduler unit(s) 156 performs the schedule stage 112; 5) the physical register file(s) unit(s) 158 and the memory unit 170 perform the register read/memory read stage 114; 6) the execution cluster 160 performs the execute stage 116; 7) the memory unit 170 and the physical register file(s) unit(s) 158 perform the write back/memory write stage 118; 8) various units may be involved in the exception handling stage 122; and 9) the retirement unit 154 and the physical register file(s) unit(s) 158 perform the commit stage 124.

[0032] The core 190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM® instruction set (with optional additional extensions such as NEON) of ARM Holdings of Cambridge, England), including the instruction(s) described herein.
In one embodiment, the core 190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2, etc.), allowing the operations used by many multimedia applications to be performed using packed data.

[0033] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyper-Threading Technology).

[0034] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 134/174 and a shared L2 cache unit 176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

[0035] Figures 2A-B are block diagrams of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

[0036] Figure 2A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 202 and with its local subset of the Level 2 (L2) cache 204, according to an embodiment. In one embodiment, an instruction decoder 200 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 206 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 208 and a vector unit 210 use separate register sets (respectively, scalar registers 212 and vector registers 214) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 206, alternative embodiments may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

[0037] The local subset of the L2 cache 204 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 204. Data read by a processor core is stored in its L2 cache subset 204 and can be accessed quickly and in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip.
Each ring datapath is 1012-bits wide per direction.

[0038] Figure 2B is an expanded view of part of the processor core in Figure 2A according to an embodiment. Figure 2B includes an L1 data cache 206A, part of the L1 cache 204, as well as more detail regarding the vector unit 210 and the vector registers 214. Specifically, the vector unit 210 is a 16-wide vector-processing unit (VPU) (see the 16-wide ALU 228), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 220, numeric conversion with numeric convert units 222A-B, and replication with replication unit 224 on the memory input. Write mask registers 226 allow predicating resulting vector writes.

Processor with integrated memory controller and special purpose logic

[0039] Figure 3 is a block diagram of a processor 300 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to an embodiment.
The solid lined boxes in Figure 3 illustrate a processor 300 with a single core 302A, a system agent 310, and a set of one or more bus controller units 316, while the optional addition of the dashed lined boxes illustrates an alternative processor 300 with multiple cores 302A-N, a set of one or more integrated memory controller unit(s) 314 in the system agent unit 310, and special purpose logic 308.

[0040] Thus, different implementations of the processor 300 may include: 1) a CPU with the special purpose logic 308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 302A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 302A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 302A-N being a large number of general purpose in-order cores. Thus, the processor 300 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 300 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

[0041] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 306, and external memory (not shown) coupled to the set of integrated memory controller units 314.
The set of shared cache units 306 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 312 interconnects the integrated graphics logic 308, the set of shared cache units 306, and the system agent unit 310/integrated memory controller unit(s) 314, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 306 and cores 302A-N.

[0042] In some embodiments, one or more of the cores 302A-N are capable of multithreading. The system agent 310 includes those components coordinating and operating cores 302A-N. The system agent unit 310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 302A-N and the integrated graphics logic 308. The display unit is for driving one or more externally connected displays.

[0043] The cores 302A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

[0044] Figures 4-7 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable.
In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable.

[0045] Figure 4 shows a block diagram of a system 400 in accordance with an embodiment. The system 400 may include one or more processors 410, 415, which are coupled to a controller hub 420. In one embodiment the controller hub 420 includes a graphics memory controller hub (GMCH) 490 and an Input/Output Hub (IOH) 450 (which may be on separate chips); the GMCH 490 includes memory and graphics controllers to which are coupled memory 440 and a coprocessor 445; the IOH 450 couples input/output (I/O) devices 460 to the GMCH 490. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 440 and the coprocessor 445 are coupled directly to the processor 410, and the controller hub 420 is in a single chip with the IOH 450.

[0046] The optional nature of additional processors 415 is denoted in Figure 4 with broken lines. Each processor 410, 415 may include one or more of the processing cores described herein and may be some version of the processor 300.

[0047] The memory 440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 420 communicates with the processor(s) 410, 415 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 495.

[0048] In one embodiment, the coprocessor 445 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.
In one embodiment, controller hub 420 may include an integrated graphics accelerator.

[0049] There can be a variety of differences between the physical resources 410, 415 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

[0050] In one embodiment, the processor 410 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 445. Accordingly, the processor 410 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 445. Coprocessor(s) 445 accept and execute the received coprocessor instructions.

[0051] Figure 5 shows a block diagram of a first more specific exemplary system 500 in accordance with an embodiment. As shown in Figure 5, multiprocessor system 500 is a point-to-point interconnect system, and includes a first processor 570 and a second processor 580 coupled via a point-to-point interconnect 550. Each of processors 570 and 580 may be some version of the processor 300. In one embodiment of the invention, processors 570 and 580 are respectively processors 410 and 415, while coprocessor 538 is coprocessor 445. In another embodiment, processors 570 and 580 are respectively processor 410 and coprocessor 445.

[0052] Processors 570 and 580 are shown including integrated memory controller (IMC) units 572 and 582, respectively. Processor 570 also includes as part of its bus controller units point-to-point (P-P) interfaces 576 and 578; similarly, second processor 580 includes P-P interfaces 586 and 588. Processors 570, 580 may exchange information via a point-to-point (P-P) interface 550 using P-P interface circuits 578, 588.
As shown in Figure 5, IMCs 572 and 582 couple the processors to respective memories, namely a memory 532 and a memory 534, which may be portions of main memory locally attached to the respective processors.

[0053] Processors 570, 580 may each exchange information with a chipset 590 via individual P-P interfaces 552, 554 using point-to-point interface circuits 576, 594, 586, 598. Chipset 590 may optionally exchange information with the coprocessor 538 via a high-performance interface 539. In one embodiment, the coprocessor 538 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

[0054] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

[0055] Chipset 590 may be coupled to a first bus 516 via an interface 596. In one embodiment, first bus 516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

[0056] As shown in Figure 5, various I/O devices 514 may be coupled to first bus 516, along with a bus bridge 518 that couples first bus 516 to a second bus 520. In one embodiment, one or more additional processor(s) 515, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 516. In one embodiment, second bus 520 may be a low pin count (LPC) bus.
Various devices may be coupled to the second bus 520 including, for example, a keyboard and/or mouse 522, communication devices 527 and a storage unit 528 such as a disk drive or other mass storage device that may include instructions/code and data 530, in one embodiment. Further, an audio I/O 524 may be coupled to the second bus 520. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 5, a system may implement a multi-drop bus or other such architecture.

[0057] Figure 6 shows a block diagram of a second more specific exemplary system 600 in accordance with an embodiment. Like elements in Figures 5 and 6 bear like reference numerals, and certain aspects of Figure 5 have been omitted from Figure 6 in order to avoid obscuring other aspects of Figure 6.

[0058] Figure 6 illustrates that the processors 570, 580 may include integrated memory and I/O control logic ("CL") 572 and 582, respectively. Thus, the CL 572, 582 include integrated memory controller units and include I/O control logic. Figure 6 illustrates that not only are the memories 532, 534 coupled to the CL 572, 582, but also that I/O devices 614 are also coupled to the control logic 572, 582. Legacy I/O devices 615 are coupled to the chipset 590.

[0059] Figure 7 shows a block diagram of a SoC 700 in accordance with an embodiment. Similar elements in Figure 3 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs.
In Figure 7, an interconnect unit(s) 702 is coupled to: an application processor 710 which includes a set of one or more cores 202A-N and shared cache unit(s) 306; a system agent unit 310; a bus controller unit(s) 316; an integrated memory controller unit(s) 314; a set of one or more coprocessors 720 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 730; a direct memory access (DMA) unit 732; and a display unit 740 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 720 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

[0060] Embodiments of the mechanisms disclosed herein are implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments are implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

[0061] Program code, such as code 530 illustrated in Figure 5, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

[0062] The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired.
In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

[0063] One or more aspects of at least one embodiment may be implemented by representative data stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium ("tape") and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. For example, IP cores, such as processors developed by ARM Holdings, Ltd. and the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, may be licensed or sold to various customers or licensees and implemented in processors produced by these customers or licensees.

[0064] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), rewritable compact disks (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

[0065] Accordingly, embodiments also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware
Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

[0066] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

[0067] Figure 8 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to an embodiment. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof.
Figure 8 shows that a program in a high level language 802 may be compiled using an x86 compiler 804 to generate x86 binary code 806 that may be natively executed by a processor with at least one x86 instruction set core 816.

[0068] The processor with at least one x86 instruction set core 816 represents any processor that can perform substantially the same functions as an Intel® processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel® x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel® processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel® processor with at least one x86 instruction set core. The x86 compiler 804 represents a compiler that is operable to generate x86 binary code 806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 816. Similarly, Figure 8 shows that the program in the high level language 802 may be compiled using an alternative instruction set compiler 808 to generate alternative instruction set binary code 810 that may be natively executed by a processor without at least one x86 instruction set core 814 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Cambridge, England).

[0069] The instruction converter 812 is used to convert the x86 binary code 806 into code that may be natively executed by the processor without an x86 instruction set core 814.
This converted code is not likely to be the same as the alternative instruction set binary code 810 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 806.

Centrifuge Instruction

Centrifuge Operation

[0070] Embodiments described herein implement a bitwise centrifuge operation. The centrifuge operation, also referred to as 'sheep and goats,' separates bits from a source register to one side or the other of a destination register based on bits in a control mask. In one embodiment, source bits associated with a control mask bit of one are separated to the right (e.g., low order) side of the destination register, while bits associated with a control mask bit of zero are separated to the left (e.g., high order) side of the destination register. General purpose or vector registers may be used as source or destination registers. In one embodiment, general-purpose registers including 32-bit or 64-bit registers are supported. In one embodiment, vector registers including 128-bit, 256-bit, or 512-bit registers are supported, with the vector registers having support for packed byte, word, double word, or quad word data elements.

[0071] Performing a centrifuge using instructions from existing instruction sets requires a sequence of multiple instructions. Embodiments described herein implement centrifuge functionality in a single instruction. In one embodiment a centrifuge instruction as described herein includes a first source operand indicating a mask value.
Each bit of the mask with a value of one indicates that a corresponding bit in the source register is to be separated to the 'right' side of the destination register. Mask bits with a value of zero indicate that the corresponding bits are to be separated to the 'left' side of the destination register. In one embodiment the source register is indicated by a second source operand.

[0072] Exemplary source and destination register values for a centrifuge instruction are shown in Table 1 below.

Table 1 - Centrifuge Instruction

[0073] In Table 1 above, the SRC1 operand indicates a mask register storing a bitmask value. The SRC2 operand indicates a register storing a source value for the centrifuge operation. The letters used to illustrate the SRC2 value are shown not to indicate a particular value, but to indicate a particular bit position within a bit field. The DEST operand indicates a destination register to store the output of the centrifuge instruction. While an exemplary 16 bits are shown in Table 1, in various embodiments the instruction accepts 32-bit or 64-bit general-purpose register operands. In one embodiment, vector instructions are implemented to act upon vector registers having packed byte, word, double word, or quad word data elements. In one embodiment the registers include 128-bit, 256-bit, and 512-bit registers.

[0074] To illustrate the operation of an exemplary instruction, Table 2 below shows an exemplary sequence of multiple Intel Architecture (IA) instructions that may be used to perform a centrifuge operation on a set of registers. The exemplary instructions include a population count instruction, a parallel extract instruction, and a shift instruction. In one embodiment, vector instructions may also be used to perform the operation in parallel across multiple vector data elements.
Table 2 - Centrifuge Operation

01 popcnt rsi, rbx
02 pext rcx, rax, rbx
03 not rbx
04 pext rdx, rax, rbx
05 shlx rdx, rdx, rsi
06 or rcx, rdx

[0075] In the exemplary centrifuge logic shown in Table 2 above, the 'popcnt' symbol indicates a population count instruction. The population count instruction computes the Hamming weight of an input bit field (e.g., the Hamming distance of the bit field from a zero bit field of equal length). This instruction is used on the bitmask to determine the number of bits that are set. In one embodiment, the number of bits that are set in the bit field determines the divider between the 'right' and the 'left' side of the register. The 'pext' symbol indicates a parallel extract instruction. In one embodiment the parallel extract instruction extracts a single field of bits from arbitrary positions in a source register and right justifies the bits into a destination register. The 'shlx' symbol indicates a logical shift left instruction, which shifts a source bit field left by a specified number of bit positions.

[0076] The exemplary 'not' and 'or' instructions shown each perform the logical operations for which the instructions are named. The 'not' instruction computes the logical complement of the value in the input (e.g., each one bit becomes a zero bit). The 'or' instruction computes a logical OR of the values in the registers indicated by the source operands. The logical operations to compute the DEST value of Table 1 from the SRC1 and SRC2 values are illustrated in Figures 9A-E using the exemplary logic of Table 2.

[0077] Figures 9A-E are block diagrams illustrating bit manipulation operations to perform a centrifuge operation, according to an embodiment.
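As a rough illustration only (not part of the claimed embodiments), the Table 2 sequence can be modeled in portable C. Here `pext64` is a hypothetical software stand-in for the hardware parallel-extract instruction, `__builtin_popcountll` is a GCC/Clang-style built-in standing in for popcnt, and `centrifuge64` mirrors the popcnt, pext, not, shlx, and or steps of Table 2:

```c
#include <stdint.h>

/* Hypothetical software model of the parallel-extract ('pext') instruction:
 * gather the bits of src selected by mask and right-justify them. */
static uint64_t pext64(uint64_t src, uint64_t mask) {
    uint64_t result = 0;
    int out = 0;
    for (int bit = 0; bit < 64; bit++) {
        if ((mask >> bit) & 1) {
            result |= ((src >> bit) & 1ULL) << out;
            out++;
        }
    }
    return result;
}

/* Centrifuge modeled on the Table 2 sequence: source bits whose mask bit is
 * one pack toward the right of the result, bits whose mask bit is zero pack
 * toward the left, with the relative order of bits preserved on each side. */
static uint64_t centrifuge64(uint64_t src, uint64_t mask) {
    int ones = __builtin_popcountll(mask);   /* popcnt: divider position   */
    uint64_t right = pext64(src, mask);      /* pext with the mask         */
    uint64_t left = pext64(src, ~mask);      /* not + second pext          */
    if (ones == 64)                          /* avoid an undefined shift   */
        return right;
    return (left << ones) | right;           /* shlx + or                  */
}
```

For example, with src = 0xB4 (0b10110100) and mask = 0xD2 (0b11010010), the four mask-selected source bits at positions 1, 4, 6 and 7 are packed into the low nibble and the remaining bits are packed immediately above them.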
As illustrated in Figure 9A, a parallel extract operation, also shown at line (2) of Table 2, extracts bits from SRC2 902 to a temporary register (e.g., TMP1 906) based on the control mask bits provided in SRC1 904.

[0078] As illustrated in Figure 9B, a not operation, also shown at line (3) of Table 2, negates bits from SRC1 904 to create a negated control mask (e.g., SRC1' 914).

[0079] As illustrated in Figure 9C, a second parallel extract operation, also shown at line (4) of Table 2, extracts bits from SRC2 902 to a second temporary register (e.g., TMP2 916) based on the bits provided in SRC1' 914.

[0080] As illustrated in Figure 9D, a shift left operation, also shown at line (5) of Table 2, shifts bits from TMP2 916 to create a shifted temporary register (e.g., TMP2' 926). The number of positions to shift TMP2 916 is determined by the population count instruction shown at line (1) of Table 2.

[0081] As illustrated in Figure 9E, an 'or' operation, also shown at line (6) of Table 2, combines bits from TMP2' 926 and TMP1 906 into a destination register (e.g., DEST 936). According to embodiments, the destination register contains the result of the centrifuge operation.

Exemplary Processor Implementation

[0082] Figure 10 is a block diagram of a processor core 1000 including logic to perform operations in accordance with embodiments described herein. In one embodiment the in-order front end 1001 is the part of the processor core 1000 that fetches instructions to be executed and prepares them to be used later in the processor pipeline. In one embodiment, the front end 1001 is similar to the front end unit 130 of Figure 1, additionally including components such as an instruction prefetcher 1026 to preemptively fetch instructions from memory.
Fetched instructions may be fed to an instruction decoder 1028 to decode or interpret the instructions.

[0083] In one embodiment, the instruction decoder 1028 decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called micro-ops or uops) that the machine can execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the microarchitecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 1029 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 1034 for execution.

[0084] In one embodiment the processor core 1000 implements a complex instruction set. When the trace cache 1029 encounters a complex instruction, a microcode ROM 1032 provides the uops needed to complete the operation. Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, an instruction can be decoded into a small number of micro-ops for processing at the instruction decoder 1028. In another embodiment, an instruction can be stored within the microcode ROM 1032 should a number of micro-ops be needed to accomplish the operation. For example, in one embodiment if more than four micro-ops are needed to complete an instruction, the decoder 1028 accesses the microcode ROM 1032 to perform the instruction.

[0085] The trace cache 1029 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from the micro-code ROM 1032. After the microcode ROM 1032 finishes sequencing micro-ops for an instruction, the front end 1001 of the machine resumes fetching micro-ops from the trace cache 1029.
In one embodiment, the processor core 1000 includes an out-of-order execution engine 1003 where instructions are prepared for execution. The out-of-order execution logic has a number of buffers to re-order instruction flow to optimize performance as the instructions proceed through the instruction pipeline. For embodiments configured for microcode support, allocator logic allocates the machine buffers and resources that each uop uses during execution. Additionally, register-renaming logic renames logical registers to physical registers in a register file.

[0086] In one embodiment the allocator allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 1002, slow/general floating point scheduler 1004, and simple floating point scheduler 1006. The uop schedulers 1002, 1004, 1006 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 1002 of one embodiment can schedule on each half of the main clock cycle while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.

[0087] Register files 1008, 1010 sit between the schedulers 1002, 1004, 1006 and the execution units 1012, 1014, 1016, 1018, 1020, 1022, 1024 in the execution block 1011. In one embodiment there are separate register files 1008, 1010 for integer and floating point operations, respectively. In one embodiment each register file 1008, 1010 includes a bypass network that can bypass or forward completed results that have not yet been written into the register file to new dependent uops.
The integer register file 1008 and the floating point register file 1010 are also capable of communicating data with each other. For one embodiment, the integer register file 1008 is split into two separate register files, one register file for the low order 32 bits of data and a second register file for the high order 32 bits of data. In one embodiment the floating point register file 1010 has 128 bit wide entries.

[0088] The execution block 1011 contains the execution units 1012, 1014, 1016, 1018, 1020, 1022, 1024 to execute instructions. The register files 1008, 1010 store the integer and floating point data operand values that the micro-instructions need to execute. The processor core 1000 of one embodiment comprises a number of execution units: address generation unit (AGU) 1012, AGU 1014, fast ALU 1016, fast ALU 1018, slow ALU 1020, floating point ALU 1022, and floating point move unit 1024. For one embodiment, the floating point execution blocks 1022, 1024 execute floating point, MMX, SIMD, and SSE, or other operations. The floating point ALU 1022 of one embodiment includes a 64 bit by 64 bit floating point divider to execute divide, square root, and remainder micro-ops.

[0089] In one embodiment, instructions involving a floating point value may be handled with the floating point hardware. The ALU operations go to the high-speed ALU execution units 1016, 1018. The fast ALUs 1016, 1018 of one embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 1020, as the slow ALU 1020 includes integer execution hardware for long latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 1012, 1014. For one embodiment, the integer ALUs 1016, 1018, 1020 are described in the context of performing integer operations on 64 bit data operands.
In alternative embodiments, the ALUs 1016, 1018, 1020, can be implemented to support a variety of data bits including 16, 32, 128, 256, etc. Similarly, the floating point units 1022, 1024, can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 1022, 1024, can operate on 128 bit wide packed data operands in conjunction with SIMD and multimedia instructions.[0090] In one embodiment, the uop schedulers 1002, 1004, 1006, dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed, the processor core 1000 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. In one embodiment only the dependent operations need to be replayed and the independent ones are allowed to complete.[0091] In one embodiment a memory execution unit (MEU) 1041 is included. The MEU 1041 includes a memory order buffer (MOB) 1042, an SRAM unit 1030, a data TLB unit 1072, a data cache unit 1074, and an L2 cache unit 1076.[0092] The processor core 1000 may be configured for simultaneous multithreaded operation by sharing or partitioning various components. Any thread operating on the processor may access shared components. For example, space in a shared buffer or shared cache can be allocated to thread operations without regard to the requesting thread. In one embodiment, partitioned components are allocated per thread. Specifically which components are shared and which components are partitioned varies according to embodiments. In one embodiment, processor execution resources such as execution units (e.g., execution block 1011) and data caches (e.g., data TLB unit 1072, data cache unit 1074) are shared resources.
In one embodiment multi-level caches including the L2 cache unit 1076 and other higher level cache units (e.g., L3 cache, L4 cache) are shared among all executing threads. Other processor resources are partitioned and assigned or allocated on a per-thread basis, with specific partitions of the partitioned resources dedicated to specific threads. Exemplary partitioned resources include the MOB 1042, the register alias table (RAT) and reorder buffer (ROB) of the out-of-order engine 1003 (e.g., within the rename/allocator unit 152 and retirement unit 154 of Figure 1B), and one or more instruction decode queues associated with the instruction decoder 1028 of the front end 1001. In one embodiment, the instruction TLB (e.g., instruction TLB unit 136 of Figure 1B) and branch prediction unit (e.g., branch prediction unit 132 of Figure 1B) are also partitioned.[0093] The Advanced Configuration and Power Interface (ACPI) specification describes a power management policy that includes various "C states" that may be supported by processors and/or chipsets. For this policy, C0 is defined as the Run Time state in which the processor operates at high voltage and high frequency. C1 is defined as the Auto HALT state in which the core clock is stopped internally. C2 is defined as the Stop Clock state in which the core clock is stopped externally. C3 is defined as a Deep Sleep state in which all processor clocks are shut down, and C4 is defined as a Deeper Sleep state in which all processor clocks are stopped and the processor voltage is reduced to a lower data retention point. Various additional deeper sleep power states, C5 and C6, are also implemented in some processors. During the C6 state, all threads are stopped, thread state is stored in a C6 SRAM that remains powered during the C6 state, and voltage to the processor core is reduced to zero.[0094] Figure 11 is a block diagram of a processing system including logic to perform a centrifuge operation according to an embodiment.
The exemplary processing system includes a processor 1155 coupled to main memory 1100. The processor 1155 includes a decode unit 1130 with decode logic 1131 for decoding the centrifuge instructions. Additionally, a processor execution engine unit 1140 includes additional execution logic 1141 for executing the centrifuge instructions. Registers 1105 provide register storage for operands, control data and other types of data as the execution unit 1140 executes the instruction stream.[0095] The details of a single processor core ("Core 0") are illustrated in Figure 11 for simplicity. It will be understood, however, that each core shown in Figure 11 may have the same set of logic as Core 0. As illustrated, each core may also include a dedicated Level 1 (L1) cache 1112 and Level 2 (L2) cache 1111 for caching instructions and data according to a specified cache management policy. The L1 cache 1112 includes a separate instruction cache 1120 for storing instructions and a separate data cache 1121 for storing data. The instructions and data stored within the various processor caches are managed at the granularity of cache lines, which may be a fixed size (e.g., 64, 128, 512 Bytes in length).
Each core of this exemplary embodiment has an instruction fetch unit 1110 for fetching instructions from main memory 1100 and/or a shared Level 3 (L3) cache 1116; a decode unit 1130 for decoding the instructions; an execution unit 1140 for executing the instructions; and a write back/retire unit 1150 for retiring the instructions and writing back the results.[0096] The instruction fetch unit 1110 includes various well known components including a next instruction pointer 1103 for storing the address of the next instruction to be fetched from memory 1100 (or one of the caches); an instruction translation look-aside buffer (ITLB) 1104 for storing a map of recently used virtual-to-physical instruction addresses to improve the speed of address translation; a branch prediction unit 1102 for speculatively predicting instruction branch addresses; and branch target buffers (BTBs) 1101 for storing branch addresses and target addresses. Once fetched, instructions are then streamed to the remaining stages of the instruction pipeline including the decode unit 1130, the execution unit 1140, and the write back/retire unit 1150.[0097] Figure 12 is a flow diagram for logic to process an exemplary centrifuge instruction, according to an embodiment. At block 1202, the instruction pipeline begins with a fetch of an instruction to perform a centrifuge operation. In some embodiments the instruction accepts a first input operand, a second input operand, and a destination operand. In such embodiments, the input operands include a control mask and a source register. The source register may be a general-purpose register or a vector register storing packed byte, word, double word, or quad word values. The control mask may be provided in a general purpose register that is used to control the separation of bits from a source general-purpose register or for each element of a source vector register.
In one embodiment the control mask may be provided via a vector register to control bit separation from a source vector register. In one embodiment, the destination operand provides a destination register, which may be a general-purpose register or a vector register configured to store packed byte, word, double word or quad word values.[0098] At block 1204, a decode unit decodes the instruction into a decoded instruction. In one embodiment, the decoded instruction is a single operation. In one embodiment the decoded instruction includes one or more logical micro-operations to perform each sub-element of the instruction. The micro-operations can be hard-wired or microcode operations that cause components of the processor, such as an execution unit, to perform various operations to implement the instruction.[0099] At block 1206 an execution unit of the processor executes the decoded instruction to perform a centrifuge (e.g., sheep-and-goats) operation that separates bits from a source register into opposing sides of a destination register or data element of a vector register based on a control mask. Exemplary logic operations to perform a centrifuge operation are shown in Figures 9A-E, although the specific operations performed may vary according to embodiments, and alternative or additional logic may be used to perform the centrifuge operation. During execution, one or more execution units of the processor read bits of source data from the source register or source register vector element and write the bits to one side or an opposing side of a destination register based on the control mask. In one embodiment, a control mask bit of one indicates that a value is to be written to the 'right' side of a register, while a control mask bit of zero indicates that a value is to be written to the 'left' side of the register. According to embodiments, the 'right' and 'left' side of the register may respectively indicate the low order and high order bits of the register.
As described herein, the high and low order bits are defined as the most significant and least significant bits independent of the convention used to interpret the bytes making up a data word when those bytes are stored in computer memory. However, as byte order may vary according to embodiments and configurations, it will be understood that the byte order associated with the respective register sides and word addresses/offsets may differ without violating the scope of the various embodiments.[0100] At block 1208 the processor writes the result of the executed instruction to the processor register file. The processor register file includes one or more physical register files that store various data types, including scalar integer or packed integer data types. In one embodiment the register file includes the general purpose or vector register indicated as the destination register by the instruction destination operand.Exemplary Instruction Formats[0101] Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.[0102] A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.[0103] Figures 13A-13B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to an embodiment.
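The centrifuge (sheep-and-goats) behavior described at blocks 1202-1208 can be modeled in software. The Python sketch below is illustrative only: the function name, the default 64-bit register width, and the convention of packing mask-selected bits into the low-order ('right') side are assumptions for clarity, not the patented hardware implementation.

```python
def centrifuge(src: int, mask: int, width: int = 64) -> int:
    """Separate the bits of src based on a control mask.

    Bits of src whose control mask bit is 1 are packed, in order, into the
    low-order ('right') side of the result; bits whose mask bit is 0 are
    packed into the high-order ('left') side. Width of 64 is an assumption.
    """
    low = high = 0
    n_low = n_high = 0
    for i in range(width):
        bit = (src >> i) & 1
        if (mask >> i) & 1:
            low |= bit << n_low     # mask bit 1: route to the 'right' side
            n_low += 1
        else:
            high |= bit << n_high   # mask bit 0: route to the 'left' side
            n_high += 1
    # Place the 'left' group above the 'right' group in the destination.
    return (high << n_low) | low
```

For example, with a 4-bit source of 0b1100 and control mask 0b1010, the mask-selected bits (0, 1) land in the low half and the remaining bits (0, 1) in the high half, yielding 0b1010.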
Figure 13A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to an embodiment; while Figure 13B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to an embodiment. Specifically, a generic vector friendly instruction format 1300 is shown for which class A and class B instruction templates are defined, both of which include no memory access 1305 instruction templates and memory access 1320 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set. [0104] Embodiments will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes). However, alternate embodiments support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).[0105] The class A instruction templates in Figure 13A include: 1) within the no memory access 1305 instruction templates there is shown a no memory access, full round control type operation 1310 instruction template and a no memory access, data transform type operation 1315 instruction template; and 2)
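As a quick check of the operand geometries listed in paragraph [0104], the number of packed data elements is simply the operand size in bits divided by the data element width. The helper below is a hypothetical illustration, not part of the described format:

```python
def element_count(vector_bytes: int, element_bits: int) -> int:
    """Number of packed data elements in a vector operand.

    vector_bytes: vector operand length in bytes (e.g., 64, 32, 16).
    element_bits: data element width in bits (e.g., 8, 16, 32, 64).
    """
    return (vector_bytes * 8) // element_bits
```

For instance, a 64 byte vector with 32-bit elements holds 16 elements, and with 64-bit elements holds 8 elements, matching the counts stated above.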
within the memory access 1320 instruction templates there is shown a memory access, temporal 1325 instruction template and a memory access, non-temporal 1330 instruction template. The class B instruction templates in Figure 13B include: 1) within the no memory access 1305 instruction templates there is shown a no memory access, write mask control, partial round control type operation 1312 instruction template and a no memory access, write mask control, vsize type operation 1317 instruction template; and 2) within the memory access 1320 instruction templates there is shown a memory access, write mask control 1327 instruction template.[0106] The generic vector friendly instruction format 1300 includes the following fields listed below in the order illustrated in Figures 13A-13B.[0107] Format field 1340 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format. [0108] Base operation field 1342 - its content distinguishes different base operations.[0109] Register index field 1344 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g. 32x512, 16x128, 32x1024, 64x1024) register file.
While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).[0110] Modifier field 1346 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 1305 instruction templates and memory access 1320 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.[0111] Augmentation operation field 1350 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment, this field is divided into a class field 1368, an alpha field 1352, and a beta field 1354.
The augmentation operation field 1350 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.[0112] Scale field 1360 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).[0113] Displacement Field 1362A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).[0114] Displacement Factor Field 1362B (note that the juxtaposition of displacement field 1362A directly over displacement factor field 1362B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 1374 (described later herein) and the data manipulation field 1354C. The displacement field 1362A and the displacement factor field 1362B are optional in the sense that they are not used for the no memory access 1305 instruction templates and/or different embodiments may implement only one or none of the two.[0115] Data element width field 1364 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions).
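The address generation expression above, 2^scale * index + base + scaled displacement, can be sketched as follows. The function and parameter names are illustrative assumptions; access_size stands for N, the memory access size in bytes that the hardware determines at runtime from the full opcode field and the data manipulation field:

```python
def effective_address(base: int, index: int, scale: int,
                      disp_factor: int, access_size: int) -> int:
    """Compute 2^scale * index + base + (disp_factor * access_size).

    disp_factor is the displacement factor field content; multiplying it
    by the access size (N) reconstructs the full byte displacement while
    omitting the redundant low-order bits from the encoding.
    """
    return (2 ** scale) * index + base + disp_factor * access_size
```

For example, with base 0x1000, index 4, scale 3, a displacement factor of 2, and an 8-byte access, the effective address is 0x1000 + 32 + 16.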
This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.[0116] Write mask field 1370 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 1370 allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
While embodiments are described in which the write mask field's 1370 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 1370 content indirectly identifies that masking to be performed), alternative embodiments instead or additionally allow the write mask field's 1370 content to directly specify the masking to be performed.[0117] Immediate field 1372 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.[0118] Class field 1368 - its content distinguishes between different classes of instructions. With reference to Figures 13A-B, the contents of this field select between class A and class B instructions. In Figures 13A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 1368A and class B 1368B for the class field 1368 respectively in Figures 13A-B).Instruction Templates of Class A[0119] In the case of the non-memory access 1305 instruction templates of class A, the alpha field 1352 is interpreted as an RS field 1352A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 1352A.1 and data transform 1352A.2 are respectively specified for the no memory access, round type operation 1310 and the no memory access, data transform type operation 1315 instruction templates), while the beta field 1354 distinguishes which of the operations of the specified type is to be performed.
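The merging versus zeroing write-masking behavior of paragraph [0116] can be sketched per element. The function below is a hypothetical model (lists stand in for vector registers and the mask register); it is not the hardware implementation:

```python
def apply_writemask(dest, result, mask_bits, zeroing):
    """Per element: a mask bit of 1 takes the new result; a mask bit of 0
    either preserves the old destination element (merging-writemasking)
    or writes 0 (zeroing-writemasking)."""
    out = []
    for old, new, m in zip(dest, result, mask_bits):
        if m:
            out.append(new)       # unmasked position: write the result
        elif zeroing:
            out.append(0)         # zeroing: masked position becomes 0
        else:
            out.append(old)       # merging: masked position is preserved
    return out
```

With destination [1, 2, 3, 4], result [10, 20, 30, 40], and mask 1010, merging yields [10, 2, 30, 4] while zeroing yields [10, 0, 30, 0]; note the modified elements need not be consecutive.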
In the no memory access 1305 instruction templates, the scale field 1360, the displacement field 1362A, and the displacement scale field 1362B are not present.No-Memory Access Instruction Templates - Full Round Control Type Operation[0120] In the no memory access full round control type operation 1310 instruction template, the beta field 1354 is interpreted as a round control field 1354A, whose content(s) provide static rounding. While in the described embodiments the round control field 1354A includes a suppress all floating point exceptions (SAE) field 1356 and a round operation control field 1358, alternative embodiments may encode both these concepts into the same field or only have one or the other of these concepts/fields (e.g., may have only the round operation control field 1358).[0121] SAE field 1356 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 1356 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.[0122] Round operation control field 1358 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 1358 allows for the changing of the rounding mode on a per instruction basis.
In one embodiment a processor includes a control register for specifying rounding modes and the round operation control field's 1350 content overrides that register value.No Memory Access Instruction Templates - Data Transform Type Operation[0123] In the no memory access data transform type operation 1315 instruction template, the beta field 1354 is interpreted as a data transform field 1354B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).[0124] In the case of a memory access 1320 instruction template of class A, the alpha field 1352 is interpreted as an eviction hint field 1352B, whose content distinguishes which one of the eviction hints is to be used (in Figure 13A, temporal 1352B.1 and non-temporal 1352B.2 are respectively specified for the memory access, temporal 1325 instruction template and the memory access, non-temporal 1330 instruction template), while the beta field 1354 is interpreted as a data manipulation field 1354C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 1320 instruction templates include the scale field 1360, and optionally the displacement field 1362A or the displacement scale field 1362B. [0125] Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask. Memory Access Instruction Templates - Temporal[0126] Temporal data is data likely to be reused soon enough to benefit from caching.
This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.Memory Access Instruction Templates - Non-Temporal[0127] Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely. Instruction Templates of Class B[0128] In the case of the instruction templates of class B, the alpha field 1352 is interpreted as a write mask control (Z) field 1352C, whose content distinguishes whether the write masking controlled by the write mask field 1370 should be a merging or a zeroing.[0129] In the case of the non-memory access 1305 instruction templates of class B, part of the beta field 1354 is interpreted as an RL field 1357A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 1357A.1 and vector length (VSIZE) 1357A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 1312 instruction template and the no memory access, write mask control, VSIZE type operation 1317 instruction template), while the rest of the beta field 1354 distinguishes which of the operations of the specified type is to be performed. In the no memory access 1305 instruction templates, the scale field 1360, the displacement field 1362A, and the displacement scale field 1362B are not present.[0130] In the no memory access, write mask control, partial round control type operation 1312 instruction template, the rest of the beta field 1354 is interpreted as a round operation field 1359A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).
[0131] Round operation control field 1359A - just as round operation control field 1358, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 1359A allows for the changing of the rounding mode on a per instruction basis. In one embodiment a processor includes a control register for specifying rounding modes and the round operation control field's 1350 content overrides that register value.[0132] In the no memory access, write mask control, VSIZE type operation 1317 instruction template, the rest of the beta field 1354 is interpreted as a vector length field 1359B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 bit).[0133] In the case of a memory access 1320 instruction template of class B, part of the beta field 1354 is interpreted as a broadcast field 1357B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 1354 is interpreted as the vector length field 1359B. The memory access 1320 instruction templates include the scale field 1360, and optionally the displacement field 1362A or the displacement scale field 1362B.[0134] With regard to the generic vector friendly instruction format 1300, a full opcode field 1374 is shown including the format field 1340, the base operation field 1342, and the data element width field 1364. While one embodiment is shown where the full opcode field 1374 includes all of these fields, the full opcode field 1374 includes less than all of these fields in embodiments that do not support all of them.
The full opcode field 1374 provides the operation code (opcode).[0135] The augmentation operation field 1350, the data element width field 1364, and the write mask field 1370 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.[0136] The combination of the write mask field and the data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.[0137] The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B.
Of course, features from one class may also be implemented in the other class in different embodiments. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.Exemplary Specific Vector Friendly Instruction Format[0138] Figure 14 is a block diagram illustrating an exemplary specific vector friendly instruction format according to an embodiment. Figure 14 shows a specific vector friendly instruction format 1400 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 1400 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 13 into which the fields from Figure 14 map are illustrated.[0139] It should be understood that, although embodiments are described with reference to the specific vector friendly instruction format 1400 in the context of the generic vector friendly instruction format 1300 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 1400 except where claimed.
For example, the generic vector friendly instruction format 1300 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 1400 is shown as having fields of specific sizes. By way of specific example, while the data element width field 1364 is illustrated as a one bit field in the specific vector friendly instruction format 1400, the invention is not so limited (that is, the generic vector friendly instruction format 1300 contemplates other sizes of the data element width field 1364).[0140] The specific vector friendly instruction format 1400 includes the following fields listed below in the order illustrated in Figure 14A.[0141] EVEX Prefix (Bytes 0-3) 1402 - is encoded in a four-byte form.[0142] Format Field 1340 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 1340 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).[0143] The second-fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.[0144] REX field 1405 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), EVEX.X bit field (EVEX byte 1, bit [6] - X), and EVEX.B bit field (EVEX byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e. ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B. [0145] REX' field 1310 - this is the first part of the REX' field 1310 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set.
In one embodiment, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

[0146] Opcode map field 1415 (EVEX Byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

[0147] Data element width field 1364 (EVEX Byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

[0148] EVEX.vvvv 1420 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 1420 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

[0149] EVEX.U 1368 Class field (EVEX Byte 2, bit [2] - U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

[0150] Prefix encoding field 1425 (EVEX Byte 2, bits [1:0] - pp) - provides additional bits for the base operation field.
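As an illustration of the inverted (1s complement) register encodings described above, the following Python sketch models how a decoder might recover register indexes from EVEX.vvvv and from the inverted extension bits combined with ModRM.reg. This is an informal behavioral model, not part of any claimed embodiment; the function names are chosen here for illustration only.

```python
def decode_vvvv(vvvv):
    """EVEX.vvvv stores the register specifier in inverted (1s complement)
    form: 0b1111 encodes register 0 and 0b0000 encodes register 15."""
    return (~vvvv) & 0xF

def decode_reg(evex_r_prime, evex_r, rrr):
    """Form the 5-bit R'Rrrr register index. EVEX.R' and EVEX.R are stored
    inverted, so each is flipped before being prepended to ModRM.reg (rrr)."""
    return ((evex_r_prime ^ 1) << 4) | ((evex_r ^ 1) << 3) | (rrr & 0b111)

# ZMM0 is encoded as 1111b and ZMM15 as 0000b, per the text above.
assert decode_vvvv(0b1111) == 0
assert decode_vvvv(0b0000) == 15
# R'=1 (inverted -> 0), R=0 (inverted -> 1), rrr=0b010 -> register 10
assert decode_reg(1, 0, 0b010) == 10
```

The inversion is what allows the 0x62 escape byte to be distinguished from the legacy BOUND encoding, as noted in the passage above.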
In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

[0151] Alpha field 1352 (EVEX Byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

[0152] Beta field 1354 (EVEX Byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

[0153] REX' field 1310 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

[0154] Write mask field 1370 (EVEX Byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

[0155] Real Opcode Field 1430 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

[0156] MOD R/M Field 1440 (Byte 5) includes MOD field 1442, Reg field 1444, and R/M field 1446. As previously described, the MOD field's 1442 content distinguishes between memory access and non-memory access operations. The role of Reg field 1444 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 1446 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

[0157] Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the scale field's 1350 content is used for memory address generation. SIB.xxx 1454 and SIB.bbb 1456 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

[0158] Displacement field 1362A (Bytes 7-10) - when MOD field 1442 contains 10, bytes 7-10 are the displacement field 1362A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

[0159] Displacement factor field 1362B (Byte 7) - when MOD field 1442 contains 01, byte 7 is the displacement factor field 1362B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity.
Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 1362B is a reinterpretation of disp8; when using displacement factor field 1362B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 1362B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 1362B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).

[0160] Immediate field 1372 operates as previously described.

Full Opcode Field

[0161] Figure 14B is a block diagram illustrating the fields of the specific vector friendly instruction format 1400 that make up the full opcode field 1374 according to one embodiment. Specifically, the full opcode field 1374 includes the format field 1340, the base operation field 1342, and the data element width (W) field 1364.
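The disp8*N scaling described in paragraph [0159] above can be modeled with a short Python sketch. This is an illustrative model only; the function name is an assumption of this sketch, not an architectural identifier.

```python
def effective_displacement(disp8, n):
    """disp8*N: the stored byte is sign-extended and then scaled by the
    memory operand size N, so one byte covers -128*N..127*N in steps of N."""
    signed = disp8 - 256 if disp8 >= 128 else disp8  # sign-extend the byte
    return signed * n

# With a 64-byte memory operand, the encoded byte 0x01 addresses offset 64
# and 0xFF addresses offset -64, versus only -128..127 for legacy disp8.
assert effective_displacement(0x01, 64) == 64
assert effective_displacement(0xFF, 64) == -64
```

This shows why the hardware needs only to scale the decoded byte by the memory operand size to obtain the byte-wise address offset, with no change to the ModRM/SIB encoding rules.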
The base operation field 1342 includes the prefix encoding field 1425, the opcode map field 1415, and the real opcode field 1430.

Register Index Field

[0162] Figure 14C is a block diagram illustrating the fields of the specific vector friendly instruction format 1400 that make up the register index field 1344 according to one embodiment. Specifically, the register index field 1344 includes the REX field 1405, the REX' field 1410, the MODR/M.reg field 1444, the MODR/M.r/m field 1446, the VVVV field 1420, the xxx field 1454, and the bbb field 1456.

Augmentation Operation Field

[0163] Figure 14D is a block diagram illustrating the fields of the specific vector friendly instruction format 1400 that make up the augmentation operation field 1350 according to one embodiment. When the class (U) field 1368 contains 0, it signifies EVEX.U0 (class A 1368A); when it contains 1, it signifies EVEX.U1 (class B 1368B). When U=0 and the MOD field 1442 contains 11 (signifying a no memory access operation), the alpha field 1352 (EVEX Byte 3, bit [7] - EH) is interpreted as the rs field 1352A. When the rs field 1352A contains a 1 (round 1352A.1), the beta field 1354 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the round control field 1354A. The round control field 1354A includes a one bit SAE field 1356 and a two bit round operation field 1358. When the rs field 1352A contains a 0 (data transform 1352A.2), the beta field 1354 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three bit data transform field 1354B. When U=0 and the MOD field 1442 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 1352 (EVEX Byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 1352B and the beta field 1354 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three bit data manipulation field 1354C.

[0164] When U=1, the alpha field 1352 (EVEX Byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 1352C.
When U=1 and the MOD field 1442 contains 11 (signifying a no memory access operation), part of the beta field 1354 (EVEX Byte 3, bit [4] - S0) is interpreted as the RL field 1357A; when it contains a 1 (round 1357A.1) the rest of the beta field 1354 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 1359A, while when the RL field 1357A contains a 0 (VSIZE 1357A.2) the rest of the beta field 1354 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 1359B (EVEX Byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 1442 contains 00, 01, or 10 (signifying a memory access operation), the beta field 1354 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the vector length field 1359B (EVEX Byte 3, bits [6-5] - L1-0) and the broadcast field 1357B (EVEX Byte 3, bit [4] - B).

Exemplary Register Architecture

[0165] Figure 15 is a block diagram of a register architecture 1500 according to one embodiment. In the embodiment illustrated, there are 32 vector registers 1510 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 1400 operates on these overlaid registers as illustrated in Table 3 below.

Table 3 - Register File
  Instruction templates that do not include the vector length field 1359B: zmm registers (the vector length is 64 byte).
  Instruction templates that do include the vector length field 1359B (class B; U=1): zmm, ymm, or xmm registers (the vector length is 64 byte, 32 byte, or 16 byte) depending on the vector length field 1359B.

[0166] In other words, the vector length field 1359B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 1359B operate on the maximum vector length.
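The halving relationship among vector lengths described in paragraph [0166] can be sketched as follows. This assumes a conventional two-bit encoding in which 0 selects the shortest (16 byte) length; the specific encoding values are an assumption of this sketch rather than a statement of the claimed format.

```python
def vector_length_bytes(ll):
    """Map a two-bit vector length field to a length in bytes.
    Each length is half the next: 16, 32, 64 bytes (xmm, ymm, zmm).
    The remaining encoding (0b11) is treated as reserved in this sketch."""
    if ll not in (0, 1, 2):
        raise ValueError("reserved vector length encoding")
    return 16 << ll

# Maximum length 64 bytes, with each shorter length half the preceding one.
assert [vector_length_bytes(v) for v in (2, 1, 0)] == [64, 32, 16]
```

Instruction templates without the vector length field would simply use the maximum (64 byte) length.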
Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 1400 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.

[0167] Write mask registers 1515 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 1515 are 16 bits in size. As previously described, in one embodiment the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

[0168] General-purpose registers 1525 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

[0169] Scalar floating point stack register file (x87 stack) 1545, on which is aliased the MMX packed integer flat register file 1550 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

[0170] Alternative embodiments may use wider or narrower registers.
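The write-mask behavior of paragraph [0167], including the hardwired all-ones mask selected by the k0 encoding, can be sketched in Python. This models merge-masking over a 4-element vector; the names, the 4-element width, and the choice of merge-masking (rather than the zeroing behavior selectable by the Z field) are illustrative assumptions of this sketch.

```python
def apply_write_mask(kkk, masks, result, dest):
    """Merge `result` into `dest` under write mask register kkk.
    The k0 encoding selects a hardwired all-ones mask, effectively
    disabling write masking for the instruction."""
    mask = 0xFFFF if kkk == 0 else masks[kkk]
    return [r if (mask >> i) & 1 else d
            for i, (r, d) in enumerate(zip(result, dest))]

masks = [0, 0b0101, 0, 0, 0, 0, 0, 0]  # k1 = 0b0101; k0 slot is unused
# k1 passes elements 0 and 2 through; elements 1 and 3 keep destination values.
assert apply_write_mask(1, masks, [9, 9, 9, 9], [1, 2, 3, 4]) == [9, 2, 9, 4]
# The k0 encoding disables masking, so every element is written.
assert apply_write_mask(0, masks, [9, 9, 9, 9], [1, 2, 3, 4]) == [9, 9, 9, 9]
```

A hardware implementation could equally bypass the masking logic entirely for the k0 encoding, as the text notes.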
Additionally, alternative embodiments may use more, fewer, or different register files and registers.

[0171] Described herein is a system of one or more computers that can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination thereof installed on the system to cause the system to perform actions. Additionally, one or more computer programs can be configured to perform particular operations or actions by virtue of including instructions or hardware logic that, when executed or utilized by a processing apparatus, cause the apparatus to perform the actions described herein. In one embodiment the processing apparatus includes decode logic to decode a first instruction into a decoded first instruction including a first operand and a second operand. The processing apparatus additionally includes an execution unit to execute the first decoded instruction to perform a centrifuge operation.

[0172] The centrifuge operation is to separate bits from a source register specified by the second operand based on a control mask indicated by the first operand. In one embodiment the second operand specifies the source register insofar as it names an architectural register, which may be a general purpose or vector register storing source data or source data elements. The first operand indicates the control mask insofar as it names an architectural register, or, in one embodiment, may directly indicate a control mask value as an immediate operand, or may include a memory address that includes a control mask. Other embodiments include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform actions specified herein.

[0173] For example, in one embodiment the processing apparatus further includes an instruction fetch unit to fetch the first instruction, where the instruction is a single machine-level instruction.
In one embodiment the processing apparatus further includes a register file to commit a result of the centrifuge operation described herein to a location specified by a destination operand, which may be a general purpose or vector register. The register file unit can be configured to store a set of physical registers including a first register to store a first source operand value, a second register to store a second source operand value, and a third register to store at least one data element of the result of the aforementioned centrifuge operation.

[0174] In one embodiment the first register is to store the control mask, where the control mask includes multiple bits, each bit of the control mask indicating a region of the destination register to which a value read from a corresponding bit position of the source register is to be written. In one embodiment a control mask bit of one indicates that a value is to be written to a first region of the third register, while a control mask bit of zero indicates that a value is to be written to a second region of the third register.

[0175] In one embodiment the first region of the third register includes low byte-order bits of the register and the second region of the third register includes high byte-order bits of the register. In one embodiment, the lower byte-order bits of the first region are classified as the 'right' side of the register, while the high byte-order bits of the second region are classified as the 'left' side of the register. It will be understood, however, that the centrifuge operation can be configured to operate on opposing sides of a register, or multiple vector elements in the case of a vector register, without limit as to byte order or address convention associated with the register.

[0176] In one embodiment the instructions described herein refer to specific configurations of hardware, such as application specific integrated circuits (ASICs), configured to perform certain operations or having a predetermined functionality.
Such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed bus controllers). The storage device and signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device.

[0177] In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. In certain instances, well-known structures and functions were not described in elaborate detail in order to avoid obscuring the subject matter of the present invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Accordingly, the scope and spirit of the invention should be judged in terms of the claims that follow.
Some embodiments include a semiconductor construction having a gate extending into a semiconductor base. Conductively-doped source and drain regions are within the base adjacent the gate. A gate dielectric has a first segment between the source region and the gate, a second segment between the drain region and the gate, and a third segment between the first and second segments. At least a portion of the gate dielectric comprises ferroelectric material. In some embodiments the ferroelectric material is within each of the first, second and third segments. In some embodiments, the ferroelectric material is within the first segment or the third segment. In some embodiments, a transistor has a gate, a source region and a drain region; and has a channel region between the source and drain regions. The transistor has a gate dielectric which contains ferroelectric material between the source region and the gate.
CLAIMS

The invention claimed is:

1. A semiconductor construction, comprising: a semiconductor base; a gate extending into the base; a first region of the base adjacent the gate being a conductively-doped source region, and a second region of the base adjacent the gate and spaced from the first region being a conductively-doped drain region; a gate dielectric comprising a first segment between the source region and the gate, a second segment between the drain region and the gate, and a third segment between the first and second segments; and wherein at least a portion of the gate dielectric comprises ferroelectric material.

2. The semiconductor construction of claim 1 wherein the ferroelectric material comprises one or more of Hf, Zr, Si, O, Y, Ba, Mg and Ti.

3. The semiconductor construction of claim 1 wherein the ferroelectric material is within the first, second and third segments.

4. The semiconductor construction of claim 3 wherein: the gate dielectric, along a cross-section, is configured as an upwardly-opening container having the gate therein; the first segment of the gate dielectric comprises a first substantially vertical leg of the container, the second segment of the gate dielectric comprises a second substantially vertical leg of the container, and the third segment of the gate dielectric comprises a bottom of the container; the gate dielectric comprises a first material as an outer boundary of the container and which is directly against the semiconductor base; the gate dielectric comprises a second material between the first material and the gate; the second material is the ferroelectric material; and the first material is a non-ferroelectric material.

5. The semiconductor construction of claim 4 wherein the first material is a substantially consistent thickness along an entirety of the container.

6.
The semiconductor construction of claim 4 wherein the first material is thicker along the bottom of the container than along the first and second substantially vertical legs of the container.

7. The semiconductor construction of claim 4 wherein the non-ferroelectric material consists of silicon dioxide or silicon nitride.

8. The semiconductor construction of claim 1 wherein the ferroelectric material is only within the first segment.

9. The semiconductor construction of claim 1 wherein the ferroelectric material is only within the third segment.

10. The semiconductor construction of claim 1 wherein the source region comprises a dopant gradient in which dopant concentration is lighter in a location relatively deep within the source region as compared to a location relatively shallow within the source region.

11. The semiconductor construction of claim 1 wherein the drain region is more heavily doped than at least some of the source region.

12. The semiconductor construction of claim 1 wherein the entirety of the drain region is more heavily doped than any portion of the source region.

13. The semiconductor construction of claim 1 further comprising a charge storage device electrically coupled to the drain region.

14. A transistor, comprising: a gate; a source region; a drain region; a channel region between the source and drain regions; and a gate dielectric between the gate and the source, drain and channel regions; the gate dielectric comprising ferroelectric material between the source region and the gate.

15. The transistor of claim 14 wherein the ferroelectric material comprises one or more of Hf, Zr, Si, O, Y, Ba, Mg and Ti.

16. The transistor of claim 14 wherein the gate dielectric consists of non-ferroelectric material along an entirety of an interface between the gate dielectric and the drain region.

17.
The transistor of claim 14 wherein the gate dielectric consists of ferroelectric material along at least a portion of an interface between the gate dielectric and the source region.

18. The transistor of claim 14 wherein the gate dielectric comprises a first segment between the source region and the gate, a second segment between the drain region and the gate, and a third segment between the first and second segments; wherein only a portion of the first segment consists of ferroelectric material; and wherein a remainder of the first segment together with entireties of the second and third segments consist of non-ferroelectric material.

19. The transistor of claim 18 wherein the portion of the first segment directly contacts both the gate and the source region.

20. The transistor of claim 18 wherein the portion of the first segment directly contacts the gate and is spaced from the source region by non-ferroelectric material.

21. The transistor of claim 20 wherein the non-ferroelectric material comprises one or both of silicon dioxide and silicon nitride.

22. The transistor of claim 14 wherein: the gate and gate dielectric are recessed within a semiconductor material; and the source and drain regions correspond to conductively-doped regions of the semiconductor material.

23. The transistor of claim 22 wherein the source region comprises a dopant gradient in which dopant concentration is lighter in a location relatively deep within the source region as compared to a location relatively shallow within the source region.

24. The transistor of claim 23 wherein an entirety of the drain region is more heavily doped than any of the source region.

25. A memory cell comprising a charge storage device electrically coupled to the drain region of the transistor of claim 14.

26. The memory cell of claim 25 wherein the charge storage device is a capacitor.
27. A semiconductor construction, comprising: a semiconductor base; a gate extending into the base; a region of the base on one side of the gate being a conductively-doped source region, and a region of the base on an opposing side of the gate relative to said one side being a conductively-doped drain region; the drain region being more heavily doped than the source region; a gate dielectric comprising a first segment between the source region and the gate, a second segment between the drain region and the gate, and a third segment between the first and second segments; wherein the gate dielectric, along a cross-section, is configured as an upwardly-opening container having the gate therein; wherein the first segment of the gate dielectric comprises a first substantially vertical leg of the container, wherein the second segment of the gate dielectric comprises a second substantially vertical leg of the container, and wherein the third segment of the gate dielectric comprises a bottom of the container; the gate dielectric comprising non-ferroelectric material directly against ferroelectric material, with the non-ferroelectric material being a boundary of the container directly against the semiconductor base; and wherein the non-ferroelectric material is thicker along the bottom of the container than along the first and second substantially vertical legs of the container.

28. The semiconductor construction of claim 27 wherein the non-ferroelectric material comprises one or both of silicon dioxide and silicon nitride.

29. The semiconductor construction of claim 27 further comprising a capacitor electrically coupled to the drain region.

30. The semiconductor construction of claim 27 wherein the source region comprises a dopant gradient in which dopant concentration is lighter in a location relatively deep within the source region as compared to a location relatively shallow within the source region.
DESCRIPTION

TRANSISTORS, MEMORY CELLS AND SEMICONDUCTOR CONSTRUCTIONS

TECHNICAL FIELD

Transistors, memory cells and semiconductor constructions.

BACKGROUND

Memory is one type of integrated circuitry, and is used in computer systems for storing data. Integrated memory is usually fabricated in one or more arrays of individual memory cells. The memory cells may be volatile, semi-volatile, or nonvolatile. Nonvolatile memory cells can store data for extended periods of time, and in some instances can store data in the absence of power. Volatile memory dissipates, and is therefore refreshed/rewritten to maintain data storage. The memory cells are configured to retain or store information in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two selectable states of information.

Dynamic random access memory (DRAM) is one type of memory, and is utilized in numerous electronic systems. A DRAM cell may comprise a transistor in combination with a charge storage device (for instance, a capacitor). DRAM has an advantage of having rapid read/write; but has disadvantages of being highly volatile (often requiring refresh several hundred times per second) and of being erased in the event of power loss. It is desired to develop improved memory devices.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagrammatic cross-sectional view of a portion of a semiconductor construction illustrating an example embodiment transistor incorporated into an example embodiment memory cell.

FIG. 2 diagrammatically illustrates the memory cell of FIG. 1 in two different example memory states.

FIGS. 3-7 diagrammatically illustrate example embodiment transistors incorporated into example embodiment memory cells.

FIG. 8 illustrates another embodiment memory cell comprising the example embodiment transistor of FIG. 1.
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS

Some embodiments include transistors which comprise ferroelectric material incorporated into gate dielectric. In some embodiments, such transistors may be incorporated into memory cells. Example embodiments are described with reference to FIGS. 1-8.

Referring to FIG. 1, an example embodiment memory cell 40 is illustrated as part of a semiconductor construction 10. The construction 10 includes a base 12. The base 12 may comprise semiconductor material, and in some embodiments may comprise, consist essentially of, or consist of monocrystalline silicon. In some embodiments, base 12 may be considered to comprise a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrates described above. In some embodiments, base 12 may correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Some of the materials may be under the shown region of base 12 and/or may be laterally adjacent the shown region of base 12; and may correspond to, for example, one or more of refractory metal materials, barrier materials, diffusion materials, insulator materials, etc.

A transistor gate 14 extends into base 12. The transistor gate comprises gate material 16.
Such gate material may be any suitable composition or combination of compositions; and in some embodiments may comprise, consist essentially of, or consist of one or more of various metals (for example, tungsten, titanium, etc.), metal-containing compositions (for instance, metal nitride, metal carbide, metal silicide, etc.), and conductively-doped semiconductor materials (for instance, conductively-doped silicon, conductively-doped germanium, etc.). In some example embodiments, the gate material 16 may comprise, consist essentially of, or consist of one or more of titanium nitride, titanium aluminum nitride, tungsten nitride, copper and tantalum nitride.

Gate dielectric 18 is between the gate 14 and base 12. The gate dielectric is configured as an upwardly-opening container 24 along the cross-section of FIG. 1, and the gate 14 is within such container. The gate dielectric comprises two separate materials 20 and 22 in the embodiment of FIG. 1, which may be referred to as a first material and a second material, respectively. The first material 20 forms an outer boundary of the container 24, and is directly against the semiconductor base 12. The second material 22 is between the first material 20 and the gate 14.

In some embodiments, the first material 20 is a non-ferroelectric material, and the second material is a ferroelectric material. In such embodiments, the first material 20 may comprise, consist essentially of, or consist of one or both of silicon dioxide and silicon nitride; and the second material 22 may comprise, consist essentially of, or consist of one or more of yttrium-doped zirconium oxide, yttrium-doped hafnium oxide, magnesium-doped zirconium oxide, magnesium-doped hafnium oxide, silicon-doped hafnium oxide, silicon-doped zirconium oxide and barium-doped titanium oxide.
Accordingly, in some embodiments the first material 20 may comprise one or more of silicon, nitrogen and oxygen; and the second material 22 may comprise one or more of Hf, Zr, Si, O, Y, Ba, Mg and Ti. In some embodiments, the ferroelectric material 22 may have a thickness within a range of from about 10 angstroms to about 200 angstroms, and the non-ferroelectric material 20 may have a thickness within a range of from about 10 angstroms to about 20 angstroms. Construction 10 comprises a conductively-doped source region 26 extending into base 12, and a conductively-doped drain region 28 extending into the base. Lower boundaries of the source and drain regions are diagrammatically illustrated with dashed lines. The source and drain regions are both adjacent to gate 14, and are spaced from the gate by the gate dielectric 18. The source and drain regions are spaced from one another by a channel region 30 that extends under the gate 14. In some embodiments, the source region 26 may be referred to as a first region of the base adjacent to the gate 14, and the drain region 28 may be referred to as a second region of the base adjacent to the gate. Such first and second regions of the base are spaced from one another by an intervening region of the base comprising the channel region 30. The gate dielectric 18 may be considered to comprise a first segment 23 between the source region 26 and the gate 14, a second segment 25 between the drain region 28 and the gate 14, and a third segment 27 between the first and second segments. In some embodiments, the segment 23 may be considered to correspond to a first substantially vertical leg of container 24, the segment 25 may be considered to correspond to a second substantially vertical leg of the container, and the segment 27 may be considered to comprise a bottom of the container. In the shown embodiment, all of the first, second and third segments (23, 25 and 27) of gate dielectric 18 comprise ferroelectric material 22. 
In other embodiments (some of which are discussed below with reference to FIGS. 4-6), the ferroelectric material 22 may be omitted from one or more of such segments. In some embodiments, the non-ferroelectric material 20 provides a barrier between ferroelectric material 22 and base 12 to avoid undesired diffusion of constituents between the ferroelectric material and the base and/or to avoid undesired reaction or other interaction between the ferroelectric material and the base. In such embodiments, the non-ferroelectric material 20 may be provided entirely along an outer edge of the gate dielectric (as shown) to form a boundary of the container 24 against the semiconductor base 12 (with source and drain regions 26 and 28 being considered to be part of the base). In some embodiments, diffusion and/or other interactions are not problematic relative to the ferroelectric material 22 even in the absence of at least some of the non-ferroelectric material, and accordingly some or all of the non-ferroelectric material 20 may be omitted from one or more of the segments 23, 25 and 27. In the shown embodiment, the non-ferroelectric material 20 has a substantially consistent thickness along an entirety of container 24. In other embodiments (one of which is discussed below with reference to FIG. 7), the non-ferroelectric material 20 may have a different thickness along one region of container 24 as compared to another region. In the shown embodiment, source region 26 is electrically coupled to circuitry 32, drain region 28 is electrically coupled to circuitry 34, and gate 14 is electrically coupled to circuitry 36. A transistor 38 comprises the gate 14 together with the source/drain regions 26 and 28, and such transistor is incorporated into an integrated circuit through circuitry 32, 34 and 36. Although the embodiment of FIG. 1 utilizes transistor 38 as part of a memory cell 40, in other embodiments the transistor 38 may be utilized in other applications.
For instance, transistor 38 may be utilized in logic or other circuitry in place of a conventional transistor. The ferroelectric material 22 of gate dielectric 18 may be polarized into either of two stable orientations, which may enable two selectable states of memory cell 40. Example memory states are shown in FIG. 2, with the memory states being labeled as "MEMORY STATE 1" and "MEMORY STATE 2". The illustrated memory cell of FIG. 2 has n-type doped source and drain regions 26 and 28, and a p-type doped channel region. In other embodiments, the source and drain regions may be p-type doped and the channel region may be n-type doped. MEMORY STATE 1 and MEMORY STATE 2 differ from one another relative to the orientation of charge within ferroelectric material 22. Such charge orientation is diagrammatically illustrated with "+" and "-" in the diagrammatic illustrations of FIG. 2. Specifically, the memory states of FIG. 2 are shown to differ from one another relative to charge polarization within ferroelectric material 22. A double-headed arrow 41 is provided in FIG. 2 to diagrammatically illustrate that the memory cell 40 may be reversibly transitioned between the shown memory states. In the shown embodiment, the polarization change within ferroelectric material 22 specifically occurs within the region 23 between gate 14 and source region 26 (the polarization change may also occur in other regions, such as adjacent the channel in some embodiments; or may occur only in the region 23 as shown in FIG. 2). The MEMORY STATE 1 comprises a "+" component of the polarized ferroelectric material along the n-type doped source region 26, and the MEMORY STATE 2 comprises a "-" component of the polarized ferroelectric material along the n-type doped source region 26. 
The "-" component of the ferroelectric material is shown to induce a depletion region 42 within the n-type doped source region 26 (a boundary of the depletion region is diagrammatically illustrated with the dashed line 43). In the illustrated embodiment, the depletion region 42 is deep within the source region 26, and specifically is along a portion of the source region that interfaces with channel region 30. The transistor 38 may have an increased effective channel length relative to an analogous transistor lacking the depletion region, which may reduce short channel effects and thereby improve scalability of the memory cell for higher levels of integration. In the shown embodiment, the non-ferroelectric material 20 is between ferroelectric material 22 and source region 26, and accordingly the depletion region 42 is spaced from the ferroelectric material 22 by a segment of non-ferroelectric material 20. In other embodiments, the non-ferroelectric material 20 may be omitted, and the depletion region 42 may directly contact the ferroelectric material 22. The memory cell 40 of FIG. 2 may have advantages of being substantially nonvolatile, and of retaining stored information in the absence of power. The memory cell 40 may be programmed with any suitable operation, and in some example embodiments may be programmed utilizing voltage differentials between gate 14 and source 26 of less than or equal to about 10 volts; in some example embodiments utilizing voltage differentials of less than or equal to about 5 volts; and in some example embodiments utilizing voltage differentials of from about 0.5 volts to about 5 volts. The dopant concentrations utilized within source region 26 and drain region 28 may be any suitable dopant concentrations. In some embodiments, the drain region may be more heavily doped than at least some of the source region; and in some embodiments the entirety of the drain region may be more heavily doped than any portion of the source region. 
In some embodiments, relatively heavy doping of the drain region alleviates influence of ferroelectric polarization on operation of the drain side of transistor 38, while relatively light doping of at least some of the source region enables the influence of the ferroelectric polarization on the source side of the transistor to be enhanced relative to the influence that would occur with heavier doping of the source region. The terms "relatively heavy doping" and "relatively light doping" are utilized with reference to one another, and thus the term "relatively heavy doping" means doping heavier than the doping indicated by the term "relatively light doping". In some embodiments the drain region 28 may be n-type doped, and some or all of the drain region may comprise a dopant concentration of at least about 1×10²⁰ atoms/centimeter³; such as, for example, a dopant concentration within a range of from about 1×10¹⁸ atoms/centimeter³ to about 1×10²⁰ atoms/centimeter³. In some embodiments the source region 26 may be n-type doped, and at least some of the source region may comprise a dopant concentration of less than about 1×10²⁰ atoms/centimeter³; such as, for example, a dopant concentration within a range of from about 1×10¹⁶ atoms/centimeter³ to about 1×10¹⁹ atoms/centimeter³. In some embodiments, the source region 26 may comprise a gradient of dopant concentration, with the dopant concentration being lighter at deeper locations of the source region as compared to shallower locations of the source region. FIG. 3 shows a construction 10a illustrating an example embodiment memory cell 40a having decreasing dopant concentration with increasing depth in the source region (the dopant concentration is illustrated as [DOPANT]). The construction of FIG. 3 advantageously may comprise the lighter dopant concentration within the source region at a location where the depletion region 42 forms during programming of a memory state analogous to the MEMORY STATE 2 of FIG. 2. The example embodiment memory cell 40 shown in FIG. 1 comprises both ferroelectric material 22 and non-ferroelectric material 20 within all of the segments 23, 25 and 27 of dielectric material 18. FIG. 4 shows an alternative example embodiment memory cell 40b having ferroelectric material 22 only within segment 23. The memory cell 40b is part of a construction 10b, and comprises a transistor 38b containing gate dielectric 18b. The gate dielectric 18b comprises the non-ferroelectric material 20 between ferroelectric material 22 and source region 26, and comprises additional non-ferroelectric material 50 throughout the segments 25 and 27 (i.e., the segments along drain region 28 and channel region 30). The non-ferroelectric material 50 may comprise any suitable composition or combination of compositions. In some embodiments, the non-ferroelectric material 50 may comprise a same composition as non-ferroelectric material 20, and in other embodiments may comprise a different composition than non-ferroelectric material 20. In some embodiments, non-ferroelectric material 50 may comprise, consist essentially of, or consist of one or both of silicon dioxide and silicon nitride. The memory cell 40b of FIG. 4, like the above-discussed embodiment of FIG. 1, comprises non-ferroelectric material entirely along an interface of the source region 26 and the gate dielectric, and entirely along an interface of the drain region 28 and the gate dielectric. FIG. 5 shows a memory cell analogous to that of FIG. 4, but in which an interface of the gate dielectric with the source region comprises ferroelectric material. Specifically, FIG. 5 shows a construction 10c comprising a memory cell 40c having a transistor 38c with gate dielectric 18c.
The gate dielectric 18c comprises ferroelectric material 22 and non-ferroelectric material 50. The ferroelectric material 22 directly contacts both the source region 26 and the gate 14. In the shown embodiment, a portion of the segment of the gate dielectric between the source region and the transistor gate (i.e., a portion of the segment 23 of the gate dielectric) consists of ferroelectric material, and the remainder of the gate dielectric (i.e., the remainder of segment 23, together with segments 25 and 27) consists of non-ferroelectric material. In the shown embodiment, only a portion of an interface between the gate dielectric 18c and the source region 26 consists of ferroelectric material 22. In other embodiments, an entirety of the interface between the gate dielectric and the source region may consist of the ferroelectric material. FIG. 6 shows a construction 10d illustrating another example embodiment memory cell 40d. The memory cell comprises a transistor 38d having gate dielectric 18d. The gate dielectric comprises non-ferroelectric material 50 throughout the entirety of the segment between the source region 26 and the gate 14 (i.e., the segment 23), and throughout the entirety of the segment between the drain region 28 and the gate 14 (i.e., the segment 25). The gate dielectric further comprises ferroelectric material 22 within at least some of the segment along the channel region 30 (i.e., the segment 27). Such may enable selective coupling of the ferroelectric material with the channel region, exclusive of coupling between the ferroelectric material and the source region and/or drain region, which may enable operational characteristics of the memory cell to be tailored for particular applications.
Further, if transistor 38d is utilized in place of a conventional transistor in an integrated circuit application other than as a part of a memory cell, the selective coupling to the channel region may enable operational aspects of such transistor to be tailored for specific applications. The embodiment of FIG. 6 shows the non-ferroelectric material 20 provided between ferroelectric material 22 and base 12. In other embodiments, the non-ferroelectric material 20 may be omitted so that ferroelectric material 22 directly contacts base 12. Another example embodiment memory cell 40e is shown in FIG. 7 as part of a construction 10e comprising a transistor 38e with gate dielectric 18e. The memory cell 40e of FIG. 7 is similar to the memory cell 40 of FIG. 1, in that the memory cell 40e comprises both the non-ferroelectric material 20 and the ferroelectric material 22 within all of the segments 23, 25 and 27 of the gate dielectric. However, unlike the embodiment of FIG. 1, that of FIG. 7 has the non-ferroelectric material 20 thicker within the segment 27 (i.e., along the bottom of the container 24 defined by the gate dielectric) than within the segments 23 and 25 (i.e., along the substantially vertical legs of the container 24 defined by the gate dielectric). Such can alleviate or eliminate coupling between the ferroelectric material 22 and the channel 30, which may be desired in some embodiments. In some embodiments, the non-ferroelectric material 20 may have a thickness within segments 23 and 25 within a range of from about 10 angstroms to about 20 angstroms, and may have a thickness along the bottom of container 24 within a range of from about 25 angstroms to about 50 angstroms. In some embodiments, the memory cells described above may comprise DRAM-type cells.
For instance, the circuitry 34 may correspond to a charge-storage device (such as, for example, a capacitor), the circuitry 32 may include an access/sense line (such as, for example, a bitline), and the circuitry 36 may include a wordline that extends in and out of the page relative to the cross-sections of FIGS. 1-7. FIG. 8 shows a construction 10f comprising the transistor 38 of FIG. 1 incorporated into a DRAM-type memory cell. The DRAM-type cell of FIG. 8 may be, in a sense, considered to include both a volatile memory storage component (the capacitor 70, with such component storing data by utilizing different charge states of the capacitor as different memory states) and a nonvolatile memory storage component (the transistor 38, with such component storing data by utilizing different polarization orientations of ferroelectric material 22 as different memory states, as discussed above with reference to FIG. 2). The volatile memory storage component may have rapid read/write characteristics analogous to those of a conventional DRAM, and the nonvolatile memory storage component may enable the cell to have capabilities beyond those of conventional DRAM. For instance, in some embodiments the cell may be configured so that the nonvolatile memory storage component backs up information from the volatile memory storage component so that the information is stable in the event of power failure. As another example, in some embodiments the cell may be configured so that the nonvolatile memory storage component is utilized for operations separate from those conducted by the volatile memory storage component and/or for operations that modify or overlap those of the volatile memory storage component. Such may enable a DRAM array comprising memory cells 80 of the type shown in FIG.
8 to perform operations that would otherwise comprise both logic and memory aspects of conventional integrated circuitry, which may enable a DRAM array comprising memory cells 40 of the type shown in FIG. 8 to be scaled to higher levels of integration than may be achieved with conventional DRAM circuitry. The devices discussed above may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application- specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc. The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The description provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings, or are rotated relative to such orientation. The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections in order to simplify the drawings. When a structure is referred to above as being "on" or "against" another structure, it can be directly on the other structure or intervening structures may also be present. In contrast, when a structure is referred to as being "directly on" or "directly against" another structure, there are no intervening structures present. 
When a structure is referred to as being "connected" or "coupled" to another structure, it can be directly connected or coupled to the other structure, or intervening structures may be present. In contrast, when a structure is referred to as being "directly connected" or "directly coupled" to another structure, there are no intervening structures present. In some embodiments, a semiconductor construction includes a semiconductor base and a gate extending into the base. A first region of the base adjacent the gate is a conductively-doped source region, and a second region of the base adjacent the gate and spaced from the first region is a conductively-doped drain region. A gate dielectric comprises a first segment between the source region and the gate, a second segment between the drain region and the gate, and a third segment between the first and second segments. At least a portion of the gate dielectric comprises ferroelectric material. In some embodiments, a transistor comprises a gate, a source region, a drain region, and a channel region between the source and drain regions. The transistor also comprises a gate dielectric between the gate and the source, drain and channel regions. The gate dielectric comprises ferroelectric material between the source region and the gate. In some embodiments, a semiconductor construction comprises a semiconductor base and a gate extending into the base. A region of the base on one side of the gate is a conductively-doped source region, and a region of the base on an opposing side of the gate relative to said one side is a conductively-doped drain region. The drain region is more heavily doped than the source region. The construction includes gate dielectric which comprises a first segment between the source region and the gate, a second segment between the drain region and the gate, and a third segment between the first and second segments. 
The gate dielectric, along a cross-section, is configured as an upwardly- opening container having the gate therein. The first segment of the gate dielectric comprises a first substantially vertical leg of the container. The second segment of the gate dielectric comprises a second substantially vertical leg of the container. The third segment of the gate dielectric comprises a bottom of the container. The gate dielectric comprises non-ferroelectric material directly against ferroelectric material, with the non-ferroelectric material being a boundary of the container directly against the semiconductor base. The non-ferroelectric material is thicker along the bottom of the container than along the first and second substantially vertical legs of the container.
The invention relates to finger identification of new touch gestures using a capacitive hover mode. Methods and apparatus to detect touch input gestures are disclosed. An example apparatus includes a touch sensitive display, a touch sensor to detect touches and hovers associated with the touch sensitive display, and a gesture handler including: an identifier to identify fingers associated with the touches and hovers, and a gesture detector to determine a gesture associated with the touches and hovers and determine an action associated with the gesture and the identified fingers.
1. A device for triggering an action based on a gesture, the device comprising: a touch sensitive display; a touch sensor for detecting touch and hover associated with the touch sensitive display; and a gesture processor comprising: an identifier for identifying a finger associated with the touch and hover; and a gesture detector for determining a gesture associated with the touch and hover and determining an action associated with the gesture and the identified finger.
2. The device of claim 1, wherein the gesture processor includes a system interface for communicating the action to an operating system of the device.
3. The device of claim 1 or claim 2, wherein the gesture detector determines a first action associated with the gesture when a first finger is identified for the gesture, and determines a second action associated with the gesture when a second finger is identified for the gesture.
4. The device of claim 3, wherein the first action is a left mouse click and the second action is a right mouse click.
5. The device of claim 3, wherein the first action is to draw with a first color and the second action is to draw with a second color.
6. The device of claim 3, wherein the first action is to open an application on a first screen and the second action is to open the application on a second screen.
7. The device of claim 3, wherein the first action is to change a first setting of the system and the second action is to change a second setting of the system.
8. A method for triggering an action based on a gesture, the method comprising: detecting touch and hover associated with a touch sensitive display; identifying a finger associated with the touch and hover; determining a gesture associated with the touch and hover; and determining an action associated with the gesture and the identified finger.
9. The method of claim 8, further comprising communicating the action to an operating system of the machine.
10. The method of claim 8 or claim 9, further comprising determining a first action associated with the gesture when a first finger is identified for the gesture, and determining a second action associated with the gesture when a second finger is identified for the gesture.
11. The method of claim 10, wherein the first action is a left mouse click and the second action is a right mouse click.
12. The method of claim 10, wherein the first action is to draw with a first color and the second action is to draw with a second color.
13. The method of claim 10, wherein the first action is to open an application on a first screen and the second action is to open the application on a second screen.
14. The method of claim 10, wherein the first action is to change a first setting of the system and the second action is to change a second setting of the system.
15. A device for triggering an action based on a gesture, the device comprising: an identifier for identifying a finger associated with touch and hover, the touch and hover being associated with a touch sensitive display; and a gesture detector for determining a gesture associated with the touch and hover and determining an action associated with the gesture and the identified finger.
16. The device of claim 15, further comprising a system interface for communicating the action to an operating system of the device.
17. The device of claim 15 or claim 16, wherein the gesture detector determines a first action associated with the gesture when a first finger is identified for the gesture, and determines a second action associated with the gesture when a second finger is identified for the gesture.
18. The device of claim 17, wherein the first action is a left mouse click and the second action is a right mouse click.
19. The device of claim 17, wherein the first action is to draw with a first color and the second action is to draw with a second color.
20. The device of claim 17, wherein the first action is to open an application on a first screen and the second action is to open the application on a second screen.
21. The device of claim 17, wherein the first action is to change a first setting of the system and the second action is to change a second setting of the system.
22. A system for triggering an action based on a gesture, the system comprising: a touch sensitive display; an operating system associated with an executing application; a touch sensor for detecting touch and hover associated with the touch sensitive display; and a gesture processor comprising: an identifier for identifying a finger associated with the touch and hover; and a gesture detector for determining a gesture associated with the touch and hover and determining, for the operating system, an action associated with the gesture and the identified finger.
23. A computer readable medium comprising instructions that, when executed, cause a machine to perform the method of any one of claims 8-14.
Finger identification of new touch gestures using capacitive hover mode

TECHNICAL FIELD

The present disclosure relates generally to touch input and, more particularly, to methods and apparatus for detecting touch input gestures.

BACKGROUND

In recent years, the quality and popularity of touch input devices such as touch sensing displays have increased. For example, many popular computing devices (such as laptops, desktop computers, tablets, smart phones, etc.) have been implemented with touch input devices to receive user input via a touch (e.g., via a finger touching the display). Some such touch input devices are capable of sensing multiple touch inputs (e.g., a two-finger input gesture). Additionally or alternatively, certain touch input devices are capable of detecting a touch input before the touch input makes contact with the touch input device and/or while the touch input is not in contact with the touch input device. This type of detection is often referred to as hover detection (e.g., detecting a finger hovering over and/or approaching the touch input device).

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of an example touch input device. FIG. 2 is a block diagram of an example implementation of the gesture processor of FIG. 1. FIGS. 3-4 are flow diagrams showing machine readable instructions that can be executed to implement an example gesture detector. FIG. 5 is a block diagram of an example processor platform capable of executing the instructions of FIGS. 3-4 to implement a gesture detector. The drawings are not to scale. Wherever possible, the same reference numerals will be used throughout the drawings and accompanying written description to refer to the same or like parts.

DETAILED DESCRIPTION

The methods and apparatus disclosed herein utilize hover detection and/or touch input detection to identify a finger or fingers that perform a touch input gesture on a touch input device. For example, the disclosed methods and apparatus determine which of the five fingers of an example hand has been in contact with the touch input device.
As disclosed herein, the finger(s) are identified by detecting the fingers that are in contact with the touch input device and the fingers hovering over the touch input device. For example, the pattern of finger positions is analyzed (e.g., detecting four hovering fingers and one finger in contact with the touch input device, along with the relative positions of the five fingers) to identify the particular finger of the hand and/or to detect which hand is utilized (e.g., left hand and/or right hand). The disclosed methods and apparatus trigger a finger-specific action based on the identified finger(s). For example, touching a button with the index finger can trigger a different action than touching the button with the thumb. For the sake of clarity, counting from the thumb, the fingers of the hand will be referred to as fingers 1 through 5. In some of the disclosed examples, different resulting actions are assigned to gestures performed using different fingers. In some examples, a pinch-in is performed using finger 1 and finger 2 to cause magnification, a pinch-out is performed using finger 1 and finger 2 to cause zooming out, a pinch-in is performed using finger 1 and finger 3 to cause an application to be minimized, and a pinch-out using finger 1 and finger 3 causes the application to be maximized. In some examples, tapping the screen with finger 2 triggers a left-click action (e.g., the same action as clicking the left button of a mouse) and tapping the screen with finger 3 triggers a right-click action. In some examples, in applications that support drawing, scribbling, highlighting, handwriting, etc., different fingers may be associated with different colors (e.g., a drag using finger 2 creates a red line and a drag using finger 3 creates a blue line), different line formats (e.g., line width, dashed line versus solid line, etc.), different drawing tools, and so on. In some examples, multiple screens can be connected, and a finger flick on an icon or widget can cause the program to open on a different screen in the flick direction. Another finger can be used to send the program or data to the recycle bin. In some examples, touching the screen with different fingers (e.g., increasing from finger 1 to 5 or decreasing from finger 5 to 1, or any increasing or decreasing subset) may trigger an increase or decrease in a value (e.g., increasing/decreasing system settings (such as volume or brightness), incrementing/decrementing numbers, etc.). For example, a single tap with finger 2 of the right hand can increase the volume by 5 units, a tap with finger 2 of the left hand can increase the brightness by 5 units, a tap with finger 3 on either hand can increase the corresponding attribute by 10 units, and so on. The identification of particular example fingers throughout the disclosure is used to provide examples and is not intended to limit gestures to particular fingers unless a particular finger is identified in the claims. The disclosed gestures can be associated with any particular finger and/or combination of fingers. FIG. 1 is a block diagram of an example touch input device 102. According to the illustrated example, the touch input device 102 is a tablet computing device. Alternatively, the touch input device 102 can be any type of device that supports touch input (e.g., a laptop computer, desktop computer monitor, smart phone, kiosk display, smart whiteboard, etc.).
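The finger-specific bindings described above amount to a lookup from a (gesture, finger) combination to an action. The following is a minimal illustrative sketch of such a table; the gesture names, finger numbering (1 = thumb through 5 = little finger), and action strings are assumptions chosen for illustration, not the patent's actual implementation.

```python
# Hypothetical (gesture, finger(s)) -> action table, mirroring the examples
# in the text: taps with finger 2 vs. 3 map to left/right click, pinches
# with different finger pairs map to zoom and window actions, and drags
# with different fingers map to different drawing colors.
ACTION_TABLE = {
    ("tap", 2): "left_click",
    ("tap", 3): "right_click",
    ("pinch_in", (1, 2)): "zoom_in",
    ("pinch_out", (1, 2)): "zoom_out",
    ("pinch_in", (1, 3)): "minimize_app",
    ("pinch_out", (1, 3)): "maximize_app",
    ("drag", 2): "draw_red_line",
    ("drag", 3): "draw_blue_line",
}

def action_for(gesture, fingers):
    """Return the action bound to this gesture/finger combination, or None."""
    return ACTION_TABLE.get((gesture, fingers))

print(action_for("tap", 2))            # tap with the index finger
print(action_for("pinch_in", (1, 3)))  # pinch with thumb and middle finger
```

A real gesture handler would populate such a table from user or application configuration rather than hard-coding it, and would fall back to a finger-independent default when no finger-specific binding exists.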
The example touch input device 102 includes an example touch-sensitive display 104, an example touch sensor 106, an example gesture processor 108, and an example operating system 110.

The example touch-sensitive display 104 is a display coupled to capacitive touch sensing circuitry to detect a touch (e.g., an input such as a finger in contact with the touch-sensitive display 104) and a hover (e.g., an input such as a finger adjacent to, but not in contact with, the touch-sensitive display 104). Alternatively, any other type of display and/or touch sensing that can detect touch and hover can be utilized. The touch circuitry of the example touch-sensitive display 104 is communicatively coupled to the touch sensor 106.

The example touch sensor 106 processes signals from the touch circuitry to determine characteristics of touches and hovers. For example, the touch sensor 106 determines the size of a touch and/or hover (e.g., its footprint on the touch-sensitive display 104), the location of the touch/hover within the boundaries of the touch-sensitive display 104, and the intensity of the touch/hover (e.g., how forceful the touch on the touch-sensitive display 104 is, how close the hover is to the touch-sensitive display 104, etc.). The touch sensor 106 communicates the characteristics of the touch/hover to the example gesture processor 108.

The gesture processor 108 of the illustrated example analyzes the characteristics of the touches/hovers received from the example touch sensor 106 over time to detect a gesture and trigger an action associated with the gesture. In particular, the example gesture processor 108 analyzes the characteristics of the touches/hovers to identify the finger(s) performing the touch/gesture and triggers an action associated with the combination of the gesture and the finger(s). Further details regarding the triggering of action(s) are described in conjunction with FIG.
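The per-region characteristics that the touch sensor reports (size, location, intensity) can be sketched as a small data record. The field names and the touch threshold below are illustrative assumptions, not values from the disclosure:

```python
from dataclasses import dataclass

# Hypothetical record of one touch/hover region as the sensor might
# report it to the gesture processor: footprint, location, and intensity.
@dataclass
class TouchRegion:
    x: float            # location within the display boundaries
    y: float
    size: float         # footprint area on the display
    intensity: float    # capacitive disturbance; a touch reads stronger than a hover

    def is_touch(self, touch_threshold: float = 0.8) -> bool:
        """Treat a region above the threshold as a contact, otherwise a hover."""
        return self.intensity >= touch_threshold
```

A real sensor would derive the intensity from the strength of the interruption of the capacitive field; the threshold here simply separates contacts from hovers for illustration.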
The example gesture processor 108 communicates an indication of the action to be performed to the example operating system 110.

The example operating system 110 is executing software and/or circuitry that interfaces software executing at the touch input device 102 with the hardware of the touch input device 102 and/or with other software executing on the touch input device 102. The actions triggered by the example gesture processor 108 are passed to a particular application (e.g., if the gesture is associated with a particular application) and/or are processed by the operating system 110 (e.g., if the gesture is associated with the operating system 110 or is otherwise not related to an application).

For purposes of description, FIG. 1 includes a displayed button 120. The example button 120 is representative of an element that can be displayed on the touch-sensitive display 104. Alternatively, when the operating system is operating at the touch input device 102, the displayed button 120 can be replaced with any number of displayed elements. Also for purposes of description, FIG. 1 includes an outline of the touch input that can be detected by the touch sensor 106 when a user touches the touch-sensitive display 104 with a right hand. As illustrated in the example, touch area 130 corresponds to finger 1 of the right hand, touch area 132 corresponds to finger 2 of the right hand, touch area 134 corresponds to finger 3 of the right hand, touch area 136 corresponds to finger 4 of the right hand, and touch area 138 corresponds to finger 5 of the right hand. According to the illustrated example, finger 2 is touching the touch-sensitive display 104 to create the second touch area 132 while fingers 1, 3, 4, and 5 are hovering over the touch-sensitive display 104 to create the first touch area 130, the third touch area 134, the fourth touch area 136, and the fifth touch area 138.

FIG. 2 is a block diagram of an example implementation of the gesture processor 108 of FIG. 1.
The example gesture processor 108 includes an example sensor interface 202, an example trainer 204, an example training data store 206, an example identifier 208, an example gesture detector 210, an example gesture data store 212, and an example system interface 214.

The example sensor interface 202 interfaces with the example touch sensor 106 to receive information regarding touches and/or hovers on the example touch-sensitive display 104. The example sensor interface 202 transmits the information about touches/hovers to the example trainer 204 and/or the example identifier 208.

The example trainer 204 collects information about touches/hovers to train a model or other identification tool to improve the ability of the gesture processor 108 to identify the finger(s) for a touch/hover on the touch-sensitive display 104. The example trainer 204 stores training data (e.g., trained models) in the example training data store 206. For example, the trainer 204 can prompt the user (e.g., present a display that asks the user to place particular finger(s) over and/or on the touch-sensitive display 104) and can record the touch information and/or finger identification(s) from the identifier 208. The recorded information can be used to train models, identifiers, etc. (e.g., machine learning models) that are transmitted to the identifier 208 for identifying the finger(s).

The example training data store 206 is a database for storing training/identification data. Alternatively, the training data store 206 can be any other type of data store (e.g., a file, a collection of files, a hard drive, a memory, etc.).

The example identifier 208 identifies the finger(s) associated with a touch/hover. In accordance with the illustrated example, the identifier 208 identifies the finger(s) associated with a touch/hover by analyzing the relative positions of all detected touches/hovers.
For example, during a touch, when a single hand is above the display, five fingers can be identified based on the relative positions of the five resulting touches/hovers. The thumb can be identified by the relative position of its touch/hover with respect to the four fingers. Additionally or alternatively, the finger(s) may be identified using a model based on local training or preset training data. The identifier 208 additionally determines whether each finger is touching or hovering. For example, the identifier 208 can determine that finger 2 is touching the display because the touch intensity of finger 2 is the strongest (e.g., it creates the strongest interruption of the capacitive field of the touch-sensitive display 104). The example identifier 208 transmits the identification of the finger(s) and the finger states (e.g., touch, hover, etc.) to the example gesture detector 210.

The example gesture detector 210 analyzes the touch/hover data received from the identifier 208 to detect a gesture. As used herein, a gesture is any action that can be performed by touch/hover. For example, a gesture can be a single touch/tap, a double touch/tap, a swipe, a pinch, a drag, and the like. Thus, the gesture detector 210 can analyze multiple touches/hovers and/or touches/hovers over a period of time. Once the gesture detector 210 identifies a gesture, the gesture detector 210 determines the action associated with the gesture based on the finger(s) used for the gesture.

The example gesture detector 210 queries the example gesture data store 212, which has information about gestures (e.g., the finger(s), the gesture type, and/or the gesture target (e.g., the application targeted by the gesture)). According to the illustrated example, the action associated with a gesture depends on the finger(s) used for the gesture. For example, a first action may be performed for a gesture performed using finger 1, and a second action may be performed for the same gesture performed using finger 2.
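The identification step described above can be sketched minimally, under two stated assumptions that are illustrations rather than the patent's method: the five regions of a right hand are ordered left to right with the thumb leftmost, and the touching finger is the one producing the strongest capacitive disturbance. A real identifier would use a trained model instead.

```python
# Hypothetical sketch of the identifier's logic. Each region is a tuple
# (x, y, intensity). Fingers are labeled 1-5 counting from the thumb.
def identify_right_hand(regions):
    ordered = sorted(regions, key=lambda r: r[0])          # thumb assumed leftmost
    labeled = {finger: r for finger, r in enumerate(ordered, start=1)}
    touching = max(labeled, key=lambda f: labeled[f][2])   # strongest region = touch
    return {f: ("touch" if f == touching else "hover") for f in labeled}

# Five regions of a right hand; the second region (finger 2) reads strongest,
# so it is reported as the touching finger and the rest as hovering.
regions = [(0.0, 5, 0.4), (2.0, 9, 0.9), (3.5, 10, 0.3), (5.0, 9, 0.2), (6.5, 7, 0.2)]
```

The per-finger touch/hover states returned here correspond to what the identifier 208 is described as transmitting to the gesture detector 210.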
For example, depending on the finger(s) used, the same gesture (e.g., a tap on a button) can trigger a different action (e.g., tapping the button with finger 1 can trigger a forward movement on a form, while tapping with finger 2 can trigger a backward movement on the form). The action for a gesture may additionally depend on the target of the gesture (e.g., an application, a user interface component, etc.).

In some examples, a pinch-in performed using finger 1 and finger 2 causes zooming in, a pinch-out performed using finger 1 and finger 2 causes zooming out, a pinch-in performed using finger 1 and finger 3 causes an application to be minimized, and a pinch-out performed using finger 1 and finger 3 causes the application to be maximized. In some examples, tapping the screen with finger 2 triggers a left-click action (e.g., the same action as clicking the left button of a mouse) and tapping the screen with finger 3 triggers a right-click action. In some examples, in applications that support drawing, scribing, highlighting, handwriting, etc., different fingers may be associated with different colors (e.g., a drag of finger 2 creates a red line and a drag of finger 3 creates a blue line), different line formats (e.g., line width, dashed line versus solid line, etc.), different drawing tools, and the like. In some examples, multiple screens can be connected, and a flick of a finger on an icon or widget can cause the program to open on a different screen in the flick direction, while another finger can be used to send the program or data to the recycle bin. In some examples, touching the screen with different fingers (e.g., increasing from finger 1 to finger 5, decreasing from finger 5 to finger 1, or any subset) may trigger an increase or decrease in a value (e.g., increasing/decreasing a system setting such as volume or brightness, incrementing/decrementing a number, etc.). For example, a single tap with finger 2 of the right hand can increase the volume by 5 units, and a tap with finger 2 of the left hand can increase the brightness by 5 units.
A tap with finger 3 of either hand can increase the corresponding attribute by 10 units, and so on.

The gesture data store 212 of the illustrated example is a database of rules that associate gestures with actions. Alternatively, the gesture data store 212 can be any other type of data store (e.g., a file, a collection of files, a hard drive, a memory, etc.). The gesture data store 212 may alternatively or additionally store any other type of association of gestures and actions. For example, instead of rules, the associations of gestures and actions can be stored in a table, stored as settings, and the like.

The system interface 214 interfaces with the example operating system 110 to transmit the action(s) determined by the example gesture detector 210 to an application and/or the example operating system 110.

While an example manner of implementing the gesture processor 108 of FIG. 1 is illustrated in FIG. 2, one or more of the elements, processes, and/or devices illustrated in FIG. 2 may be combined, divided, re-arranged, omitted, eliminated, and/or implemented in any other way. Further, the example sensor interface 202, the example trainer 204, the example identifier 208, the example gesture detector 210, the example system interface 214, and/or, more generally, the example gesture processor 108 of FIG. 2 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example sensor interface 202, the example trainer 204, the example identifier 208, the example gesture detector 210, the example system interface 214, and/or, more generally, the example gesture processor 108 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example sensor interface 202, the example trainer 204, the example identifier 208, the example gesture detector 210, and/or the example system interface 214 is hereby expressly defined to include a non-transitory computer readable storage device or storage disk (such as a memory, a digital versatile disc (DVD), a compact disc (CD), a Blu-ray disc, etc.) including the software and/or firmware. Further still, the example gesture processor 108 of FIG. 1 may include one or more elements, processes, and/or devices in addition to, or instead of, those illustrated in FIG. 2, and/or may include more than one of any or all of the illustrated elements, processes, and/or devices.

FIGS. 3-4 show flowcharts representative of example machine readable instructions for implementing the gesture processor 108. In this example, the machine readable instructions comprise a program for execution by a processor, such as the processor 512 shown in the example processor platform 500 discussed below in connection with FIG. 5. Although the program can be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor 512, the entire program and/or portions thereof could alternatively be executed by a device other than the processor 512 and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated in FIGS. 3-4, many other methods of implementing the example gesture processor 108 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a comparator, an operational amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operations without executing software or firmware.

As mentioned above, the example processes of FIGS. 3-4 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM), and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. "Including" and "comprising" (and all forms and tenses thereof) are used herein as open-ended terms. Thus, whenever a claim lists anything following any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, etc.), it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim. As used herein, when the phrase "at least" is used as a transition term in a preamble of a claim, it is open-ended in the same manner as the terms "comprising" and "including" are open-ended.

The routine 300 of FIG. 3 begins when the example sensor interface 202 receives touch/hover data from the example touch sensor 106 (block 302).
The example identifier 208 detects a plurality of touch/hover regions (block 304). For example, the identifier 208 can determine that there are multiple discrete touch/hover regions included in the received touch/hover data. The example identifier 208 identifies the finger(s) associated with the plurality of touch/hover regions (block 306). The example identifier 208 also determines the strength of the identified touch/hover regions (block 308). For example, the identifier 208 can determine that one or more touches/hovers have a greater intensity than the other touches/hovers and are therefore the primary touch(es) performing a gesture. For example, the identifier 208 can determine the force of a touch, the distance of a hover from the touch-sensitive display 104, or any other characteristic or data indicative of such characteristics.

The example gesture detector 210 determines the gesture that has been performed (e.g., a swipe, a tap, a pinch, etc.) (block 310). The gesture detector 210 determines the identity of the finger(s) associated with the gesture (block 312). The gesture detector 210 may additionally consider other characteristics of the touch/hover. For example, the gesture detector 210 can analyze the identity of the finger(s) used for the gesture, the identity of the finger(s) not used for the gesture, the strength of the touch, the distance of the hover, and the like. For example, a gesture can include an action performed by finger(s) touching the touch-sensitive display 104 while other finger(s) have a hovering distance greater than (or less than) a threshold.
For example, a swipe of a first finger while a second finger (e.g., a neighboring finger) remains at greater than a threshold distance from the touch-sensitive display 104 may be a first gesture/action, while a swipe of the first finger while the second finger (e.g., the neighboring finger) remains at less than the threshold distance from the touch-sensitive display 104 may be a second gesture/action.

The gesture detector 210 determines whether there are any application-specific rules in the gesture data store 212 associated with the gesture and the application targeted by the gesture (block 314). When there is no application-specific rule, the gesture detector 210 transmits a system action associated with the gesture and the identity of the finger(s) performing the gesture to the operating system 110 via the system interface 214 (block 316). When there is an application-specific rule, the gesture detector 210 transmits an application-specific action associated with the gesture and the identity of the finger(s) performing the gesture to the operating system 110 via the system interface 214 (block 318).

The routine 400 of FIG. 4 can be executed to train the gesture processor 108 to identify the finger(s) associated with a gesture. The routine 400 begins when training is initiated. For example, training can be initiated at the user's request, can be initiated automatically, can be initiated when an incorrect identification is detected, and the like. The example trainer 204 prompts the user to touch and/or hover over the touch-sensitive display 104 in a particular manner (block 402). For example, the trainer 204 can prompt the user to touch the touch-sensitive display 104 with finger 2 of the right hand while hovering with fingers 1 and 3-5. When the user follows the instructions, the sensor interface 202 receives the touch/hover data (block 404). The trainer 204 then updates the training data in the training data store 206 (block 406).
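The rule lookup at blocks 314-318 of routine 300 can be sketched as follows. The two rule tables and all names are hypothetical illustrations of the described behavior, not the patent's implementation:

```python
# Hypothetical model of the gesture data store 212: a system-wide rule
# table and an application-specific rule table. An application-specific
# rule, when present, takes precedence (blocks 314-318).
SYSTEM_RULES = {("tap", 2): "left_click", ("tap", 3): "right_click"}
APP_RULES = {
    ("drawing_app", "tap", 2): "draw_red",
    ("drawing_app", "tap", 3): "draw_blue",
}

def dispatch(app: str, gesture: str, finger: int) -> str:
    """Prefer an application-specific action; otherwise fall back to the system action."""
    app_action = APP_RULES.get((app, gesture, finger))
    if app_action is not None:
        return app_action                                   # block 318
    return SYSTEM_RULES.get((gesture, finger), "no_action")  # block 316
```

With these tables, a finger-2 tap in the drawing application yields the application-specific drawing action, while the same tap in any other application falls back to the system-wide left click.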
For example, the trainer 204 can update a model based on the input, can update a machine learning system based on the input, and the like.

FIG. 5 is a block diagram of an example processor platform 500 capable of executing the instructions of FIGS. 3-4 to implement the example gesture processor 108 of FIGS. 1 and/or 2. The processor platform 500 can be, for example, a server, a personal computer, a mobile device (e.g., a cellular phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a game console, a personal video recorder, a set-top box, or any other type of computing device.

The processor platform 500 of the illustrated example includes a processor 512. The processor 512 of the illustrated example is hardware. For example, the processor 512 can be implemented by one or more integrated circuits, logic circuits, microprocessors, or controllers from any desired family or manufacturer. The hardware processor can be a semiconductor based (e.g., silicon based) device. In this example, the processor 512 implements the sensor interface 202, the trainer 204, the identifier 208, the gesture detector 210, and the system interface 214.

The processor 512 of the illustrated example includes a local memory 513 (e.g., a cache). The processor 512 of the illustrated example communicates with a main memory including a volatile memory 514 and a non-volatile memory 516 via a bus 518. The volatile memory 514 may be implemented by synchronous dynamic random access memory (SDRAM), dynamic random access memory (DRAM), RAMBUS dynamic random access memory (RDRAM), and/or any other type of random access memory device. The non-volatile memory 516 can be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 514, 516 is controlled by a memory controller.

The processor platform 500 of the illustrated example also includes an interface circuit 520.
The interface circuit 520 can be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface.

In the illustrated example, one or more input devices 522 are connected to the interface circuit 520. The input device(s) 522 permit a user to enter data and/or commands into the processor 512. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touch screen, a trackpad, a trackball, and/or a voice recognition system.

One or more output devices 524 are also connected to the interface circuit 520 of the illustrated example. The output devices 524 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube (CRT) display, a touch screen, a tactile output device, a printer, and/or a speaker). The interface circuit 520 of the illustrated example thus typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor.

The interface circuit 520 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, and/or a network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 526 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, a coaxial cable, a cellular telephone system, etc.).

The processor platform 500 of the illustrated example also includes one or more mass storage devices 528 for storing software and/or data. Examples of such mass storage devices 528 include floppy disk drives, hard disk drives, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives.
The example mass storage device 528 stores the training data store 206 and the gesture data store 212.

The coded instructions 532 of FIGS. 3-4 can be stored in the mass storage device 528, in the volatile memory 514, in the non-volatile memory 516, and/or on a removable tangible computer readable storage medium such as a CD or DVD.

Example methods, apparatus, systems, and articles of manufacture for triggering actions based on gestures are disclosed herein. Further examples and combinations thereof include the following.

Example 1 is a device for triggering an action based on a gesture, the device comprising: a touch-sensitive display; a touch sensor for detecting touches and hovers associated with the touch-sensitive display; and a gesture processor, the gesture processor including an identifier for identifying a finger associated with the touches and hovers, and a gesture detector for determining a gesture associated with the touches and hovers and determining an action associated with the gesture and the identified finger.

Example 2 includes the device as defined in Example 1, wherein the gesture processor includes a system interface for communicating the action to an operating system of the device.

Example 3 includes the device as defined in Example 1 or Example 2, wherein the gesture detector determines a first action associated with the gesture when a first finger is identified for the gesture, and determines a second action associated with the gesture when a second finger is identified for the gesture.

Example 4 includes the device as defined in Example 3, wherein the first action is a left mouse click and the second action is a right mouse click.

Example 5 includes the device as defined in Example 3, wherein the first action is to draw with a first color and the second action is to draw with a second color.

Example 6 includes the device as defined in Example 3, wherein the first action is to open an application on a first screen and the second action is to open the application on a second screen.

Example 7 includes the device as defined in Example 3, wherein the first action is to change a first setting of a system and the second action is to change a second setting of the system.

Example 8 is at least one non-transitory computer readable medium comprising instructions that, when executed, cause a machine to at least: detect touches and hovers associated with a touch-sensitive display; identify a finger associated with the touches and hovers; determine a gesture associated with the touches and hovers; and determine an action associated with the gesture and the identified finger.

Example 9 includes the non-transitory computer readable medium as defined in Example 8, wherein the instructions, when executed, cause the machine to communicate the action to an operating system of the device.

Example 10 includes the non-transitory computer readable medium as defined in Example 8 or Example 9, wherein the instructions, when executed, cause the machine to determine a first action associated with the gesture when a first finger is identified for the gesture, and determine a second action associated with the gesture when a second finger is identified for the gesture.

Example 11 includes the non-transitory computer readable medium as defined in Example 10, wherein the first action is a left mouse click and the second action is a right mouse click.

Example 12 includes the non-transitory computer readable medium as defined in Example 10, wherein the first action is to draw with a first color and the second action is to draw with a second color.

Example 13 includes the non-transitory computer readable medium as defined in Example 10, wherein the first action is to open an application on a first screen and the second action is to open the application on a second screen.

Example 14 includes the non-transitory computer readable medium as defined in Example 10, wherein the first action is to change a first setting of a system and the second action is to change a second setting of the system.

Example 15 is a method for triggering an action based on a gesture, the method comprising: detecting touches and hovers associated with a touch-sensitive display; identifying a finger associated with the touches and hovers; determining a gesture associated with the touches and hovers; and determining an action associated with the gesture and the identified finger.

Example 16 includes the method as defined in Example 15, further comprising transmitting the action to an operating system of the device.

Example 17 includes the method as defined in Example 15 or Example 16, further comprising determining a first action associated with the gesture when a first finger is identified for the gesture, and determining a second action associated with the gesture when a second finger is identified for the gesture.

Example 18 includes the method as defined in Example 17, wherein the first action is a left mouse click and the second action is a right mouse click.

Example 19 includes the method as defined in Example 17, wherein the first action is to draw with a first color and the second action is to draw with a second color.

Example 20 includes the method as defined in Example 17, wherein the first action is to open an application on a first screen and the second action is to open the application on a second screen.

Example 21 includes the method as defined in Example 17, wherein the first action is to change a first setting of a system and the second action is to change a second setting of the system.

Example 22 is an apparatus for triggering an action based on a gesture, the apparatus comprising: an identifier for identifying a finger associated with touches and hovers, the touches and hovers being associated with a touch-sensitive display; and a gesture detector for determining a gesture associated with the touches and hovers and determining an action associated with the gesture and the identified finger.

Example 23 includes the apparatus as defined in Example 22, further comprising a system interface for communicating the action to an operating system of the device.

Example 24 includes the apparatus as defined in Example 22 or Example 23, wherein the gesture detector determines a first action associated with the gesture when a first finger is identified for the gesture and determines a second action associated with the gesture when a second finger is identified for the gesture.

Example 25 includes the apparatus as defined in Example 24, wherein the first action is a left mouse click and the second action is a right mouse click.

Example 26 includes the apparatus as defined in Example 24, wherein the first action is to draw with a first color and the second action is to draw with a second color.

Example 27 includes the apparatus as defined in Example 24, wherein the first action is to open an application on a first screen and the second action is to open the application on a second screen.

Example 28 includes the apparatus as defined in Example 24, wherein the first action is to change a first setting of a system and the second action is to change a second setting of the system.

Example 29 is an apparatus for triggering an action based on a gesture, the apparatus comprising: means for detecting touches and hovers associated with a touch-sensitive display; means for identifying a finger associated with the touches and hovers; means for determining a gesture associated with the touches and hovers; and means for determining an action associated with the gesture and the identified finger.

Example 30 includes the apparatus as defined in Example 29, further comprising means for communicating the action to an operating system of the device.

Example 31 is a system for triggering an action based on a gesture, the system comprising: a touch-sensitive display; an operating system associated with an executing application; a touch sensor for detecting touches and hovers associated with the touch-sensitive display; and a gesture processor comprising: an identifier for identifying a finger associated with the touches and hovers; and a gesture detector for determining a gesture associated with the touches and hovers and determining an action for the operating system associated with the gesture and the identified finger.

Example 32 includes the system as defined in Example 31, wherein the gesture processor includes a system interface for communicating the action to the operating system to cause the action to be executed with the executing application.

From the foregoing, it will be appreciated that example methods, apparatus, and articles of manufacture have been disclosed that benefit interaction with computing devices having touch-sensitive displays. In some examples, different user input information may be conveyed without the addition of further user input devices. A touch input can convey different information to the computing device, without physical or virtual switching, by detecting differences in the identity of the finger(s) used to provide the input, the intensity of the touch, the hovering distance, and the like.

Although certain example methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent.
Methods and structures for pad reconfiguration to allow intermediate testing during the manufacture of an integrated circuit are disclosed. The methods and structures disclosed are particularly useful in testing an embedded subcircuit, such as a memory array within an embedded chip product. A bond pad reconfiguration etch and other means for reconfiguring a bond pad are also disclosed.
What is claimed is: 1. A method of fabricating an embedded device, comprising the steps of: (a) providing a substrate; (b) forming circuitry on the substrate, said circuitry comprising a first circuit and a second circuit; (c) at an intermediate stage of circuitry formation wherein said first circuit is fully processed and operational and said second circuit is partially processed and not operational, forming an intermediate probe pad electrically connected to said first circuit; (d) disconnecting the electrical connection between said intermediate probe pad and said first circuit; and (f) completing processing of said second circuit. 2. A method in accordance with claim 1, wherein said first circuit comprises a memory circuit and said second circuit comprises a logic circuit. 3. A method in accordance with claim 1, wherein said embedded device is a microprocessor. 4. A method in accordance with claim 1, wherein said step (b) of forming circuitry comprises performing a reconfiguration etch; and wherein said step (d) of disconnecting the electrical connection is accomplished by said reconfiguration etch. 5. A method in accordance with claim 1, wherein said step (b) of forming circuitry further comprises electrically connecting said first circuit and said second circuit. 6. A method in accordance with claim 1, further comprising the step of: (e) forming a bond pad electrically connected to said circuitry, wherein said bond pad is located above said intermediate probe pad. 7. A method in accordance with claim 1, wherein said step (d) of disconnecting said electrical connection is accomplished by laser ablation. 8.
A method of fabricating an embedded device, comprising the steps of: (a) providing a substrate; (b) forming circuitry on the substrate, comprising a first circuit and a second circuit; (c) at an intermediate stage of circuitry formation wherein said first circuit is fully processed and said second circuit is partially processed, forming an intermediate probe pad electrically connected to said first circuit; (d) forming a disconnect circuit for disconnecting the electrical connection between said intermediate probe pad and said first circuit; and (e) completing processing of said second circuit. 9. A method in accordance with claim 8, wherein said first circuit comprises a memory circuit and said second circuit comprises a logic circuit. 10. A method in accordance with claim 8, wherein said embedded device is a microprocessor. 11. A method in accordance with claim 8, wherein said disconnect circuit comprises a fuse located in series between and in electrical contact with said intermediate probe pad and said first circuit. 12. A method in accordance with claim 8, wherein said step (b) of forming circuitry further comprises electrically connecting said first circuit and said second circuit. 13. A method in accordance with claim 8, further comprising the step of: (f) forming a bond pad electrically connected to said circuitry, wherein said bond pad is located above said intermediate probe pad. 14. A method in accordance with claim 8, further comprising the step of: (f) disconnecting the electrical connection between the intermediate probe pad and said first circuit by actuating said disconnect circuit. 15.
A method of testing an embedded device, comprising the steps of: (a) providing an embedded device comprising a memory circuit, a logic circuit, and an intermediate probe pad electrically connected to said memory circuit, wherein said memory circuit is fully processed and operational and said logic circuit is partially processed and not operational; (b) electrically testing said memory circuit via said intermediate probe pad; and (c) completing processing of said logic circuit subsequent to said step (b) of electrically testing said memory circuit. 16. A method in accordance with claim 15, wherein said embedded device further comprises a disconnect circuit electrically connected to said memory circuit and said intermediate probe pad. 17. A method of fabricating and testing an embedded circuit comprising: (a) providing a substrate; (b) forming circuitry on the substrate, said circuitry comprising a first circuit and a second circuit; (c) at an intermediate stage of circuitry formation wherein said first circuit is fully processed and operational and said second circuit is partially processed and not operational, forming an intermediate probe pad electrically connected to said first circuit; (d) electrically testing said first circuit via said intermediate probe pad; and (e) completing processing of said second circuit. 18. A method in accordance with claim 17, further comprising the step of: (f) forming a disconnect circuit for disconnecting the electrical connection between said intermediate probe pad and said first circuit. 19. A method in accordance with claim 18, wherein said disconnect circuit comprises an electrical switch. 20. A method in accordance with claim 19, further comprising the step of: (g) disconnecting the electrical connection between said intermediate probe pad and said first circuit by actuating said electrical switch to an open position.
TECHNICAL FIELD This invention relates generally to semiconductor manufacturing and more specifically to a method allowing for testing of an integrated circuit during an intermediate stage of its manufacture. BACKGROUND OF THE INVENTION Modern semiconductor integrated circuits, or "chips," can be generally divided into two major categories: "microprocessors," which perform logic operations and act generally as the "brains" of electronic systems incorporating them; and "memories," which store data utilized by the microprocessor and other components of electronic systems. Traditionally, memory functions and microprocessing functions have been realized on separate chips. However, semiconductor manufacturers are currently pursuing "embedded" designs that incorporate both memory and microprocessor functions on the same chip. Embedded designs are advantageous because a single chip can take the place of separate memory and microprocessor chips, thus saving needed board space in a final computer product in which it is incorporated. Moreover, embedded chips are expected to produce cost savings, higher reliability, and faster speeds when compared to the use of separate memory and microprocessor chips. However, the manufacture of embedded designs poses significant challenges. Significantly, the processes traditionally used to manufacture memories and microprocessors are different in ways that make their integration on a single chip difficult. For example, the process used to fabricate a Dynamic Random Access Memory (DRAM) cell array is typically quite different from the process used to fabricate the logic gates of a microprocessor. In particular, while memories typically only require two metal levels of interconnections, the logic gates of microprocessor circuitry typically call for many more interconnect levels.
Thus, the construction of the embedded memory array on a given portion of the embedded chip product will usually be complete at an intermediate stage of the embedded chip's manufacture. The remainder of the process is directed to the completion of the remaining interconnect levels necessary to complete the logic gates for the microprocessor portion of the chip. However, because the embedded memory array is covered by the remaining levels used to complete the microprocessor portions of the embedded chip product, access to the array is limited, making it difficult to directly test the memory using industry standard memory testing techniques. Moreover, the lack of direct array access makes it difficult to use known redundancy techniques to repair any defects within the embedded memory array. The present inventions are directed to overcoming or at least reducing the effects of one or more of the problems set forth above. SUMMARY OF THE INVENTION According to one aspect of the invention, a method for fabricating an integrated circuit is provided. The integrated circuit includes a surface, and the method comprises the steps of: forming a first layer on the surface; forming a second layer over the first layer; forming an opening in the second layer over the first layer; forming a third layer within the opening; forming a fourth layer over the second layer and over the third layer; removing the fourth layer which overlies the opening; removing the third layer from within the opening; and removing the first layer below the opening. According to another aspect of the invention, a method for electrically testing an integrated circuit is provided. The integrated circuit includes a subcircuit which is operational at an intermediate stage during the processing of the integrated circuit, and the method comprises electrically testing the operational subcircuit during the intermediate stage.
BRIEF DESCRIPTION OF THE DRAWINGS Other objects and advantages of the invention will become apparent upon reading the following detailed description and upon reference to the drawings in which: FIG. 1 shows an isometric view of an unpackaged embedded chip product, including an embedded memory array. FIG. 2 shows an isometric view of the embedded chip product at an intermediate stage of its manufacture, specifically after the patterning of "metal-2" to form leads and intermediate probe pads. FIG. 3 shows a cross section through one of the leads and one of the intermediate probe pads of FIG. 2. FIG. 4 shows the cross section of FIG. 3 after the formation and etching of a dielectric layer to form a via and an opening over the lead. FIG. 5A shows the cross section of FIG. 4 after the deposition of "metal-3" and the development of an overlying photoresist layer. FIG. 5B shows a modification to the process shown in FIG. 5A in which a "plug" process is used to fill the via and the opening. FIG. 6 shows the cross section of FIG. 5A after a reconfiguration etch has removed the "metal-3" from over and within the opening and the "metal-2" from a portion of the lead, thereby severing the lead from the intermediate probe pad. FIG. 7 shows an isometric view of the embedded chip product of FIG. 6, with the dielectric layer removed for clarity. FIG. 8 shows a cross section perpendicular to the cross section of FIG. 5A, and illustrates the relative widths of the lead, the opening, and the photoresist. FIG. 9 shows the structure of FIG. 2 following laser ablation to sever one of the leads from its intermediate probe pad. FIG. 10 shows a circuit for severing the lead from its intermediate probe pad using a fuse. FIG. 11A shows a switch circuit useful for testing the embedded memory during the intermediate manufacturing stage. FIG. 11B shows how the switch circuit of FIG. 11A can be modified by subsequent processing to disconnect the lead from its intermediate probe pad. 
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims. DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. FIG. 1 shows an isometric view of a completed embedded chip product 2 that contains two subcircuits: an embedded memory array 4 (such as a DRAM memory array) capable of storing data; and logic circuitry 6 capable of performing microprocessor type functions. One of ordinary skill will recognize that bond pads 8 are used to connect the embedded chip product 2 to a suitable carrier or package (not shown). Moreover, the bond pads 8 can be used to test the embedded chip product 2 after the wafer (not shown) on which the embedded chip product 2 is contained has completed manufacture. 
Such wafer-level testing is typically performed by a multiprobe tester containing a probe card with pins that contact the bond pads 8 and through which electrical signals are sent to test the chip 2. FIG. 1 shows the embedded chip product 2 after multiprobe testing and after the wafer (not shown) on which it has been built has been "diced" to separate the chip 2 from other such chips that have been fabricated on the wafer. FIGS. 2 through 11B disclose an embodiment of the invention whereby intermediate probe pads 12 are provided to test the embedded memory array 4 at an intermediate stage during the manufacture of the embedded chip product 2. In this embodiment, the position of the intermediate probe pads 12 corresponds to the position of the final bond pads 8 that are used to test and bond the final embedded chip product 2, as will be made clear shortly. As disclosed, the embodiment describes an embedded chip 2 that requires three levels of metal interconnects, only two of which are needed to fully fabricate the embedded memory array 4. However, one of ordinary skill will realize that the methods disclosed would be applicable to products incorporating fewer or additional layers of metal or other conductors as well. For example, the embedded memory array 4 could contain three levels of metal, while the logic circuitry 6 could contain four or more layers of metal. FIGS. 2 and 3 respectively show an isometric and a cross-sectional view of a portion of the embedded chip 2 during an intermediate stage of its processing. At this stage, the embedded chip 2 has been completed through the patterning and etching of a second interconnect layer 10, otherwise known as "metal-2." At this stage, the embedded memory array 4, which (in the preferred embodiment) requires only two interconnect layers (i.e., "metal-1" and "metal-2"), is complete and capable of being tested.
One of ordinary skill will recognize that many other circuit layers 11 may be present between the "metal-2" layer 10 and a silicon substrate 9, and that these layers form the cells and logic circuits of the embedded memory array 4 and the logic circuitry 6. These layers are not specifically shown in the Figures but comprise various portions of layer 11. Thus, for example, two levels of polycrystalline silicon ("poly") might be present to form traditional "stacked" DRAM cells in the cell array, and a third level of poly might be used to form both the transistor gates of the decoding circuitry and the row lines in the cell array. "Metal-1" might be used, for example, to form the bit lines in the cell array and to interconnect the decoding transistors. Other circuit layers will comprise various portions of layer 11, such as the interlevel dielectrics between the poly and the "metal-1" and the gate dielectrics used in both the transistors and the cells. Similarly, the underlying layers in the logic circuitry 6 will, at this stage, form similar structures such as the logic transistors (although not yet fully interconnected). These other circuit layers 11 are not shown in detail so as not to obscure the inventions disclosed. As shown in FIG. 2, the "metal-2" layer 10 has been etched to form intermediate probe pads 12. The intermediate probe pads 12 are electrically connected to the completed embedded memory array 4 by way of "metal-2" leads 14. The actual connections to the embedded memory array 4 are not shown, but one of ordinary skill will realize that the leads 14 contact appropriate nodes in the embedded memory array 4. Using the intermediate probe pads 12, the embedded memory array 4 can be electrically tested at this intermediate processing stage using a conventional multiprobe tester.
To facilitate testing, the pads 12 preferably, but not necessarily, are connected to the test signals used to test (and operate) a traditional, non-embedded DRAM, such as Vcc (power supply), Vss (ground), RAS (row access strobe), CAS (column address strobe), WE (write enable), OE (output enable), address lines, and data input/output lines. If the pads 12 are connected to these test signals, test programs written for a traditional DRAM chip can be used at this point to test the embedded memory array 4. Intermediate testing of the embedded memory array 4 is advantageous because the logic circuitry 6 is bypassed, allowing failures in the memory array to be "pinpointed" more precisely and accurately (and, if necessary, fixed via redundancy) through standard memory testing procedures. Of course, it should be understood that in the completed embedded chip product 2, the embedded memory array 4 will be connected to the logic circuitry 6, and that the logic circuitry 6 will generate the necessary electrical signals to operate the memory array 4. This will be described in more detail shortly. FIG. 4 shows the resulting structure after the deposition and etching of a dielectric layer 16 over the structure of the embedded chip product 2 of FIGS. 2 and 3. The dielectric layer 16 has been etched to form both a via 18 and an opening 20, whose respective functions will become clear shortly. The dielectric layer 16 can be any suitable dielectric such as BPSG, TEOS or other silane-based silicon oxides traditionally used between metal layers in a semiconductor process. Methods for depositing and etching such a suitable dielectric are well known to those of ordinary skill in the art of semiconductor fabrication. FIG. 5A shows the resulting structure after the deposition of a third interconnect layer 22 ("metal-3"). Notice that the "metal-3" layer 22 deposits into and fills both the via 18 and the opening 20. FIG.
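The pad-to-signal correspondence described above can be sketched as a simple lookup. The pad indices below are hypothetical (the text names the signals but does not number the pads); the signal set is the standard DRAM interface it lists.

```python
# Hypothetical pad-index assignment for intermediate testing; indices
# are illustrative only, signal names come from the standard DRAM set.
DRAM_TEST_SIGNALS = {
    0: "Vcc",  # power supply
    1: "Vss",  # ground
    2: "RAS",  # row access strobe
    3: "CAS",  # column address strobe
    4: "WE",   # write enable
    5: "OE",   # output enable
    # remaining pads would carry address and data input/output lines
}

def pads_for(signal):
    """Return every pad index wired to the named test signal."""
    return [pad for pad, name in DRAM_TEST_SIGNALS.items() if name == signal]
```

A probe-card setup for intermediate testing would consult such a table so that an unmodified DRAM test program can drive the correct pads.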
5A also shows the placement of a photoresist layer 24 on top of the "metal-3" layer 22. The photoresist layer 24 has been exposed and developed using industry standard procedures known to one of ordinary skill. FIGS. 6 and 7 show the structure of FIG. 5A after the "metal-3" layer 22 has been etched in the areas not protected by the photoresist layer 24, and after the photoresist layer 24 has been removed. (In FIG. 7, the dielectric layer 16 between the "metal-2" layer 10 and the "metal-3" layer 22 has been removed for clarity). This etch, referred to herein as the "reconfiguration etch," performs three important functions. First, it defines the final bond pads 8 to be used in the completed embedded chip product 2. Notice that the final bond pads 8 are located over the intermediate probe pads 12. Second, it defines the "metal-3" leads, such as leads 25 and 26, which are ultimately connected to the logic circuitry 6 through vias (not shown) similar in structure to the via 18. Third, it etches the "metal-3" layer 22 in the opening 20 and the exposed "metal-2" layer 10 underlying the opening 20, thereby disconnecting the "metal-2" lead 14 from the intermediate probe pad 12. In this manner the bond pad area is "reconfigured" so that it no longer communicates with the embedded memory array 4 (i.e., by way of intermediate probe pad 12 and "metal-2" lead 14), but instead communicates with the logic circuitry 6 (i.e., by way of "metal-3" lead 25). FIG. 7 only shows the reconfiguration of one of the intermediate probe pads 12, but during manufacture it is possible to reconfigure all of the intermediate probe pads 12 similarly. To properly isolate the "metal-2" lead 14 from the intermediate probe pad 12 using the reconfiguration etch, the opening 20 should preferably be made slightly bigger in width than the "metal-2" lead 14 to ensure that the reconfiguration etch will sever the lead 14. 
Similarly, the photoresist layer 24 should preferably be made slightly bigger than the width of the opening 20. This is illustrated in FIG. 8 (a perpendicular cross section of the cross section shown in FIG. 5A). This sizing consideration is more significant if dry etching is used for the reconfiguration etch, but is of lesser concern if isotropic wet etching is used. The reconfiguration etch is especially useful because it does not require any additional processing steps over what would normally be called for in a conventional multilevel metal process. The only modification that needs to be made to a conventional multilevel metal process is "overetching" of the "metal-3" layer 22 to properly clear the "metal-3" layer 22 from within the opening 20 and the "metal-2" layer 10 underneath the opening 20. For example, suppose that both the "metal-3" and the "metal-2" layers 22, 10 are comprised of an industry-standard aluminum alloy, such as an aluminum-copper-silicon alloy (with approximately 0.5% copper and 1.0% silicon). Using a traditional dry-plasma anisotropic aluminum etch, it should take about 1 minute to clear an approximately 1 micron thick (labeled as dimension "x" in FIG. 5A) "metal-3" layer 22 that overlies the dielectric layer 16 to pattern the "metal-3" leads, such as leads 25 and 26. However, because the "metal-3" layer 22 within the opening 20 and the underlying "metal-2" layer 10 also have a given thickness (labeled "y" and "z" respectively), the etch time will need to be adjusted to ensure proper removal of the metal from these areas. Thus, if x=y=z, the etch time will need to be adjusted to approximately three times the amount needed (i.e., 3 minutes) to pattern the "metal-3" leads in a standard multilevel metal process. One of ordinary skill will realize that several suitable, industry-standard etchants for the dry etching of aluminum may be utilized during metal patterning and bond pad reconfiguration.
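The etch-time adjustment described above amounts to scaling the standard etch time by the total metal thickness to be cleared. A minimal sketch, using the text's illustrative figure of about 1 micron per minute for the aluminum alloy:

```python
def reconfiguration_etch_time(x_um, y_um, z_um, rate_um_per_min=1.0):
    """Total etch time to clear the field metal-3 (thickness x), the
    metal-3 inside the opening (y), and the metal-2 beneath it (z),
    at the text's illustrative ~1 micron/minute aluminum-alloy rate."""
    return (x_um + y_um + z_um) / rate_um_per_min

# With x = y = z = 1 micron, the etch takes three times the standard
# 1-minute metal-3 patterning time:
reconfiguration_etch_time(1.0, 1.0, 1.0)  # -> 3.0 minutes
```

This reproduces the "approximately three times" adjustment stated in the text for the x = y = z case.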
Examples of suitable aluminum alloy etchants are Cl2, BCl3, CCl4 and SiCl4. The etch times, temperatures, gas flow rates, pressure and other metal etch parameters will necessarily depend on the etchant used, as one of ordinary skill will realize. A suitable industry standard plasma etch for the disclosed aluminum alloy requires the use of a BCl3 + Cl2 chemistry at approximately 55 degrees Centigrade, 10 mTorr of pressure, and a plasma power of 500 W. Furthermore, if a different conductive layer 21 is used to fill both the via 18 and the opening 20, as shown in FIG. 5B, the etch used to clear the opening 20 may need to be modified. For example, if the opening 20 is filled using an industry standard tungsten plug process, the overlying "metal-3" layer 22 may first need to be etched using a suitable aluminum alloy etchant (e.g., BCl3 + Cl2) to pattern the "metal-3" leads 25, 26, followed by a suitable tungsten etchant (e.g., SF6 or NF3) to clear the opening 20, followed again by the suitable aluminum alloy etchant to clear the "metal-2" layer 10 underlying the opening 20. The parameters of the tungsten etch process are similar to those used to etch the aluminum alloy, and will etch approximately 0.6 microns of tungsten in one minute. Other plug processes, such as poly plugs, may also be used. One of ordinary skill will recognize that many different metal compositions could be used during the manufacture of an integrated circuit, and that an appropriate etching process must be chosen to perform the reconfiguration etch. For example, the various conductive layers could be composite layers consisting of two adjacent layers of conductive materials. Many composite layers are known and used in the semiconductor industry, such as aluminum alloy over a titanium or titanium nitride "barrier" layer, or tungsten silicide over doped polysilicon.
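The tungsten-plug variant can be summarized as a three-step etch sequence. The etchant names and the ~0.6 micron/minute tungsten rate come from the text; the data structure and helper function are illustrative only.

```python
# Three-step etch sequence for the tungsten-plug variant described above.
# Etchant names are those given in the text; the structure is a sketch.
PLUG_RECONFIG_SEQUENCE = [
    ("BCl3 + Cl2", "pattern the metal-3 leads"),
    ("SF6 or NF3", "clear the tungsten plug from the opening"),
    ("BCl3 + Cl2", "clear the metal-2 lead under the opening"),
]

def tungsten_etch_minutes(thickness_um, rate_um_per_min=0.6):
    """Time to clear a tungsten plug at the ~0.6 micron/minute rate cited."""
    return thickness_um / rate_um_per_min
```

At the cited rate, a 0.6 micron plug would take about one minute to clear.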
Such composite layers are beneficial in the construction of a conductive layer because the differing materials will each contribute beneficial properties to the conductive layer. For example, an aluminum alloy is highly conductive, while titanium nitride has good electromigration characteristics. If such composite layers are used, then the reconfiguration etch will need to be modified (as in the tungsten plug process) to ensure that all of the materials comprising the conductive layers are removed by using appropriate industry standard etchants. The reconfiguration etch could also be performed by using wet etchants. It may also be advantageous to perform a short wet etch "dip" (e.g., in a diluted H2SO4 or HNO3 solution) after the reconfiguration dry plasma etch to remove any conductive residue to ensure that the "metal-2" lead 14 has been properly electrically isolated from the intermediate probe pad 12. Because metal etchants are usually highly selective to the underlying dielectric layers (such as the interlevel dielectric 16), the "overetching" of the "metal-3" layer 22 during the reconfiguration etch should not interfere substantially with the integrity of these layers, although some slight etching of these layers may result (see element 19, FIG. 6). To guard against overetching, other sacrificial structures such as a "metal-1" layer 30 can be placed underneath the opening 20, although such structures are not strictly necessary. After reconfiguration, and referring again to FIG. 7, the reconfigured final bond pad 8 now communicates with the logic circuitry 6 through the "metal-3" lead 25, and the "metal-2" lead 14 no longer communicates with the intermediate probe pad 12. In this way, the signal passed to the embedded memory array 4 during intermediate testing can be changed from its previous memory array specific function to a signal to be used by the completed embedded chip product 2.
For example, suppose that a given intermediate probe pad 12 supplies the "row access strobe" (RAS) signal to the embedded memory array 4 (i.e., DRAM array) through its associated "metal-2" lead 14 during intermediate testing. After pad reconfiguration (and after fully completing the manufacture of the embedded chip product 2), the final bond pad 8 overlying this intermediate probe pad 12 might supply a clock signal (CLK) to the logic circuitry 6 through the "metal-3" lead 25. The "metal-3" lead 25 communicates with the logic circuitry 6 through a via (not shown) extending between the "metal-3" layer 22 and the "metal-2" layer 10. An appropriate portion of the logic circuitry 6 would internally generate the RAS signal needed to operate the embedded memory array 4, and this internal signal would appear at the "metal-3" lead 26 (again, through the use of a via). The internally generated RAS signal is in turn connected to the "metal-2" lead 14 through the via 18 and supplied to the embedded memory array 4. There are important reasons for disconnecting the intermediate probe pads 12. First, the intermediate probe pad 12 has an inherent capacitance which, if not disconnected, would delay the transmission of the internally generated RAS signal to the embedded memory array 4. Second, both multiprobing (during wafer testing) and thermosonic bonding (during packaging) of the final bond pads 8 will sometimes cause cracking in the dielectric layer 16 between the final bond pads 8 and the intermediate probe pads 12, causing these pads to short together. Therefore, disconnecting the intermediate probe pad 12 ensures that such shorting will not result in circuit failure. By positioning the final bond pads 8 over the intermediate probe pads 12, the intermediate testing procedure is simplified.
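The first reason above is a first-order RC effect: the undisconnected pad adds capacitance that slows the internally generated signal. A minimal sketch, with purely illustrative resistance and capacitance values (the text gives no numbers):

```python
def rc_delay_ns(resistance_ohm, capacitance_pf):
    """First-order RC time constant of a signal line loaded by a pad.
    ohms * picofarads = picoseconds, so divide by 1000 for nanoseconds."""
    return resistance_ohm * capacitance_pf / 1000.0

# Illustrative only: a 100-ohm lead driving an extra 2 pF of pad
# capacitance adds an RC constant of roughly 0.2 ns; disconnecting
# the pad removes that load from the internally generated signal.
```

The exact delay depends on the actual lead resistance and pad geometry; the point is only that the added time constant scales with the pad capacitance left attached.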
First, the same probe card can be used to test both the embedded memory array 4 (i.e., during intermediate testing) and the completed embedded chip product 2 (i.e., during final testing). Of course, differing test patterns must be supplied by the multiprobe tester to the probe card to reflect the signal to be supplied to the relevant bond pad during testing (e.g., RAS for intermediate testing or CLK for final testing). Second, because the bond pads take up a fair amount of space on the chip's surface, the overlapping of the intermediate 12 and final 8 bond pads ensures that a minimal amount of chip space is taken up by the bond pads. Because the embedded memory array 4 will usually require fewer signals than will the completed embedded chip product 2, not every final bond pad 8 will necessarily have an operational intermediate probe pad 12 underneath it, although a "dummy" (i.e., unconnected) pad may be formed so that the probe card will have a pad with which to make contact during intermediate testing. While the embedded chip product 2 is fully functional after the patterning and etching of the "metal-3" layer 22, a passivating layer will usually be placed on the surface of the chip 2 to protect the circuitry prior to final multiprobe testing and packaging. The passivating layer is typically a silicon oxide or silicon nitride and can be formed using several techniques known to those of ordinary skill in the art. The final bond pads 8 are made accessible by etching holes in the passivating layer where it exists over the final bond pads 8. Other structures, such as Electro-Static Discharge (ESD) circuitry, will usually be connected to the final bond pads, and may also be connected to the intermediate probe pads 12 if desired. Appropriate ESD circuits are well known in the art and are not disclosed herein so as not to obscure the inventive aspects of this disclosure.
Bond pad reconfiguration may be realized in ways other than the use of the disclosed reconfiguration etch. For example, upon completion of "metal-2" patterning (see FIGS. 2 and 3) and intermediate testing, the "metal-2" leads 14 may be severed from the intermediate probe pad 12 by laser ablating 29 a portion of the lead 14, as shown in FIG. 9. Suitable laser ablation techniques are well known in the art and have traditionally been used to program the redundancy circuits of defective chips. After laser ablation, processing could continue per a conventional multilayer metal process as outlined above. Alternatively, an electrical fuse 32 can be positioned between the intermediate probe pad 12 and the lead 14, as shown in FIG. 10, and can be "blown" to form an open circuit after intermediate testing. Many different types of electrical fuses 32, such as poly fuses, are known to those of ordinary skill in the art of semiconductor manufacturing. After fusing, processing could continue per a conventional multilayer metal process as outlined above. The use of antifuses (i.e., a structure which may be "blown" to form a closed circuit) may also be advantageous in bond pad reconfiguration. Another means for reconfiguring the pads is to use an electrical switch, such as an N-channel transistor 34, between the intermediate probe pad 12 and the "metal-2" lead 14, as shown in FIGS. 11A and 11B. FIG. 11A shows the connection of the transistor 34 after "metal-2" patterning. The gate 36 of the transistor 34 can be connected by way of a "metal-2" lead 38 to the power supply voltage Vcc, which is used to power the memory array 4 during intermediate testing. Vcc is delivered into the chip 2 through its own intermediate probe pad 13. Because the gate 36 of the N-channel transistor 34 is at Vcc, the transistor 34 will be "on" and signals can pass between the intermediate probe pad 12 and the "metal-2" lead 14 during intermediate testing.
Alternatively, the gate 36 of transistor 34 could be tied in the "metal-2" layer 22 to an auxiliary power supply bond pad 13 (Vcc2), which is not the same Vcc pad used to power the embedded memory array 4. This scheme is advantageous because the voltage supplied to the auxiliary power supply bond pad 13 may be biased higher than the "high" logic levels (i.e., logic `1`) supplied to the intermediate probe pads 12 during intermediate testing to ensure that a full logic `1` level can be transferred to and from the "metal-2" lead 14 without a transistor threshold "Vt" drop. (For example, if Vt equals the threshold voltage of transistor 34, then the voltage supplied to the auxiliary power supply bond pad 13 (Vcc2) should equal logic `1`+Vt). Other equivalent circuits could be substituted for the transistor 34, such as a transmission gate (not shown). A transmission gate consists of an N-channel transistor and a P-channel transistor whose sources and drains are tied together. The N-channel transistor would be gated as shown in FIG. 11A. The P-channel transistor would be similarly gated but would instead be connected to a ground (Vss) intermediate probe pad. A transmission gate would ensure that signals could be transmitted to and from the "metal-2" lead 14 without experiencing a Vt drop, without the need for changing the supply voltages from their normal values. FIG. 11B shows the circuit of FIG. 11A after the processing of the embedded chip product 2 is completed according to a conventional multilevel metal process. Generally, the darker lines schematically represent the "metal-3" layer 22 and "metal-3" to "metal-2" vias. A via 40 (similar in construction to via 18 of FIG. 6) has been formed over the "metal-2" lead 38 and is connected by a "metal-3" lead 41 to the ground signal (GND) provided by a ground bond pad 42 on the completed embedded chip product 2 (i.e., the ground bond pad is one of the many final bond pads 8). 
Because the intermediate Vcc (or Vcc2) pad is no longer accessible, the ground signal can be transferred through the via 40 and to the "metal-2" lead 38. The lead 38 can be severed at position "A" using the reconfiguration etch, laser ablation, or by an electrical fuse if the capacitance of the intermediate Vcc (or Vcc2) bond pad 13 is undesirable, but this is not strictly necessary. Because the gate 36 of the N-channel transistor 34 is grounded, the transistor 34 will be "off," thereby isolating the intermediate probe pad 12 from the "metal-2" lead 14. As shown in FIG. 7, the "metal-3" lead 26 is connected to the "metal-2" lead 14 by the via 18. If a transmission gate (not shown) is used, the gate of the P-channel transistor would need to be similarly connected to the Vcc final bond pad 8. The embodiment may be modified such that the location of the intermediate probe pads 12 does not correspond with the final bond pads 8 of the completed embedded chip product 2. For example, the location of the intermediate probe pads 12 could be moved to the edge of the embedded memory array 4. Otherwise, processing and pad reconfiguration may continue after intermediate testing as outlined in FIGS. 4 through 11B above and accompanying text. This modification is advantageous in that a previously designed mask work for a memory chip product (such as a DRAM) can be completely incorporated within the design of an embedded chip product without substantial changes to the location of the bond pads. Thus, the same probe card used to test the memory product can be used to intermediately test the embedded memory chip product. However, this modification is disadvantageous in that there is no overlapping of the intermediate and final pads, and therefore extra space on the surface of the embedded chip product is needed to accommodate the intermediate probe pads.
Those of ordinary skill in the art who now have the benefit of the present disclosure will appreciate that the present inventions may take many forms and embodiments and have many uses. Moreover, those of ordinary skill will realize that the manufacturing details as set forth are merely exemplary ways of reconfiguring a bond pad and that many other ways are possible which do not depart from the invention disclosed herein. Also, the reconfiguration etch could be used for severing an underlying poly lead or any other suitably conductive material, or for making holes therein. While the reconfiguration etch has been shown particularly useful for disconnecting a lead from intermediate probe pads, it can be used more generally to sever an underlying lead for any number of other reasons. Other minor changes to the manufacturing process are also possible without departing from the invention in any significant respect. For example, while the embedded chip product 2 can be intermediately tested directly after the patterning of the "metal-2" layer 10, it may also be advantageous to apply a thin layer of dielectric material (a "passivating layer") over the "metal-2" layer 10 to protect the "metal-2" structures during intermediate testing. One of ordinary skill will realize that the passivating layer will need to be etched over the intermediate probe pads 12 so that the pins of the probe card used during multiprobe testing can make electrical contact with the intermediate probe pads 12. The passivating layer need not be removed after intermediate testing, but instead can be incorporated as part of the dielectric layer 16 (see FIG. 4). Furthermore, the disclosed intermediate testing procedures will be useful in testing more than just embedded memory arrays. They may be used more generally to intermediately test any appropriate subcircuit whose structure is capable of being tested during the manufacture of an integrated circuit.
Accordingly, it is intended that the embodiments described herein should be illustrative only, and not limiting with respect to the scope of the present invention. Rather, it is intended that the invention encompass all modifications, equivalents and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
Methods and apparatus for conditional access of non real-time (NRT) content in a distribution system. A method includes encrypting NRT content with a control word (CW) to generate encrypted NRT content, providing the CW to entitlement control message (ECM) generators, receiving ECMs from the ECM generators, wherein each ECM comprises a unique encryption of the CW to provide conditional access to the CW, and providing the encrypted NRT content and the ECMs for transmission over a distribution network. An apparatus includes a synchronizer configured to provide a CW to ECM generators and receive ECMs from the ECM generators, wherein each ECM comprises a unique encryption of the CW to provide conditional access to the CW, and a management module configured to encrypt the NRT content with the CW to generate encrypted NRT content and provide the encrypted NRT content and the ECMs for transmission over the distribution network.
CLAIMS WHAT IS CLAIMED IS: 1. A method for distributing non real-time (NRT) content over a distribution network, the method comprising: encrypting the NRT content with a control word to generate encrypted NRT content; providing the control word to one or more entitlement control message (ECM) generators; receiving one or more ECMs from the one or more ECM generators, respectively, wherein each ECM comprises a unique encryption of the control word to provide conditional access to the control word; and providing the encrypted NRT content and the one or more ECMs for transmission over the distribution network. 2. The method of claim 1 , further comprising receiving the control word from a control word generator. 3. The method of claim 1 , further comprising: obtaining one or more access criteria (AC) parameters to be associated with the NRT content; and providing the one or more AC parameters to the one or more ECM generators, respectively. 4. The method of claim 1 , further comprising encoding the encrypted NRT content and the one or more ECMs into a NRT file format. 5. The method of claim 4, wherein the NRT file format comprises a clip definition record. 6. The method of claim 5, wherein the clip definition record identifies the encrypted NRT content and comprises the one or more ECMs. 7. An apparatus configured to distribute non real-time (NRT) content over a distribution network, the apparatus comprising: a synchronizer configured to provide a control word to one or more entitlement control message (ECM) generators and receive one or more ECMs from the one or more ECM generators, respectively, wherein each ECM comprises a unique encryption of the control word to provide conditional access to the control word; and a management module configured to encrypt the NRT content with the control word to generate encrypted NRT content and provide the encrypted NRT content and the one or more ECMs for transmission over the distribution network. 8. 
The apparatus of claim 7, wherein said synchronizer is configured to obtain the control word from a control word generator. 9. The apparatus of claim 7, wherein said synchronizer is configured to: obtain one or more access criteria (AC) parameters to be associated with the NRT content; and provide the one or more AC parameters to the one or more ECM generators, respectively. 10. The apparatus of claim 7, wherein said management module is configured to encode the encrypted NRT content and the one or more ECMs into a NRT file format. 11. The apparatus of claim 10, wherein the NRT file format comprises a clip definition record. 12. The apparatus of claim 11, wherein the clip definition record identifies the encrypted NRT content and comprises the one or more ECMs. 13. An apparatus configured to distribute non real-time (NRT) content over a distribution network, the apparatus comprising: means for encrypting the NRT content with a control word to generate encrypted NRT content; means for providing the control word to one or more entitlement control message (ECM) generators; means for receiving one or more ECMs from the one or more ECM generators, respectively, wherein each ECM comprises a unique encryption of the control word to provide conditional access to the control word; and means for providing the encrypted NRT content and the one or more ECMs for transmission over the distribution network. 14. The apparatus of claim 13, further comprising means for receiving the control word from a control word generator. 15. The apparatus of claim 13, further comprising: means for obtaining one or more access criteria (AC) parameters to be associated with the NRT content; and means for providing the one or more AC parameters to the one or more ECM generators, respectively. 16. The apparatus of claim 13, further comprising means for encoding the encrypted NRT content and the one or more ECMs into a NRT file format. 17.
The apparatus of claim 16, wherein the NRT file format comprises a clip definition record. 18. The apparatus of claim 17, wherein the clip definition record identifies the encrypted NRT content and comprises the one or more ECMs. 19. A computer program product for distributing non real-time (NRT) content over a distribution network, the computer program product comprising: a computer-readable medium encoded with codes executable to: provide a control word to one or more entitlement control message (ECM) generators; receive one or more ECMs from the one or more ECM generators, respectively, wherein each ECM comprises a unique encryption of the control word to provide conditional access to the control word; encrypt the NRT content with the control word to generate encrypted NRT content; and provide the encrypted NRT content and the one or more ECMs for transmission over the distribution network. 20. A server configured to distribute non real-time (NRT) content over a distribution network, the server comprising: a network interface; a synchronizer configured to provide a control word to one or more entitlement control message (ECM) generators and receive one or more ECMs from the one or more ECM generators, respectively, wherein each ECM comprises a unique encryption of the control word to provide conditional access to the control word; and a management module configured to encrypt the NRT content with the control word to generate encrypted NRT content and provide the encrypted NRT content and the one or more ECMs over the network interface for transmission over the distribution network. 21.
A method for receiving non real-time (NRT) content over a distribution network, the method comprising: receiving encrypted NRT content that has been encrypted with a control word; receiving one or more entitlement control messages (ECMs); decrypting a selected ECM with a long term key to obtain the control word; and decrypting the encrypted NRT content to obtain decrypted NRT content. 22. The method of claim 21 , further comprising receiving an entitlement management message (EMM) that comprises the long term key. 23. The method of claim 21 , further comprising receiving the encrypted NRT content and the one or more ECMs in a NRT file format. 24. The method of claim 23, wherein the NRT file format comprises a clip definition record. 25. The method of claim 24, wherein the clip definition record identifies the encrypted NRT content and comprises the one or more ECMs. 26. An apparatus for receiving non real-time (NRT) content over a distribution network, the apparatus comprising: processing logic configured to receive encrypted NRT content that has been encrypted with a control word and receive one or more entitlement control messages (ECMs); key acquisition logic configured to decrypt a selected ECM with a long term key to obtain the control word; and decryption logic configured to decrypt the encrypted NRT content to obtain decrypted NRT content. 27. The apparatus of claim 26, wherein said processing logic is configured to receive an entitlement management message (EMM) that comprises the long term key. 28. The apparatus of claim 26, wherein said processing logic is configured to receive the encrypted NRT content and the one or more ECMs in a NRT file format. 29. The apparatus of claim 28, wherein the NRT file format comprises a clip definition record. 30. The apparatus of claim 29, wherein the clip definition record identifies the encrypted NRT content and comprises the one or more ECMs. 31. 
An apparatus for receiving non real-time (NRT) content over a distribution network, the apparatus comprising: means for receiving encrypted NRT content that has been encrypted with a control word; means for receiving one or more entitlement control messages (ECMs); means for decrypting a selected ECM with a long term key to obtain the control word; and means for decrypting the encrypted NRT content to obtain decrypted NRT content. 32. The apparatus of claim 31 , further comprising means for receiving an entitlement management message (EMM) that comprises the long term key. 33. The apparatus of claim 31 , further comprising means for receiving the encrypted NRT content and the one or more ECMs in a NRT file format. 34. The apparatus of claim 33, wherein the NRT file format comprises a clip definition record. 35. The apparatus of claim 34, wherein the clip definition record identifies the encrypted NRT content and comprises the one or more ECMs. 36. A computer program product for receiving non real-time (NRT) content over a distribution network, the computer program product comprising: a computer-readable medium encoded with codes executable to: receive encrypted NRT content that has been encrypted with a control word; receive one or more entitlement control messages (ECMs); decrypt a selected ECM with a long term key to obtain the control word; and decrypt the encrypted NRT content to obtain decrypted NRT content. 37. A device configured to distribute non real-time (NRT) content over a distribution network, the device comprising: an antenna; processing logic configured to receive, using the antenna, encrypted NRT content that has been encrypted with a control word and receive one or more entitlement control messages (ECMs); key acquisition logic configured to decrypt a selected ECM with a long term key to obtain the control word; and decryption logic configured to decrypt the encrypted NRT content to obtain decrypted NRT content.
METHODS AND APPARATUS FOR CONDITIONAL ACCESS OF NON REAL-TIME CONTENT IN A DISTRIBUTION SYSTEM Claim of Priority under 35 U.S.C. §119 [0001] The present Application for Patent claims priority to Provisional Application No. 61/029,278 entitled "METHODS AND APPARATUS FOR FORWARD LINK ONLY FRAMEWORK" filed February 15, 2008, and assigned to the assignee hereof and hereby expressly incorporated by reference herein. [0002] The present Application for Patent claims priority to Provisional Application No. 61/029,277 entitled "METHODS AND APPARATUS FOR FORWARD LINK ONLY NON REAL TIME FILE FORMAT" filed February 15, 2008, and assigned to the assignee hereof and hereby expressly incorporated by reference herein. BACKGROUND [0003] Data networks, such as wireless communication networks, have to trade off between services customized for a single terminal and services provided to a large number of terminals. For example, the distribution of non real time (NRT) content to a large number of resource limited portable devices (subscribers) is a complicated problem. Therefore, it is very important for network administrators, content retailers, and service providers to have a way to distribute NRT content and/or other network services in a fast and efficient manner and in such a way as to increase bandwidth utilization and terminal power efficiency. [0004] In current content delivery/distribution systems, foreground and background services are packed into a transmission frame and delivered to devices on a network. For example, a communication network may utilize Orthogonal Frequency Division Multiplexing (OFDM) to broadcast real time services from a network server to one or more mobile devices. For example, the foreground services comprise real time streaming video and/or audio that generally needs to be processed when received. The background services comprise non real-time advertisements, presentations, files or other data.
[0005] It has become increasingly important in current wireless distribution systems to be able to provide conditional access (CA) to content. Conditional access means that one or more network entities (such as third party content vendors) are able to control user access to selected content to prevent unauthorized use. For example, conventional systems currently operate to provide conditional access to real time content, such as news, weather, sports, etc. However, conditional access systems to control access to NRT content are not available. [0006] Therefore, it would be desirable to have a system that operates to provide conditional access to NRT content over a distribution network. BRIEF DESCRIPTION OF THE DRAWINGS [0007] The foregoing aspects described herein will become more readily apparent by reference to the following Description when taken in conjunction with the accompanying drawings wherein: [0008] FIG. 1 shows a communication system that illustrates aspects of a NRT content distribution system; [0009] FIG. 2 shows a conventional real-time conditional access content distribution system; [0010] FIG. 3 shows an exemplary NRT content distribution system; [0011] FIG. 4 shows another exemplary NRT content distribution system; [0012] FIG. 5 shows still another exemplary NRT content distribution system; [0013] FIG. 6 shows an exemplary protocol stack for use in aspects of a NRT content distribution system; [0014] FIG. 7 illustrates a general NRT file format for use in aspects of a NRT content distribution system; [0015] FIG. 8 shows an exemplary clip definition record for use in aspects of a NRT content distribution system; [0016] FIGS. 9A-B show exemplary conditional access parameters and content information parameters that are part of the clip definition record of FIG. 8 for use in aspects of a NRT content distribution system; [0017] FIG.
10 shows an exemplary method for providing conditional access of NRT content for use in aspects of a NRT content distribution system; [0018] FIG. 11 shows another exemplary method for providing conditional access of NRT content for use in aspects of a NRT content distribution system; [0019] FIG. 12 shows an exemplary NRT content receiving module for use in aspects of a NRT content distribution system; [0020] FIG. 13 shows an exemplary method for receiving NRT content for use in aspects of a NRT content distribution system; [0021] FIG. 14 shows an exemplary NRT content delivery component for use in aspects of a NRT content distribution system; and [0022] FIG. 15 shows an exemplary NRT content receiving module for use in aspects of a NRT content distribution system. DESCRIPTION [0023] In one or more aspects, a NRT content distribution system (comprising methods and apparatus) is described that operates to provide efficient conditional access of non real-time content transmitted over a distribution network. In an aspect, the system interfaces to one or more third party conditional access systems to allow these systems to control user access to the NRT content. [0024] The system is suited for use in wireless network environments, but may be used in any type of network environment, including but not limited to, communication networks, public networks, such as the Internet, private networks, such as virtual private networks (VPN), local area networks, wide area networks, long haul networks, or any other type of data network. [0025] FIG. 1 shows a communication system 100 that illustrates aspects of a NRT content distribution system. The communication system 100 comprises server 102, distribution network 104, and devices 106. In an aspect, the NRT content distribution system operates to allow the server 102 to provide conditional access to NRT content delivered to devices in communication with the distribution network 104.
The NRT content comprises media clips, presentations, data, metadata, applications or any other type of non real-time content. [0026] The server 102 operates to communicate with the network 104 using any type of communication link 116. The network 104 may be any type of wired and/or wireless distribution network, such as a forward link only broadcast network. In an aspect, the network 104 provides services to a local area in which the devices 106 are operating. For example, the network 104 may operate to distribute information to a local region or community, city, or county. Although only a few devices 106 are shown, it should be noted that the system is suitable for use with any number and/or types of devices. [0027] The server 102 comprises NRT delivery component 108 that includes a NRT content delivery module 112 that operates to receive NRT content for distribution over the network 104. The NRT content delivery module 112 interfaces to third party conditional access systems 114 to allow access to the NRT content to be controlled by one or more of the third party conditional access systems. For example, the NRT content is encrypted with a control word and each conditional access system 114 operates to encrypt the control word with a long term key associated with the particular conditional access system; thereby generating an entitlement control message (ECM) that comprises the encrypted control word. Each conditional access system also generates an entitlement management message (EMM) that comprises the long term key and is distributed to authorized users (i.e., when each user subscribes to receive the NRT content). Thus, each conditional access system is able to limit access of the NRT content to its subscribers. [0028] In one or more aspects, the NRT delivery component 108 operates to perform one or more of the following operations. 1. Obtain non real-time content to be distributed. 2. Obtain a control word to encrypt the NRT content. 3.
Encrypt the NRT content using the control word to generate encrypted NRT content. 4. Interface with one or more third party conditional access systems to obtain ECMs and EMMs that allow each conditional access system to control access to the encrypted content. 5. Transmit the encrypted content, ECMs and EMMs over a distribution network. [0029] The transmitted encrypted NRT content, ECMs and EMMs are receivable by the devices 106. For the purpose of this description, the operation of the devices will be described with reference to the device 110. [0030] The device 110 comprises a NRT content receiving module 116. This module operates to receive the encrypted content, ECMs and EMMs. If the device 110 is authorized to access particular NRT content, it may use a received EMM to obtain a long term key with which to decrypt the appropriate ECM associated with that NRT content. The ECM comprises a control word that can be used to decrypt the encrypted NRT content for storage and/or rendering. [0031] Therefore, aspects of the NRT content distribution system operate to provide efficient conditional access of NRT content transmitted over a distribution network. It should be noted that the communication system 100 illustrates just one implementation and that other implementations are possible within the scope of the aspects. [0032] FIG. 2 shows a conventional real-time conditional access content distribution system 200. The system 200 comprises real-time content provisioning module 202, simul-crypt synchronizer (SCS) 204 and one or more third party conditional access modules 206. For example, the system 200 is operable to schedule and deliver real time content over a distribution network. [0033] The real-time content provisioning module 202 operates to communicate with provisioning logic (PL) 208 at the third party modules 206 to provision and schedule the delivery of real time content.
Once content provisioning is complete, the real time content provisioning module 202 communicates with the simul-crypt synchronizer 204 to obtain a control word (short term key) for encrypting the real time content. The simul-crypt synchronizer 204 comprises a control word generator (CWG) 210 that operates to generate a control word that is passed to an ECM generator 212 of the third party modules 206. In response, the ECM generator 212 generates an ECM comprising the control word which has been encrypted by a long term key provided by the respective third party module 206. The simul-crypt synchronizer 204 then passes the control word and corresponding ECM(s) to a real time transport system (RTS) for distribution. [0034] In addition, the third party modules 206 comprise an EMM generator 214 that generates an EMM that comprises the long term key which can be used to decrypt a corresponding ECM to obtain the control word. The EMM(s) are also passed to the RTS for distribution. [0035] During operation, the RTS operates to encrypt the real time content with the control word and transmit the encrypted content and ECM over a particular flow or channel of the distribution network. The EMM is transmitted over the distribution network on a different flow or channel. [0036] To prevent unauthorized access, the control word is periodically changed by the simul-crypt synchronizer 204. For example, the control word may be changed every ten seconds so that if the current control word is compromised, only ten seconds of content can be accessed by unauthorized users. [0037] The time line 216 illustrates how the control word is periodically changed by the simul-crypt synchronizer 204. An encryption period 218 is used to determine how often to change the control word so as to limit content access by unauthorized users.
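The encryption-period mechanism just described can be sketched in a few lines; the 16-byte key length and the 10-second rotation interval are illustrative assumptions (the disclosure gives ten seconds only as an example and does not fix a key size):

```python
import os

ENCRYPTION_PERIOD_S = 10  # example rotation interval from the description above


def control_words_for(duration_s: int, period_s: int = ENCRYPTION_PERIOD_S):
    """Generate one fresh control word per crypto-period.

    A compromised control word therefore exposes at most period_s
    seconds of content to unauthorized users.
    """
    n_periods = -(-duration_s // period_s)  # ceiling division
    return [os.urandom(16) for _ in range(n_periods)]


# 95 seconds of content at a 10-second encryption period -> 10 control words
cws = control_words_for(duration_s=95)
```

Each of these control words would be passed to the ECM generators in turn, yielding a fresh set of ECMs per period.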
At the end of each encryption period 218, the simul-crypt synchronizer 204 controls the CWG 210 to generate a new control word that is passed to the third party conditional access modules 206. New ECMs and EMMs are generated and passed to the real time transport system. [0038] FIG. 3 shows an exemplary NRT content distribution system 300. For example, the system 300 is suitable for use as the NRT content delivery component 108 shown in FIG. 1. [0039] The system 300 comprises NRT content provider 302, NRT encryption module 304, one or more third party ECM generators 306, network serving node 308, and CWG module 310. FIG. 3 also shows interfaces that exist between the various components of the NRT content distribution system 300. Each of the interfaces is identified by a circled numeral. [0040] The NRT content provider 302 operates to provide NRT content to the NRT encryption module 304 using interface 1. Interface 1 is a content acquisition interface and allows the NRT encryption module 304 to acquire NRT content for distribution over a distribution network. [0041] The NRT encryption module 304 operates to communicate with the CWG module 310 using interface 2. The CWG module 310 operates to generate a control word that is to be used to encrypt the NRT content. The interface 2 is a control word acquisition interface that allows the NRT encryption module 304 to acquire the generated control word. [0042] The NRT encryption module 304 operates to communicate with the third party ECM generators 306 using interface 3. The third party ECM generators 306 operate to receive the control word from the NRT encryption module 304 and encrypt the control word with a long term key to generate ECMs, respectively, which comprise the encrypted control word. The interface 3 is an encryption to ECM generator interface that allows the NRT encryption module 304 to acquire ECMs associated with one or more third party vendors.
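The key relationships described above — one control word encrypting the content, each third party vendor wrapping that same control word under its own long term key to form an ECM, and the EMM carrying the long term key to subscribers — can be sketched end to end. This is a toy model: the cipher (a SHA-256 counter-mode keystream here), key values, and vendor names are all illustrative assumptions, since the disclosure does not fix a particular encryption algorithm:

```python
import hashlib
from itertools import count


def keystream(key: bytes, length: int) -> bytes:
    # SHA-256 counter-mode keystream; a stand-in for whatever symmetric
    # cipher a real deployment would use.
    out = bytearray()
    for block in count():
        out.extend(hashlib.sha256(key + block.to_bytes(8, "big")).digest())
        if len(out) >= length:
            return bytes(out[:length])


def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; encryption and decryption are the same operation.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


# --- Head-end side (NRT delivery component) ---
control_word = b"short-term-CW"  # produced by the CWG
nrt_content = b"non real-time media clip"
encrypted_content = xor_crypt(control_word, nrt_content)

# Each third party CA vendor wraps the same control word with its own
# long term key, yielding one ECM per vendor (names are hypothetical).
long_term_keys = {"vendorA": b"LTK-A", "vendorB": b"LTK-B"}
ecms = {v: xor_crypt(ltk, control_word) for v, ltk in long_term_keys.items()}
emms = dict(long_term_keys)  # an EMM comprises the long term key

# --- Device side (NRT content receiving module) ---
ltk = emms["vendorA"]                    # delivered when the user subscribed
cw = xor_crypt(ltk, ecms["vendorA"])     # decrypt the selected ECM
recovered = xor_crypt(cw, encrypted_content)
```

Note how a subscriber of either vendor recovers the same control word, which is the point of the simul-crypt arrangement: one encryption of the content serves every participating conditional access system.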
[0043] The encryption module 304 also operates to encrypt the NRT content with the control word to generate encrypted NRT content. The encrypted NRT content and associated ECMs are passed to the network serving node 308 using interface 4. The network serving node 308 provides access to a distribution network so that the NRT encrypted content can be distributed to devices in communication with the distribution network. The interface 4 is an encrypted content delivery interface that allows the encryption module 304 to deliver the encrypted NRT content and the ECMs to the network serving node 308. [0044] Thus, during operation, the NRT content distribution system 300 operates to provide one or more of the following functions. 1. Acquire NRT content for distribution over a distribution network. 2. Acquire a control word to be used to encrypt the content. 3. Encrypt the NRT content with the control word. 4. Acquire ECMs associated with one or more third party ECM generators. 5. Deliver the encrypted NRT content and the ECMs to a network serving node for distribution over a distribution network. [0045] FIG. 4 shows another exemplary NRT content distribution system 400. For example, the system 400 is suitable for use as the NRT content delivery component 108 shown in FIG. 1. [0046] The system 400 comprises NRT content provider 402, NRT processing module 404, provisioning module 406, simul-crypt synchronizer 408, one or more third party ECM generators 410, and network serving node 412. FIG. 4 also shows interfaces that exist between the various components of the NRT content distribution system 400. Each of the interfaces is identified by a circled numeral. [0047] The NRT content provider 402 operates to provide NRT content to the NRT processing module 404 using interface 1. Interface 1 is a content acquisition interface and allows the NRT processing module 404 to acquire NRT content for distribution over a distribution network.
[0048] The NRT processing module 404 operates to communicate with the provisioning module 406 using interface 2. The provisioning module 406 operates to provision and schedule the distribution of the NRT content over a distribution network. The interface 2 is a NRT content notification interface that indicates to the provisioning module 406 that NRT content is available for distribution over the distribution network. [0049] The provisioning module 406 operates to communicate with the simul-crypt synchronizer 408 using interface 3. The interface 3 comprises a provisioning to encryption interface that allows the provisioning module 406 to provide provisioning, scheduling, and various access criteria to the simul-crypt synchronizer 408. For example, the access criteria identify the NRT content and provide information about the availability of the NRT content on the distribution network. [0050] The simul-crypt synchronizer 408 operates to receive the access criteria from the provisioning module 406 and control a control word generator 414 to generate a control word with which to encrypt the NRT content. The simul-crypt synchronizer 408 then passes the generated control word to the third party ECM generators 410 using interface 4. The interface 4 comprises an SCS to ECM generator interface that allows control words to be passed to the ECM generators 410 and generated ECM(s) to be returned to the SCS 408. The SCS 408 then passes the control word and ECMs to the NRT processing module 404. [0051] The third party ECM generators 410 operate to receive control words and encrypt the control words into ECMs. Each ECM generator may encrypt the control word using a different long term key. Thus, the ECM generators can control access to the NRT content so that only users that have access to the appropriate long term key can decrypt the control word. [0052] The SCS 408 also operates to pass the control word and the ECMs to the NRT processing module 404 using interface 5.
The interface 5 comprises a control word and ECM delivery interface that allows the NRT processing module 404 to obtain the control word and ECMs. The NRT processing module 404 then operates to encrypt the NRT content with the control word to generate encrypted NRT content. The encrypted NRT content and associated ECMs are passed to the serving node 412 using interface 6, which comprises an encrypted content delivery interface. [0053] The serving node 412 provides access to a distribution network so that the encrypted NRT content and the ECMs can be distributed to devices in communication with the distribution network. [0054] Thus, during operation the NRT content distribution system 400 operates to provide one or more of the following functions. 1. Acquire NRT content for distribution over a distribution network. 2. Perform provisioning and scheduling related to the NRT content to determine access criteria. 3. Cause a control word to be generated to be used to encrypt the NRT content. 4. Encrypt the control word with a long term key based on the access criteria to generate ECMs. 5. Encrypt the NRT content with the control word to generate encrypted NRT content. 6. Deliver the encrypted NRT content and the ECMs to a network serving node for distribution over a distribution network. [0055] FIG. 5 shows another exemplary NRT content distribution system 500. For example, the system 500 is suitable for use as the NRT content delivery component 108 shown in FIG. 1. [0056] The system 500 comprises a NRT content module 502, a NRT file management module 504, a SCS 506, one or more third party CA modules 508, and a network serving module 510. It should be noted that the system 500 illustrates just one implementation and that other implementations are possible within the scope of the various aspects. 
[0057] The NRT content module 502 comprises hardware and/or hardware executing software that operate to obtain the non real time content and provide this content to the NRT file management module 504. The NRT content module 502 also provides access control (AC) parameters (or access criteria) to the NRT file management module 504. The access control parameters are associated with the NRT content and are utilized by the third party CA modules 508 to control access to the NRT content as will be discussed below. In an aspect, the AC parameters are used by the CA providers and consumed by ECM generators to generate ECMs. In one example, the AC parameters identify NRT content or may be associated with rights of the NRT content. [0058] The NRT file management module 504 comprises hardware and/or hardware executing software that operate to obtain the NRT content and the AC parameters. The NRT file management module 504 passes the AC parameters to the SCS 506. [0059] The SCS 506 comprises hardware and/or hardware executing software that operate to generate a control word that is used to encrypt the NRT content. For example, the SCS 506 comprises a control word generator 516 that operates to generate a control word. The SCS 506 passes generated control words and the received AC parameters to the third party CA modules 508. An ECM generator 518 at each third party CA module 508 receives the control word and AC parameters and generates an ECM message. The ECM message from each of the modules 508 is returned to the SCS 506. The SCS 506 operates to pass the control word and the received ECM messages to the NRT file management module 504. [0060] Each of the third party CA modules 508 further comprises an EMM generator 520. The EMM generator 520 generates an EMM that comprises a long term key that can be used to decrypt the associated ECM message. The generated EMM messages are passed to the network serving module 510 for delivery over a distribution network. 
For example, the EMM messages may be delivered over the distribution network in an IP datacast. In an aspect, a grouping operation is performed where one EMM is used to cover many users to reduce bandwidth requirements. [0061] The NRT file management module 504 operates to encrypt the NRT content with the generated control word. The encrypted NRT content and the generated ECM messages are output to the network serving module 510. In an aspect, the NRT file management module 504 operates to receive information from digital rights management (DRM) module 512. This information is used by the NRT file management module 504 to associate digital rights management with the encrypted NRT content. For example, the DRM module 512 provides fine granularity control to determine how many times a presentation can be viewed. [0062] Additionally, the NRT file management module 504 operates to receive information from a forward error correction (FEC) module 514. This information is used by the NRT file management module 504 to provide forward error correction for the NRT content. The FEC is used to adjust system performance. [0063] The network serving module 510 comprises at least one of a CPU, processor, gate array, hardware logic, memory elements, virtual machine, and/or hardware executing software. The network serving module 510 operates to output the encrypted NRT content, the generated ECMs and the generated EMMs. [0064] During operation, the system provides conditional access of NRT content by encrypting the NRT content with a selected control word and encrypting the control word using one or more long term keys that are associated with one or more conditional access vendors. In addition, AC parameters are associated with the NRT content to allow the conditional access vendors to further control access to the NRT content. A file management module operates to encrypt the NRT content with the generated control word. 
The encrypted NRT content, ECMs and EMMs are then distributed over a distribution network. [0065] Therefore, the NRT content distribution system 500 operates in various aspects to perform one or more of the following functions. 1. Acquire NRT content for distribution over a distribution network. 2. Cause a control word to be generated with which to encrypt the NRT content. 3. Encrypt the control word with a long term key based on access criteria to generate one or more ECMs. 4. Generate EMMs comprising the long term keys associated with each conditional access system. 5. Encrypt the NRT content with the control word to generate encrypted NRT content. 6. Deliver the encrypted NRT content, ECMs and EMMs to a network serving node for distribution over a distribution network. [0066] In an aspect, the NRT content distribution system comprises one or more program instructions ("instructions") or sets of codes ("codes") stored or embodied on a machine-readable medium. The codes when executed by at least one processor, for instance, a processor at the NRT file management module 504, provide the functions described herein. For example, the codes may be loaded into the NRT file management module 504 from a machine-readable medium, such as a floppy disk, CDROM, memory card, FLASH memory device, RAM, ROM, or any other type of memory device or machine-readable medium that interfaces to the NRT file management module 504. In another aspect, the codes may be downloaded into the NRT file management module 504 from an external device or network resource. The codes, when executed, provide aspects of a NRT content distribution system as described herein. [0067] FIG. 6 shows an exemplary protocol stack 600 for use in aspects of a NRT content distribution system. For example, the protocol stack 600 may be implemented by the NRT file management module 504. 
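The EMM grouping described in paragraph [0060] — one EMM covering many users to reduce bandwidth — can be illustrated as follows. The XOR key wrap, group structure, and all names are hypothetical stand-ins for a real conditional access system's key hierarchy; they only show why a per-group EMM is cheaper than a per-user one.

```python
import secrets

def xor_bytes(key: bytes, data: bytes) -> bytes:
    """Toy key wrap: XOR data with a repeating key (illustrative only)."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

# The long term key that decrypts this CA provider's ECMs.
long_term_key = secrets.token_bytes(16)

# Grouping: one EMM addresses a whole group of subscribers, so a single
# broadcast message (rather than one per user) carries the long term key.
groups = {
    "group_1": {"members": ["user_a", "user_b", "user_c"],
                "group_key": secrets.token_bytes(16)},
    "group_2": {"members": ["user_d"],
                "group_key": secrets.token_bytes(16)},
}

# One EMM per group: the long term key wrapped under the group key.
emms = {gid: xor_bytes(g["group_key"], long_term_key)
        for gid, g in groups.items()}

# Any member holding group_1's key can recover the long term key.
recovered = xor_bytes(groups["group_1"]["group_key"], emms["group_1"])
```

Two EMMs here serve four subscribers; without grouping, four EMMs would be broadcast for the same entitlement.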
[0068] The protocol stack 600 comprises file-based applications 602, non real-time services 604, file delivery layer 606, transport layer 608 and air interface layer 610. [0069] The file delivery layer 606 operates to deliver NRT files to devices. The file delivery layer 606 uses the services of the transport layer 608. Files are subject to message coding to ensure they are delivered efficiently and reliably from the network to devices. A more detailed description of the protocols and messages that belong to the file delivery layer 606 is provided below. Non Real Time File Format [0070] In various aspects, the NRT content distribution system operates to provide multicast file delivery for later consumption by devices. In one implementation, the file delivery layer 606 operates to provide a NRT file transport mechanism. This mechanism can be used to transport files of any format. [0071] The NRT file transport mechanism operates to provide the following functions. 1. Encapsulates one or more presentations. 2. Leverages network System Information (SI) structures thereby enabling rich-feature support. 3. Metadata is XML based for extensibility. 4. Support for Conditional Access. [0072] FIG. 7 illustrates a general NRT file format 700 for use in aspects of the NRT content distribution system. Components of the NRT file format 700 are further defined in Table 1 below.
Table 1
NRT_FILE_DATA (702) [0073] Non real time file data (NRT_FILE_DATA) - Contains an encapsulated file.
META_DATA_TYPE (704) [0074] A meta data type (META_DATA_TYPE) - Identifies the type of meta data where a value of "1" indicates "clip definition record" XML meta data.
META_DATA_VALUE (706) [0075] A meta data value (META_DATA_VALUE) - Contains the meta data, which in this example comprises a clip definition record as further discussed below.
TOTAL_META_DATA_LENGTH (708) [0076] A total meta data length (TOTAL_META_DATA_LENGTH) - Contains the total length of the TYPE and VALUE fields. 
CRC (710) [0077] The CRC is a 16-bit CRC calculated over the entire NRT FILE including the data and meta-data parts except the CRC field. In an aspect, the CRC is calculated using a standard CRC-16-CCITT generator polynomial. [0078] FIG. 8 illustrates an exemplary clip definition record 800 for use in aspects of a NRT content distribution system. For example, the clip definition record 800 is suitable for use as the meta data value 706 described above. [0079] The clip definition record 800 comprises a record type indicator 802, a NRT presentation indicator 804, attributes 806, conditional access specifications 808, encryption information 810, content information 812, presentation language information 814, rating indicator 816 and genre indicator 818. [0080] FIGS. 9A-B show exemplary conditional access parameters 900 and content information parameters 902 that are part of the clip definition record of FIG. 8 for use in aspects of a NRT content distribution system. [0081] The conditional access parameters 900 comprise a conditional access specification indicator 904 that identifies one or more conditional access specifications, a conditional access system identifier 906 that identifies one or more CA vendors or third parties, an operator identifier 908, and private data 910 that contains ECMs associated with each identified CA system identifier 906. The content information parameters 902 comprise attributes 912. [0082] FIG. 10 shows an exemplary method 1000 for use in aspects of a NRT content distribution system. For clarity, the method 1000 is described herein with reference to the NRT content distribution system 300 shown in FIG. 3. For example, in an aspect, the NRT encryption module 304 executes one or more sets of codes to control the NRT content distribution system 300 to perform the operations described below. [0083] At block 1002, NRT content is acquired for distribution to devices on a distribution network. 
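The CRC field of paragraph [0077] can be computed with the CRC-16-CCITT generator polynomial 0x1021. A minimal bitwise sketch follows; note that the initial register value of 0xFFFF (the common CCITT-FALSE convention) is an assumption on our part, since the text specifies only the generator polynomial, and the example file bytes are placeholders.

```python
def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16-CCITT (polynomial 0x1021, MSB-first, no reflection)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; XOR in the polynomial when the top bit falls out.
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

# CRC over the data and meta-data parts, excluding the CRC field itself
# (placeholder bytes standing in for NRT_FILE_DATA and the meta data).
nrt_file_without_crc = b"NRT_FILE_DATA" + b"<clip definition record/>"
crc = crc16_ccitt(nrt_file_without_crc)
nrt_file = nrt_file_without_crc + crc.to_bytes(2, "big")
```

With these parameters the well-known check input `b"123456789"` yields 0x29B1, which a receiver can use to validate its own implementation before checking received NRT files.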
For example, the NRT content may comprise clips, presentations, data or other types of NRT content. In an aspect, the NRT content is acquired by the NRT encryption module 304. [0084] At block 1004, a control word is acquired to be used to encrypt the NRT content. In an aspect, the NRT encryption module 304 acquires the control word from the CWG module 310. [0085] At block 1006, the NRT content is encrypted with the control word to generate encrypted NRT content. In an aspect, the encryption module 304 operates to encrypt the NRT content using the control word. [0086] At block 1008, ECMs associated with one or more ECM generators are acquired. In an aspect, the NRT encryption module 304 passes the control word to the ECM generators 306 and each generator generates an ECM in response. [0087] At block 1010, the encrypted NRT content and the ECMs are delivered to devices over a distribution network. In an aspect, the network serving node 308 operates to transmit the encrypted NRT content and ECMs over the distribution network. [0088] Thus, the method 1000 operates to provide an aspect of a NRT content distribution system. It should be noted that the method 1000 represents just one implementation and that other implementations are possible within the scope of the aspects. [0089] FIG. 11 shows an exemplary method 1100 for use in aspects of a NRT content distribution system. For clarity, the method 1100 is described herein with reference to the NRT content distribution system 500 shown in FIG. 5. For example, in an aspect, the NRT file management module 504 executes one or more sets of codes to control the NRT content distribution system 500 to perform the operations described below. [0090] At block 1102, NRT content is acquired for distribution to devices on a distribution network. For example, the NRT content may comprise clips, presentations, data or other types of NRT content. In an aspect, the NRT content is acquired by the NRT file management module 504. 
[0091] At block 1104, a control word is acquired to be used to encrypt the NRT content. In an aspect, the NRT file management module 504 acquires the control word from the SCS 506. [0092] At block 1106, one or more ECMs are generated. In an aspect, each ECM generator 518 encrypts the control word using a long term key to generate the ECMs. [0093] At block 1108, one or more EMMs are generated. In an aspect, each EMM generator 520 generates an EMM that comprises the long term key. [0094] At block 1110, the NRT content is encrypted with the control word to generate encrypted NRT content. In an aspect, the NRT file management module 504 operates to encrypt the NRT content using the control word. [0095] At block 1112, the encrypted NRT content, ECMs and EMMs are delivered to devices over a distribution network. In an aspect, the NRT file management module 504 delivers the encrypted NRT content and the ECMs to a network serving node 510 that operates to transmit the encrypted NRT content and ECMs over the distribution network. Furthermore, the EMM generators 520 operate to deliver the EMMs to the network serving node 510 for transmission over the distribution network in an IP datacast. [0096] Thus, the method 1100 operates to provide an aspect of a NRT content distribution system. It should be noted that the method 1100 represents just one implementation and that other implementations are possible within the scope of the aspects. [0097] FIG. 12 shows an exemplary NRT content receiving module 1200 for use in aspects of a NRT content distribution system. For example, the NRT content receiving module 1200 is suitable for use as the NRT content receiving module 116 shown in FIG. 1. The NRT content receiving module 1200 comprises processing logic 1202, key acquisition logic 1204, rendering logic interface (I/F) 1206, decryption logic 1208, protocol stack interface 1210, and user interface 1212 all coupled to a data bus 1214. 
[0098] In an aspect, the processing logic 1202 comprises at least one of a CPU, processor, gate array, hardware logic, memory elements, virtual machine, software, and/or hardware executing software. Thus, the processing logic 1202 generally comprises logic configured to execute machine-readable instructions and to control one or more other functional elements of the NRT content receiving module 1200 using the data bus 1214. [0099] The user interface 1212 comprises hardware and/or hardware executing software that operate to allow the NRT content receiving module 1200 to interact with a device user to receive user instructions. For example, the user may request that particular NRT content be acquired for rendering. In an aspect, the user interface 1212 is controlled by the processing logic 1202. [00100] The rendering logic 1206 comprises hardware and/or hardware executing software that operate to allow the NRT content receiving module 1200 to render received NRT content on a device. For example, the rendering logic 1206 may communicate with a visual display or other device to allow a user to view selected NRT content. In an aspect, the rendering logic 1206 also comprises a memory that can be used to store NRT content for later presentation.[00101] The protocol stack interface 1210 comprises hardware and/or hardware executing software that operate to allow the NRT content receiving module 1200 to obtain encrypted NRT content, ECMs and EMMs from a device protocol stack. In an aspect, the processing logic 1202 operates to control the protocol stack interface 1210 to obtain information from the protocol stack. [00102] The key acquisition logic 1204 comprises hardware and/or hardware executing software that operate to allow the NRT content receiving module 1200 to process EMMs and ECMs to obtain a control word that can be used to decrypt encrypted NRT content. 
For example, the key acquisition logic 1204 processes EMMs to obtain a long term key that was used to encrypt a particular ECM. The long term key is then used to decrypt the ECM to obtain the control word. The control word is then passed to the decryption logic 1208. [00103] The decryption logic 1208 comprises hardware and/or hardware executing software that operate to allow the NRT content receiving module 1200 to decrypt encrypted NRT content. For example, the protocol stack interface 1210 operates to acquire encrypted NRT content from a device protocol stack. The encrypted content is passed to the decryption logic 1208 where a control word is used to decrypt the NRT content. The NRT content is then passed to the rendering logic 1206 where it is processed for rendering on a device or stored in a memory for later processing. [00104] In an aspect, the NRT content distribution system comprises one or more program instructions ("instructions") or sets of codes ("codes") stored or embodied on a machine-readable medium. The codes when executed by at least one processor, for instance, a processor at the processing logic 1202, provide the functions described herein. For example, the codes may be loaded into the processing logic 1202 from a machine-readable medium, such as a floppy disk, CDROM, memory card, FLASH memory device, RAM, ROM, or any other type of memory device or machine-readable medium that interfaces to the file receiver 1200. In another aspect, the codes may be downloaded into the file receiver 1200 from an external device or network resource. The codes, when executed, provide aspects of the NRT content distribution system as described herein. [00105] FIG. 13 shows an exemplary method 1300 for use in aspects of a NRT content distribution system. For clarity, the method 1300 is described herein with reference to the NRT content receiving module 1200 shown in FIG. 12. 
For example, in an aspect, the processing logic 1202 executes one or more sets of codes to control the NRT content receiving module 1200 to perform the operations described below. [00106] At block 1302, NRT content is subscribed for. In an aspect, the processing logic 1202 operates to subscribe to receive selected NRT content from one or more content vendors. [00107] At block 1304, EMM(s) associated with the subscribed for NRT content are received. For example, as part of the subscription process, the processing logic 1202 obtains EMMs from the appropriate content vendors. In an aspect, the EMMs are obtained by the protocol stack interface 1210 and passed to the key acquisition logic 1204. [00108] At block 1306, encrypted NRT content and associated ECM are obtained. In an aspect, the processing logic 1202 operates to control the protocol stack interface 1210 to obtain the encrypted NRT content and ECM(s). [00109] At block 1308, the received ECM is passed to the key acquisition logic 1204 where the key provided in the EMM is used to decrypt the ECM to obtain a control word that was used to encrypt the encrypted NRT content. [00110] At block 1310, the received encrypted NRT content is decrypted with the control word. In an aspect, the decryption logic 1208 operates to decrypt the encrypted NRT content using the control word. [00111] At block 1312, the decrypted NRT content is passed to the rendering logic interface 1206 where it is rendered or stored for later presentation. [00112] Thus, the method 1300 operates to provide an aspect of a NRT content distribution system. It should be noted that the method 1300 represents just one implementation and that other implementations are possible within the scope of the aspects. [00113] FIG. 14 shows a NRT content delivery module 1400 for use in aspects of a NRT content distribution system. For example, the NRT content delivery module 1400 is suitable for use as the NRT content delivery module 112 shown in FIG. 1. 
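The receive-side key chain of method 1300 (blocks 1304-1310) — the EMM yields the long term key, which decrypts the ECM to recover the control word, which in turn decrypts the content — can be sketched end to end. The toy XOR keystream cipher and the subscriber key are illustrative assumptions standing in for the receiver's real provisioned keys and ciphers.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream via counter-mode hashing (not a real broadcast cipher)."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher; encryption and decryption are identical."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# --- head-end side (setup, so the receiver has something to process) ---
subscriber_key = secrets.token_bytes(16)        # provisioned at subscription
long_term_key = secrets.token_bytes(16)
control_word = secrets.token_bytes(16)
content = b"subscribed NRT presentation"

emm = xor_crypt(subscriber_key, long_term_key)  # received at block 1304
ecm = xor_crypt(long_term_key, control_word)    # received at block 1306
encrypted_content = xor_crypt(control_word, content)

# --- receiver side (key acquisition logic 1204 / decryption logic 1208) ---
ltk = xor_crypt(subscriber_key, emm)            # EMM -> long term key
cw = xor_crypt(ltk, ecm)                        # block 1308: ECM -> control word
plain = xor_crypt(cw, encrypted_content)        # block 1310: decrypt content
```

The decrypted content can then be rendered or stored for later presentation, as at block 1312.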
In an aspect, the NRT content delivery module 1400 is implemented by at least one processor comprising one or more modules configured to provide aspects of a NRT content distribution system as described herein. For example, each module comprises hardware and/or hardware executing software. [00114] The NRT content delivery module 1400 comprises a first module comprising means (1402) for encrypting NRT content with a control word to generate encrypted NRT content, which in an aspect comprises file management module 504. The NRT content delivery module 1400 also comprises a second module comprising means (1404) for providing the control word to one or more entitlement control message (ECM) generators, which in an aspect comprises the SCS 506. The NRT content delivery module 1400 also comprises a third module comprising means (1406) for receiving one or more ECMs from the one or more ECM generators, respectively, wherein each ECM comprises a unique encryption of the control word to provide conditional access to the control word, which in an aspect comprises the SCS 506. The NRT content delivery module 1400 also comprises a fourth module comprising means (1408) for providing the encrypted NRT content and the one or more ECMs for transmission over a distribution network, which in an aspect comprises the file management module 504. [00115] FIG. 15 shows a NRT content receiving module 1500 for use in aspects of a NRT content distribution system. For example, the NRT content receiving module 1500 is suitable for use as the NRT content receiving module 116 shown in FIG. 1. In an aspect, the NRT content receiving module 1500 is implemented by at least one processor comprising one or more modules configured to provide aspects of a NRT content distribution system as described herein. For example, each module comprises hardware and/or hardware executing software. 
[00116] The NRT content receiving module 1500 comprises a first module comprising means (1502) for receiving encrypted NRT content that has been encrypted with a control word, which in an aspect comprises the processing logic 1202. The NRT content receiving module 1500 also comprises a second module comprising means (1504) for receiving one or more entitlement control messages (ECMs), which in an aspect comprises the processing logic 1202. The NRT content receiving module 1500 also comprises a third module comprising means (1506) for decrypting a selected ECM with a long term key to obtain the control word, which in an aspect comprises the key acquisition logic 1204. The NRT content receiving module 1500 also comprises a fourth module comprising means (1508) for decrypting the encrypted NRT content to obtain decrypted NRT content, which in an aspect comprises the decryption logic 1208. [00117] The various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. 
[00118] The steps of a method or algorithm described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal. [00119] The description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects, e.g., in an instant messaging service or any general wireless data communication applications, without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. The word "exemplary" is used exclusively herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. 
[00120] Accordingly, while aspects of a NRT content distribution system have been illustrated and described herein, it will be appreciated that various changes can be made to the aspects without departing from their spirit or essential characteristics. Therefore, the disclosures and descriptions herein are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
A memory device includes a perpendicular magnetic tunnel junction (pMTJ) stack between a bottom electrode and a top electrode. In an embodiment, the pMTJ includes a fixed magnet, a tunnel barrier above the fixed magnet and a free magnet structure on the tunnel barrier. The free magnet structure includes a first free magnet on the tunnel barrier and a second free magnet above the first free magnet, wherein at least a portion of the first free magnet proximal to an interface with the second free magnet includes a transition metal. The free magnet structure having a transition metal between the first and the second free magnets advantageously improves the switching efficiency of the MTJ, while maintaining a thermal stability of at least 50kT.
A memory device, comprising: a bottom electrode; a top electrode; and a magnetic tunnel junction (MTJ) between the bottom electrode and the top electrode, the MTJ comprising: a fixed magnet; a free magnet structure comprising: a first free magnet; and a second free magnet adjacent the first free magnet, wherein at least a portion of the first free magnet proximal to an interface with the second free magnet comprises a transition metal; and a tunnel barrier between the fixed magnet and the free magnet structure.
The memory device of claim 1, wherein the transition metal comprises at least one of tungsten, hafnium, tantalum or molybdenum.
The memory device of claim 1, further comprising a coupling layer between the first free magnet and the second free magnet, wherein the coupling layer is discontinuous, has a thickness no more than 0.1nm and comprises the transition metal.
The memory device of any of claims 1-3, wherein at least a portion of the first free magnet is in direct contact with the second free magnet, in at least one discontinuity of the coupling layer.
The memory device of claim 1, wherein the first free magnet has a first perpendicular magnetic anisotropy and the second free magnet has a second perpendicular magnetic anisotropy.
The memory device of claim 5, wherein the first perpendicular magnetic anisotropy is greater than the second perpendicular magnetic anisotropy.
The memory device of claim 1, wherein the first free magnet has a thickness that is greater than a thickness of the second free magnet and wherein the free magnet structure has a combined total thickness that is less than 3nm.
The memory device of claim 1, further comprising a cap layer comprising metal and oxygen between the free magnet structure and the top electrode, wherein the cap layer is on the side of the free magnet structure that is opposite to the tunnel barrier and wherein the cap layer has a thickness of at least 1.5nm.
A method of fabricating a memory device, comprising: forming a bottom 
electrode layer; forming a material layer stack on the bottom electrode layer, the forming comprising: forming a fixed magnetic layer above the bottom electrode layer; forming a tunnel barrier layer on the fixed magnetic layer; forming a first free magnetic layer on the tunnel barrier layer; forming a coupling layer on the first free magnetic layer, wherein the coupling layer comprises a transition metal and has a thickness no more than 0.1nm; and forming a second free magnetic layer on the coupling layer; forming a top electrode on the material layer stack; and etching the material layer stack to form a memory device.
The method of claim 9, wherein forming the coupling layer comprises depositing at least one of tungsten, hafnium, tantalum or molybdenum.
The method of claim 10, wherein depositing the coupling layer comprises sputter depositing the coupling layer, and wherein sputter depositing the coupling layer intermixes the transition metal with constituents in at least an upper portion of the first free magnetic layer.
The method of claim 11, wherein sputter depositing a 0.1nm coupling layer forms discontinuities in the coupling layer.
The method of claim 10, wherein forming the second free magnetic layer comprises sputter depositing CoFeB and wherein the sputter depositing intermixes the CoFeB with the transition metal of the coupling layer.
An apparatus comprising: a transistor above a substrate, the transistor comprising: a drain contact coupled to a drain; a source contact coupled to a source; a gate contact coupled to a gate; a memory device coupled with the drain contact, comprising: a top electrode; a bottom electrode; a magnetic tunnel junction (MTJ) between the top electrode and the bottom electrode, the MTJ comprising: a fixed magnet; a free magnet structure comprising: a first free magnet; and a second free magnet adjacent the first free magnet, wherein at least a portion of the first free magnet proximal to an interface with the second free magnet comprises a transition metal; and a 
tunnel barrier between the fixed magnet and the free magnet structure.
The apparatus of claim 14, further comprising a power supply coupled to the transistor.
BACKGROUND
For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory devices on a chip, lending to the fabrication of products with increased functionality. The drive for ever-more functionality, however, is not without issue. It has become increasingly necessary to rely on innovative fabrication techniques to meet the exceedingly tight tolerance requirements imposed by scaling.
Non-volatile embedded memory devices with perpendicular magnetic tunnel junctions (pMTJs), e.g., on-chip embedded memory with non-volatility, can enable energy and computational efficiency. However, the technical challenges of assembling a pMTJ stack to form functional memory devices present formidable roadblocks to commercialization of this technology today. Specifically, increasing the thermal stability of the pMTJ along with increasing switching efficiency are some of the challenges in assembling a viable pMTJ stack.
BRIEF DESCRIPTION OF THE DRAWINGS
The material described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Also, various physical features may be represented in their simplified "ideal" forms and geometries for clarity of discussion, but it is nevertheless to be understood that practical implementations may only approximate the illustrated ideals.
For example, smooth surfaces and square intersections may be drawn in disregard of finite roughness, corner-rounding, and imperfect angular intersections characteristic of structures formed by nanofabrication techniques. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.Figure 1A illustrates a cross-sectional view of a memory device, in accordance with an embodiment of the present disclosure.Figure 1B illustrates an enhanced cross-sectional view of a free magnet structure, depicting discontinuities in a coupling layer between two free magnetic layers, in accordance with an embodiment of the present disclosure.Figure 1C illustrates a cross-sectional view depicting the direction of magnetization in a free magnet relative to the direction of magnetization in a fixed magnetic layer, in accordance with an embodiment of the present disclosure.Figure 1D illustrates a cross-sectional view depicting the direction of magnetization in a free magnet relative to the direction of magnetization in a fixed magnetic layer, in accordance with an embodiment of the present disclosure.Figure 1E illustrates a cross-sectional view of individual layers of a synthetic antiferromagnetic structure, in accordance with an embodiment of the present disclosure.Figure 2 illustrates a flow diagram of a method to fabricate a memory device.Figure 3A illustrates a conductive interconnect formed above a substrate.Figure 3B illustrates the structure of Figure 3A following the formation of a first conductive layer on the conductive interconnect followed by the formation of a plurality of layers of a pMTJ material layer stack.Figure 4A illustrates a cross-sectional view of the structure in Figure 3B following the deposition of a coupling layer on the first free magnetic layer, followed by the formation of a second free magnetic layer on the coupling layer.Figure 4B illustrates an enhanced cross-sectional view of
discontinuities in the coupling layer.Figure 5A illustrates a cross-sectional view of the structure in Figure 4A following the formation of a capping layer on the second free magnetic layer, formation of a conductive layer on the capping layer, followed by the formation of a mask on the conductive layer.Figure 5B illustrates a cross-sectional view of the structure in Figure 5A following the patterning of the conductive layer and the pMTJ material layer stack to form a pMTJ device.Figure 5C illustrates a cross-sectional view of the structure in Figure 5B following the formation of a dielectric spacer adjacent to the pMTJ.Figure 6 illustrates a cross-sectional view of a SOT memory device having one terminal coupled to a first transistor, a second terminal coupled to a second transistor, and a third terminal coupled to a bit line.Figure 7 illustrates a computing device in accordance with embodiments of the present disclosure.Figure 8 illustrates an integrated circuit (IC) structure that includes one or more embodiments of the present disclosure.
DESCRIPTION OF THE EMBODIMENTS
Perpendicular-MTJ (pMTJ) devices with enhanced stability and high switching efficiency, and methods of fabrication, are described. In the following description, numerous specific details are set forth, such as novel structural schemes and detailed fabrication methods, in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known features, such as transistor operations and switching operations associated with embedded memory, are described in lesser detail in order to not unnecessarily obscure embodiments of the present invention.
Furthermore, it is to be understood that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.Certain terminology may also be used in the following description for the purpose of reference only, and thus is not intended to be limiting. For example, terms such as "upper", "lower", "above", and "below" refer to directions in the drawings to which reference is made. Terms such as "front", "back", "rear", and "side" describe the orientation and/or location of portions of the component within a consistent but arbitrary frame of reference which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.In the following description, numerous details are set forth. However, it will be apparent to one skilled in the art that the present disclosure may be practiced without these specific details. In some instances, well-known methods and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present disclosure. Reference throughout this specification to "an embodiment" or "one embodiment" or "some embodiments" means that a particular feature, structure, function, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Thus, the appearances of the phrase "in an embodiment" or "in one embodiment" or "some embodiments" in various places throughout this specification are not necessarily referring to the same embodiment of the disclosure. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments.
For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive.As used in the description and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.The terms "coupled" and "connected," along with their derivatives, may be used herein to describe functional or structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical, optical, or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical or electrical contact with each other, and/or that the two or more elements co-operate or interact with each other (e.g., as in a cause and effect relationship).The terms "over," "under," "between," and "on" as used herein refer to a relative position of one component or material with respect to other components or materials where such physical relationships are noteworthy. For example, in the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two materials or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material.
Similar distinctions are to be made in the context of component assemblies.As used throughout this description, and in the claims, a list of items joined by the term "at least one of" or "one or more of" can mean any combination of the listed terms. Unless otherwise specified in the explicit context of their use, the terms "substantially equal," "about equal" and "approximately equal" mean that there is no more than incidental variation between two things so described. In the art, such variation is typically no more than +/-10% of a predetermined target value.An MTJ device functions as a variable resistor where the resistance of the device may switch between a high resistance state and a low resistance state. The resistance state of an MTJ device is defined by the relative orientation of magnetization of two magnetic layers (fixed and free) that are separated by a tunnel barrier. When the magnetizations of the two magnetic layers have orientations that are in the same direction, the MTJ device is said to be in a low resistance state. Conversely, when the magnetizations of the two magnetic layers have orientations that are in opposite directions, the MTJ device is said to be in a high resistance state. In an embodiment, resistance switching is brought about by passing a critical amount of spin polarized current, or switching current, through the MTJ device so as to influence the direction of magnetization of the free magnetic layer to align with the magnetization of the fixed magnetic layer. Such an alignment may be brought about by a torque exerted by the spin polarized current on the magnetization of the free magnetic layer. By changing the direction of the current, the direction of magnetization in the free magnetic layer may be reversed relative to that of the fixed magnetic layer.
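The parallel/anti-parallel resistance logic described above can be captured in a short sketch. The helper function and the numeric values below (a 2 kOhm parallel-state resistance and a 150% TMR ratio) are purely illustrative assumptions for this sketch, not values taken from this disclosure:

```python
def mtj_resistance(r_parallel, tmr_ratio, aligned):
    """Resistance of an MTJ given the relative orientation of its magnets.

    r_parallel -- low-state resistance (ohms), when magnetizations align
    tmr_ratio  -- tunneling magnetoresistance ratio, (R_AP - R_P) / R_P
    aligned    -- True if free and fixed magnetizations point the same way
    """
    if aligned:
        return r_parallel                       # low resistance state
    return r_parallel * (1.0 + tmr_ratio)       # high resistance state

# Hypothetical values: 2 kOhm parallel state, 150% TMR ratio.
r_low = mtj_resistance(2e3, 1.5, aligned=True)
r_high = mtj_resistance(2e3, 1.5, aligned=False)
```

Reversing the write-current direction flips the free-magnet orientation, which in this model simply toggles `aligned` and hence the read-out resistance.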
Since the free magnetic layer does not need power to retain its relative orientation of magnetization, the resistance state of the MTJ device is retained even when there is no power applied to the MTJ device. For this reason, the MTJ belongs to a class of memory known as non-volatile memory.Integrating a non-volatile memory device such as an MTJ device onto an access transistor enables the formation of embedded memory for system on chip or for other applications. However, approaches to integrate an MTJ device onto an access transistor present challenges that have become far more formidable with scaling. Examples of such challenges range from improving the thermal stability of MTJ devices against perturbing forces and reducing switching current, to enabling patterning of MTJ devices at less than 40nm feature sizes. As scaling continues, the need for smaller memory devices to fit into a scaled cell size has driven the industry in the direction of "perpendicular" MTJ or pMTJ. pMTJ based memory devices have a fixed magnet and a free magnet each having a magnetic anisotropy that is perpendicular with respect to a horizontal plane of the free magnet. However, while pMTJ devices have higher stability for small memory device sizes, maintaining stability along with improving other device parameters such as switching efficiency continues to be a challenge.A free magnet in a pMTJ device may include a multilayer stack having a layer of non-magnetic material between a pair of layers including magnetic materials, to increase thermal stability and improve retention characteristics for functionality as a memory device. Multiple layers of magnetic materials separated by a non-magnetic layer may be dipole coupled, and the dipole coupled layers of magnetic materials undergo magnetization switching simultaneously. To an extent, the thermal stability of a pMTJ device depends on the strength of the perpendicular anisotropy of the free magnetic layers in the pMTJ material layer stack.
The strength of the perpendicular anisotropy depends on the quality and size (volume) of the free magnets, to an extent on the number and quality of interfaces between magnetic and non-magnetic layers, and on parameters such as the thickness of the non-magnetic layer. While a thick non-magnetic layer may be used to increase perpendicular interfacial anisotropy and thermal stability, another magnetic parameter such as switching efficiency may become diminished. The switching efficiency of a free magnet may be defined as the ratio between the thermal activation barrier height, Eb, in the free magnetic layer and the threshold switching current, Ic0. A thick non-magnetic layer may be any non-magnetic layer that is substantially equal to, or thicker than, 0.2nm.In accordance with embodiments of the present disclosure, the switching efficiency may be increased by thinning the non-magnetic layer below 0.2nm. By thinning the non-magnetic layer, a parameter known as magnetic damping may be reduced as the non-magnetic layer becomes discontinuous. Magnetic damping acts against the spin transfer torque in the free magnets and causes an increase in the switching current and a reduction in switching efficiency.In accordance with embodiments of the present disclosure, a memory device includes a perpendicular magnetic tunnel junction (pMTJ) stack between a bottom electrode and a top electrode. In an embodiment, the pMTJ includes a fixed magnet, a tunnel barrier above the fixed magnet and a free magnet structure on the tunnel barrier. The free magnet structure includes a first free magnet on the tunnel barrier and a second free magnet above the first free magnet, wherein at least a portion of the first free magnet proximal to an interface with the second free magnet includes a transition metal.
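The switching-efficiency figure of merit defined above, the ratio Eb/Ic0, can be illustrated with a minimal sketch. The barrier heights and threshold currents used here are hypothetical placeholders, chosen only to show that lowering the threshold current at a fixed barrier height, as reduced damping is expected to do, raises the efficiency:

```python
def switching_efficiency(barrier_height_kt, threshold_current_ua):
    """Switching efficiency: thermal activation barrier Eb (in kT units)
    divided by the threshold switching current Ic0 (in microamps)."""
    return barrier_height_kt / threshold_current_ua

# Hypothetical comparison: same 60 kT barrier, but the thinner
# (discontinuous) non-magnetic layer reduces damping and hence Ic0.
thick_layer_eff = switching_efficiency(60.0, 40.0)  # thicker spacer, higher Ic0
thin_layer_eff = switching_efficiency(60.0, 25.0)   # reduced damping, lower Ic0
```

Under these assumed numbers the thinned-spacer device has the larger Eb/Ic0, consistent with the qualitative argument in the text.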
The free magnet structure having a transition metal between the first and the second free magnets may advantageously improve the switching efficiency of the pMTJ, while maintaining a thermal stability (e.g., of at least 50kT). In some embodiments, the transition metal includes at least one of tungsten, hafnium, tantalum or molybdenum. In other embodiments, the free magnet structure includes a coupling layer that includes the transition metal, where the coupling layer is at most 0.1nm thick. Such a layer may be sufficiently thin to effectively reduce damping, but sufficiently thick to preserve interfacial perpendicular anisotropy at each of the interfaces between the coupling layer and the first free magnet, and between the coupling layer and the second free magnet. A coupling layer having a thickness of 0.1nm may be discontinuous over some portions between the first and the second free magnets, enabling portions of the first and second free magnets in one or more discontinuities to be in direct contact with each other. However, the discontinuities may not be so prevalent or substantial as to prevent dipole coupling between the first and second free magnets (as might occur for significantly thicker coupling layers).Figure 1A is an illustration of a cross-sectional view of a memory device 100 in accordance with an embodiment of the present disclosure. The memory device 100 includes a bottom electrode 101, a top electrode 120, and a magnetic tunnel junction (MTJ) 104 between the bottom electrode 101 and the top electrode 120. The MTJ 104 includes a fixed magnet 112 above the bottom electrode 101, a free magnet structure 106 above the fixed magnet 112 and a tunnel barrier 110 between the fixed magnet 112 and the free magnet structure 106. In an exemplary embodiment, as disclosed herein, the magnetic tunnel junction (MTJ) 104 is a perpendicular MTJ (pMTJ).
In some such embodiments, the free magnet structure 106 and fixed magnet 112 of the pMTJ 104 have perpendicular magnetic anisotropy. The free magnet structure 106 advantageously improves the switching efficiency of the pMTJ 104, while maintaining a thermal stability of at least 50kT. The free magnet structure 106 includes a first free magnet 107 on the tunnel barrier 110, and a second free magnet 108 on the first free magnet 107, wherein at least a portion of the free magnet 107 proximal to an interface 103 with the free magnet 108 includes a transition metal. In some embodiments, the transition metal includes at least one of tungsten, hafnium, tantalum or molybdenum.In some embodiments, the pMTJ 104 further includes a coupling layer 109 between the free magnet 107 and the free magnet 108. In the illustrative embodiment, the coupling layer 109 couples the free magnet 107 to the free magnet 108 via dipole coupling.In some embodiments, the coupling layer 109 has a thickness that is no more than 0.1nm and includes the transition metal. In some embodiments, a coupling layer 109 having a thickness less than 0.1nm is discontinuous, as illustrated in Figure 1B . When there are discontinuities in the coupling layer, at least a portion of the free magnet 108 is in direct contact with the free magnet 107, in at least one discontinuity 111 or in a plurality of discontinuities in the coupling layer 109 as shown in Figure 1B . In the illustrative embodiment, at least one discontinuity 111 is filled with free magnet portion 108A.In an embodiment, the free magnet 107 has a thickness between 0.5nm and 2.0nm for pMTJ devices. In an embodiment, the free magnet 108 has a thickness between 0.4nm and 1.5nm for pMTJ devices. In an embodiment, the free magnet 107 has a thickness that is greater than a thickness of the free magnet 108. In such an embodiment, the free magnet structure 106 has a combined total thickness that is less than 3nm.
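The thickness windows just enumerated (first free magnet 0.5nm-2.0nm, second free magnet 0.4nm-1.5nm, coupling layer no more than 0.1nm, first magnet thicker than the second, combined total under 3nm) can be collected into a single consistency check. This is only a sketch that encodes the quoted ranges; the example thicknesses passed in are hypothetical:

```python
def free_stack_ok(t_first_nm, t_second_nm, t_coupling_nm):
    """Check a free-magnet stack against the thickness windows above."""
    return (0.5 <= t_first_nm <= 2.0          # first free magnet window
            and 0.4 <= t_second_nm <= 1.5     # second free magnet window
            and t_coupling_nm <= 0.1          # coupling layer at most 0.1nm
            and t_first_nm > t_second_nm      # first thicker than second
            and t_first_nm + t_second_nm < 3.0)  # combined total under 3nm

# Hypothetical example stack: 1.2nm / 0.1nm coupling / 0.8nm.
valid = free_stack_ok(1.2, 0.8, 0.1)
```

A stack whose second magnet is thicker than the first (e.g. 1.0nm over 1.5nm) fails the check, matching the stated ordering of thicknesses.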
A combined total thickness of less than 3nm can be sufficient to maintain perpendicular magnetic anisotropy in the free magnet structure 106.Referring again to Figure 1A , in an exemplary embodiment, the free magnet 107 has a first perpendicular magnetic anisotropy and the free magnet 108 has a second perpendicular magnetic anisotropy. In some embodiments, the first perpendicular magnetic anisotropy is greater than the second perpendicular magnetic anisotropy. In other embodiments, the first perpendicular magnetic anisotropy is substantially similar to the second perpendicular magnetic anisotropy. In yet another embodiment, the second perpendicular magnetic anisotropy is greater than the first perpendicular magnetic anisotropy.The pMTJ 104 further includes a cap 114 between the free magnet structure 106 and the top electrode 120. In the illustrative embodiment, the cap 114 is on the side of the free magnet structure 106 that is opposite to the tunnel barrier 110. The cap 114 may be a non-metal such as an oxide. In embodiments, the cap 114 is an oxide that includes a metal and oxygen, for instance In2O3-x, VO2, V2O3, WO2, RuO, AlOx or MgO. In other examples, the cap 114 is a doped conductive oxide such as Sn-doped In2O3 (ITO), In- or Ga-doped ZnO or metal doped MgO. In an embodiment, the cap 114 has a thickness of at least 1.5nm. In an embodiment, when the free magnet 108 includes iron, the cap 114 is a source of oxygen that enables oxygen-iron hybridization at an interface 105 located between an uppermost surface of the free magnet 108 and a lowermost surface of the cap 114.
The oxygen-iron hybridization at the interface 105 enables interfacial perpendicular anisotropy in the free magnet structure 106.In an embodiment, tunnel barrier 110 is composed of a material suitable for allowing electron current having a majority spin to pass through tunnel barrier 110, while impeding, at least to some extent, electron current having a minority spin from passing through tunnel barrier 110. Thus, tunnel barrier 110 (or spin filter layer) may also be referred to as a tunneling layer for electron current of a particular spin orientation. In an embodiment, tunnel barrier 110 includes a material such as, but not limited to, oxygen and at least one of magnesium (e.g., a magnesium oxide, or MgO), or aluminum (e.g., an aluminum oxide such as Al2O3). In the illustrative embodiment, the tunnel barrier 110 including MgO has a crystal orientation that is (001) and is lattice matched to the fixed magnet 112 below the tunnel barrier 110 and the free magnet 107 above the tunnel barrier 110. In an embodiment, a free magnet 107 including Co100-x-yFexBy is highly lattice matched to a tunnel barrier 110 including MgO. Lattice matching a crystal structure of the free magnet 107 with the tunnel barrier 110 enables a higher tunneling magnetoresistance (TMR) ratio in the pMTJ 104. In an embodiment, tunnel barrier 110 is MgO and has a thickness in the range of 1nm to 2nm.In an embodiment, the fixed magnet 112 includes magnetic materials with sufficient perpendicular magnetization. In an embodiment, the fixed magnet 112 of the pMTJ 104 can include alloys such as CoFe, CoFeB, FeB. The alloys of CoFe, CoFeB, FeB may include doping of tungsten, tantalum, or molybdenum to promote high perpendicular anisotropy. Alternatively, the alloys of CoFe, CoFeB, FeB may include thin inserts of tungsten, tantalum or molybdenum to promote high perpendicular anisotropy.
In an embodiment, the fixed magnet 112 comprises Co100-x-yFexBy, where X and Y each represent atomic percent, further where X is between 50-80 and Y is between 10-40, and further where the sum of X and Y is less than 100. In one specific embodiment, X is 60 and Y is 20. In an embodiment, the fixed magnet 112 is FeB, where the concentration of boron is between 10-40 atomic percent of the total composition of the FeB alloy. In further embodiments, there are additional layers of high-anisotropy CoPt or CoNi or CoPd multilayers and/or alloys to provide a further perpendicular anisotropy boost to the alloys of two or more of Co, Fe, B. In an embodiment, the fixed magnet 112 has a thickness that is between 1nm and 3nm.Figure 1C illustrates a cross-sectional view depicting the free magnet structure 106 of the pMTJ 104 having a direction of magnetization (denoted by the direction of the arrow 154) that is anti-parallel to a direction of magnetization (denoted by the direction of the arrow 156) in the fixed magnet 112. When the direction of magnetization 154 in the free magnet structure 106 is opposite (anti-parallel) to the direction of magnetization 156 in the fixed magnet 112, the pMTJ 104 device is said to be in a high resistance state.Conversely, Figure 1D illustrates a cross-sectional view depicting the free magnet structure 106 of the pMTJ 104 having a direction of magnetization (denoted by the direction of the arrow 154) that is parallel to a direction of magnetization (denoted by the direction of the arrow 156) in the fixed magnet 112.
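The Co100-X-YFeXBY composition ranges quoted above lend themselves to a small validity check. The helper below simply encodes the stated bounds (X between 50-80, Y between 10-40, X + Y less than 100); it is a sketch, not a materials design rule beyond what the text states:

```python
def cofeb_composition_ok(x_fe, y_b):
    """Validate a Co(100-X-Y)Fe(X)B(Y) alloy against the quoted ranges:
    X in [50, 80] atomic percent, Y in [10, 40] atomic percent, X + Y < 100."""
    return (50 <= x_fe <= 80
            and 10 <= y_b <= 40
            and (x_fe + y_b) < 100)

# The specific embodiment cited in the text: X = 60, Y = 20 (Co20Fe60B20).
specific_embodiment_ok = cofeb_composition_ok(60, 20)
```

A composition such as X = 70, Y = 35 fails the last condition, since X + Y would reach 105 and leave no cobalt fraction.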
When the direction of magnetization 154 in the free magnet structure 106 is parallel to the direction of magnetization 156 in the fixed magnet 112, the pMTJ 104 is said to be in a low resistance state.In an embodiment, the free magnet structure 106 and the fixed magnet 112 can have approximately similar thicknesses, and an injected spin polarized current which changes the direction of the magnetization 154 in the free magnet structure 106 can also affect the magnetization 156 of the fixed magnet 112. In an embodiment, to make the fixed magnet 112 more resistant to accidental flipping, the fixed magnet 112 has a higher magnetic anisotropy than the free magnet structure 106. In another embodiment, as illustrated in Figure 1A , the memory device 100 includes a synthetic antiferromagnetic (SAF) structure 118 between the bottom electrode 101 and the fixed magnet 112. A SAF structure 118 may minimize a stray magnetic field impinging on the free magnet 108 and may prevent an accidental change in the direction of magnetization 156 in the fixed magnet 112.Figure 1E illustrates a cross-sectional view of the SAF structure 118 in accordance with an embodiment of the present disclosure. In an embodiment, the SAF structure 118 includes a non-magnetic layer 118B sandwiched between a first pinning ferromagnet 118A and a second pinning ferromagnet 118C as depicted in Figure 1E . The first pinning ferromagnet 118A and the second pinning ferromagnet 118C are anti-ferromagnetically coupled to each other. In an embodiment, the first pinning ferromagnet 118A includes a layer of a magnetic metal such as Co, Ni, Fe, alloys such as CoFe, CoFeB, or alloys of magnetic metals such as Co, Ni, Fe, or a bilayer of magnetic/non-magnetic metals such as but not limited to Co/Pd or Co/Pt. In an embodiment, the non-magnetic layer 118B includes a ruthenium or an iridium layer.
In an embodiment, the second pinning ferromagnet 118C includes a layer of a magnetic metal such as Co, Ni, Fe, alloys such as CoFe, CoFeB, or alloys of magnetic metals such as Co, Ni, Fe, or a bilayer of magnetic/non-magnetic metals such as but not limited to Co/Pd or Co/Pt. In an embodiment, a ruthenium based non-magnetic layer 118B has a thickness between 0.3nm and 1.0nm to ensure that the coupling between the first pinning ferromagnet 118A and the second pinning ferromagnet 118C is anti-ferromagnetic in nature.It is to be appreciated that an additional non-magnetic spacer layer may exist between the fixed magnet 112 and the SAF structure 118 (not illustrated in Figure 1A ). A non-magnetic spacer layer enables coupling between the SAF structure 118 and the fixed magnet 112. In an embodiment, a non-magnetic spacer layer may include a metal such as Ta, Ru or Ir.Referring again to Figure 1A , in an embodiment, the top electrode 120 includes a material such as Ta or TiN. In an embodiment, the top electrode 120 has a thickness between 5nm and 70nm. In some embodiments, the bottom electrode 101 includes one or more layers including materials such as but not limited to TaN, Ru or TiN.In an embodiment, the substrate 150 includes a suitable semiconductor material such as but not limited to, single crystal silicon, polycrystalline silicon and silicon on insulator (SOI). In another embodiment, substrate 150 includes other semiconductor materials such as germanium, silicon germanium or a suitable group III-N or group III-V compound. Logic devices such as MOSFET transistors and access transistors may be formed on the substrate 150. Logic devices such as access transistors may be integrated with memory devices such as SOT memory devices to form embedded memory.
Embedded memory including SOT memory devices and logic MOSFET transistors can be combined to form a functional integrated circuit such as a system on chip.Figure 2 illustrates a flow diagram of a method to fabricate a memory device. The method 200 begins at operation 210 by forming a first electrode in a dielectric layer above a substrate. The method 200 continues at operation 220 with the formation of a pMTJ material layer stack on the bottom electrode. In exemplary embodiments, all layers in the pMTJ material layer stack are blanket deposited in-situ without breaking vacuum. In one embodiment, forming the pMTJ material layer stack includes deposition of a SAF layer on the bottom electrode, deposition of a fixed magnetic layer on the SAF layer, deposition of a tunnel barrier on the fixed magnetic layer, deposition of a first free magnetic layer on the tunnel barrier, deposition of a coupling layer on the first free magnetic layer, deposition of a second free magnetic layer on the coupling layer and deposition of a capping layer on the second free magnetic layer. The method 200 continues at operation 230 with patterning of the pMTJ material layer stack to form a memory device. The method 200 concludes at operation 240 with the deposition of a dielectric spacer layer and patterning to form a dielectric spacer adjacent to sidewalls of the memory device.Figures 3A-5B illustrate cross-sectional views representing various operations in a method of fabricating a memory device, such as the memory device 100 in accordance with embodiments of the present disclosure.Figure 3A illustrates a conductive interconnect 304 formed above a substrate 150. In some embodiments, the conductive interconnect 304 is formed in a dielectric layer 302, above a substrate, such as is shown. In an embodiment, the conductive interconnect 304 includes a barrier layer 304A and a fill metal 304B.
In some examples, the barrier layer 304A includes a material such as tantalum nitride or ruthenium. In some examples, the fill metal 304B includes a material such as copper or tungsten. In other examples, the conductive interconnect 304 is fabricated using a subtractive etch process when materials other than copper are utilized. In an embodiment, the dielectric layer 302 includes a material such as but not limited to silicon dioxide, silicon nitride, silicon carbide, or carbon doped silicon oxide. The dielectric layer 302 may have an uppermost surface that is substantially co-planar with an uppermost surface of the conductive interconnect 304, as is illustrated. In some embodiments, the conductive interconnect 304 is electrically connected to a separate circuit element such as a transistor (not shown).Figure 3B illustrates the structure of Figure 3A following the formation of a conductive layer 305 on the conductive interconnect 304 followed by the formation of a plurality of layers of a pMTJ material layer stack 340. In an embodiment, the conductive layer 305 includes a material that is the same or substantially the same as the material of the bottom electrode 101.In an embodiment, one or more SAF layers 307 that form a SAF structure are formed on the conductive layer 305. In some embodiments, the one or more SAF layers 307 are blanket deposited on the conductive layer 305 using a PVD process. In some embodiments, the one or more SAF layers 307 utilized to form a SAF structure are the same or substantially the same as the one or more layers in the SAF structure 118, described above.In an embodiment, a fixed magnetic layer 309 is deposited on the one or more SAF layers 307. The fixed magnetic layer 309 may be deposited using a PVD process or a plasma enhanced chemical vapor deposition process. In an embodiment, the fixed magnetic layer 309 includes a material that is the same or substantially the same as the material of the fixed magnet 112.
In an embodiment, the deposition process forms a fixed magnetic layer 309 including CoFeB that is amorphous. In one example, the fixed magnetic layer 309 is deposited to a thickness between 0.9nm and 2.0nm to fabricate a pMTJ. During an in-situ deposition process, a tunnel barrier layer 311 is then formed on the fixed magnetic layer 309, and a first free magnetic layer 313 is formed on the tunnel barrier layer 311 to partially complete formation of a pMTJ material layer stack 340.In some embodiments, a tunnel barrier layer 311 is blanket deposited on the fixed magnetic layer 309. In an embodiment, the tunnel barrier layer 311 is a material including magnesium and oxygen or a material including aluminum and oxygen. In an exemplary embodiment, the tunnel barrier layer 311 is a layer of MgO and is deposited using a reactive sputter process. In an embodiment, the reactive sputter process is carried out at room temperature. In an embodiment, the tunnel barrier layer 311 is deposited to a thickness between 0.8nm and 1nm. In some examples, the deposition process is carried out in a manner that yields a tunnel barrier layer 311 having an amorphous structure. In some such examples, the amorphous tunnel barrier layer 311 becomes crystalline after performing a high temperature anneal process to be described further below. In other embodiments, the tunnel barrier layer 311 is crystalline as deposited.In an embodiment, a free magnetic layer 313 is blanket deposited on an uppermost surface of the tunnel barrier layer 311. In an embodiment, the deposition process includes a physical vapor deposition (PVD) or a plasma enhanced chemical vapor deposition process. In an embodiment, the PVD deposition process includes an RF or a DC sputtering process. In an exemplary embodiment, the free magnetic layer 313 is Co100-x-yFexBy, where X and Y each represent atomic percent, further where X is between 50-80 and Y is between 10-40, and further where the sum of X and Y is less than 100.
In some embodiments, the free magnetic layer 313 includes a material that is the same or substantially the same as the material of the fixed magnet 116 described above. In some examples, the free magnetic layer 313 may be deposited to a thickness between 0.9nm and 2.0nm. A thickness range between 0.9nm and 2.0nm may be sufficiently thin to provide the perpendicular magnetic anisotropy required to fabricate a perpendicular MTJ.

Figure 4A illustrates a cross-sectional view of the structure in Figure 3B following the deposition of a coupling layer 315 on the free magnetic layer 313, followed by the formation of a second free magnetic layer 317 on the coupling layer 315. In an embodiment, the coupling layer 315 includes a transition metal and has a thickness of no more than 0.1nm. Forming the coupling layer 315 includes depositing a transition metal including at least one of tungsten, hafnium, tantalum or molybdenum. The coupling layer 315 may be deposited by a physical vapor deposition (PVD) process. In some embodiments, the PVD process involves sputter depositing the material of the coupling layer 315. While the deposition energy and time duration of the sputter deposition process may be controlled, in some examples, sputter depositing the coupling layer 315 intermixes the transition metal with constituents in at least an upper portion of the free magnetic layer 313. It is to be appreciated that a substantial portion of the transition metal adheres to an upper surface of the free magnetic layer 313. In some embodiments, a sputter process involving deposition of a 0.1nm coupling layer forms discontinuities in the coupling layer 315, as illustrated in the enhanced cross-sectional illustration of Figure 4B.
The enhanced cross-sectional illustration represents a portion 327 of the free magnetic layer 313, the coupling layer 315 and the free magnetic layer 317. An interface between the coupling layer 315 and the free magnetic layer 313 provides an interfacial anisotropy contribution to the overall perpendicular magnetic anisotropy of the free magnetic layer 313, in spite of discontinuities in the coupling layer 315.

In embodiments where there are discontinuities 329 in the coupling layer 315, portions 317A of the free magnetic layer 317 may be directly on portions of the free magnetic layer 313, as illustrated in Figure 4B.

Referring again to Figure 4A, in some examples, the free magnetic layer 317 includes a material that is the same or substantially the same as the material of the free magnet 108. In an embodiment, formation of the free magnetic layer 317 may involve a sputter deposition process, such as a sputter deposition of a layer of CoFeB. In such an embodiment, the sputter deposition process intermixes the CoFeB with the transition metal of the coupling layer 315.

Figure 5A illustrates a cross-sectional view of the structure in Figure 4A following the formation of the remaining layers of the pMTJ material layer stack 340. In an embodiment, the capping layer 319 is deposited using a reactive sputter deposition technique and includes a material such as the material of the cap 114. In an embodiment, the capping layer 319 and the tunnel barrier layer 311 both include magnesium and oxygen. In some such embodiments, the capping layer 319 includes a layer of magnesium and oxygen that functions as a conductive oxide rather than as a tunnel barrier.
In an embodiment, the capping layer 319 is deposited to a thickness of at least 1.0nm. A thickness of at least 1.0nm may advantageously counteract a nominal reduction in thermal stability of a pMTJ that includes a 0.1nm thin coupling layer in a free magnet structure.

In an embodiment, the conductive layer 321 is blanket deposited on the surface of the capping layer 319. In an embodiment, the conductive layer 321 includes a material suitable to provide a hardmask for etching the pMTJ material layer stack 340. In an embodiment, the conductive layer 321 includes one or more layers of material such as Ta, TaN or TiN. In an embodiment, the thickness of the conductive layer 321 ranges from 30nm to 70nm.

In an embodiment, after all the layers in the pMTJ material layer stack 340 are deposited, an anneal is performed. In an embodiment, the anneal process enables formation of a crystalline MgO tunnel barrier layer 311. In an embodiment, the anneal is performed immediately post deposition but before forming the mask on the conductive layer 321. A post-deposition anneal of the pMTJ material layer stack 340 is carried out in a furnace at a temperature between 300 and 350 degrees Celsius in a forming gas environment. In an embodiment, the forming gas includes a mixture of H2 and N2 gas. In an embodiment, the annealing process promotes solid phase epitaxy of the fixed magnetic layer 309 to follow a crystalline template of the tunnel barrier layer 311 (e.g., MgO) that is directly above the fixed magnetic layer 309. In an embodiment, the anneal also promotes solid phase epitaxy of the free magnetic layer 313 to follow a crystalline template of the tunnel barrier layer 311 (e.g., MgO) that is directly below the free magnetic layer 313, in the illustrative embodiment.
<001> Lattice matching between the tunnel barrier layer 311 and the fixed magnetic layer 309 and <001> lattice matching between the tunnel barrier layer 311 and the free magnetic layer 313 enable a TMR ratio of at least 90% to be obtained in the pMTJ material layer stack 340.

After the anneal, a mask 323 is formed on the conductive layer 321. In an embodiment, the mask 323 defines a shape and size of a memory device and a location where the memory device is to be subsequently formed with respect to the conductive interconnect 304. In some embodiments, the mask 323 is formed by a lithographic process. In other embodiments, the mask 323 includes a dielectric material that has been patterned.

Figure 5B illustrates a cross-sectional view of the structure in Figure 5A following the patterning of the conductive layer 321 and the pMTJ material layer stack 340. In an embodiment, the patterning process includes etching the conductive layer 321 by a plasma etch process to form a top electrode 120. In an embodiment, the plasma etch process possesses sufficient ion energy and chemical reactivity to render vertical etched sidewalls of the top electrode 120.

In an embodiment, the plasma etch process is then continued to pattern the layers of the pMTJ material layer stack 340 to form a memory device 300. The plasma etch process etches the various layers in the pMTJ material layer stack 340 to form the cap 114, the free magnet 108, the coupling layer 109, the free magnet 107, the tunnel barrier 110, the fixed magnet 112, and the SAF structure 118. The plasma etch process is continued to pattern and form a bottom electrode 101. The plasma etch process exposes the underlying dielectric layer 302. In some embodiments, depending on the etch parameters, the memory device 300 may have sidewalls that are tapered, as indicated by the dashed lines 325.
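The TMR ratio of at least 90% discussed above can be related to the junction resistances through the conventional TMR definition, TMR (%) = 100 × (R_AP − R_P) / R_P, where R_AP and R_P are the resistances in the antiparallel and parallel magnetization states. The snippet below sketches this standard formula with illustrative resistance values; neither the helper nor the values come from the disclosure.

```python
# Sketch of the conventional TMR definition (standard in the MTJ literature,
# not stated explicitly in the text above):
#   TMR (%) = 100 * (R_AP - R_P) / R_P
def tmr_percent(r_parallel: float, r_antiparallel: float) -> float:
    """TMR ratio in percent from parallel/antiparallel junction resistances."""
    return 100.0 * (r_antiparallel - r_parallel) / r_parallel

# Illustrative values: a junction with R_P = 1.0 kOhm and R_AP = 1.9 kOhm
# meets the "at least 90%" figure quoted above.
print(tmr_percent(1000.0, 1900.0))  # 90.0
```

A higher TMR ratio widens the read margin between the two memory states, which is why the <001> lattice matching that boosts it is emphasized.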
In the illustrative embodiment, the memory device 300 constitutes a perpendicular magnetic tunnel junction (pMTJ) memory device, such as the pMTJ memory device 100 or the pMTJ memory device 300.

Figure 5C illustrates a cross-sectional view of the structure in Figure 5B following the formation of a dielectric spacer 326 adjacent to the memory device 300. In an embodiment, a dielectric spacer layer is deposited on the memory device 300 and on the uppermost surface of the dielectric layer 302. In an embodiment, the dielectric spacer layer is deposited without a vacuum break following the plasma etch process to prevent oxidation of magnetic layers in the pMTJ 104. In an embodiment, the dielectric spacer layer includes a material such as, but not limited to, silicon nitride, carbon doped silicon nitride or silicon carbide. In an embodiment, the dielectric spacer layer includes an insulator layer that does not have oxygen to minimize oxidation of the magnetic layers 112, 107 and 108. In an embodiment, the dielectric spacer layer is etched by a plasma etch process, forming the dielectric spacer 326 on sidewalls of the memory device 300.

Figure 6 illustrates a system 600 including a power supply 680 connected to a memory device 100 coupled with a transistor 601. In an embodiment, a memory device such as the memory device 300 includes a pMTJ 104 on a bottom electrode 101, described in association with Figures 1A-1E.

In an embodiment, the transistor 601 has a source region 604, a drain region 606 and a gate 602. The transistor 601 further includes a gate contact 614 above and electrically coupled to the gate 602, a source contact 616 above and electrically coupled to the source region 604, and a drain contact 618 above and electrically coupled to the drain region 606, as is illustrated in Figure 6. The memory device 100 includes a bottom electrode 101, a top electrode 120, and a pMTJ 104 between the bottom electrode 101 and the top electrode 120.
The pMTJ 104 includes a fixed magnet 112 above the bottom electrode 101, a free magnet structure 106 above the fixed magnet 112 and a tunnel barrier 110 between the fixed magnet 112 and the free magnet structure 106. The free magnet structure 106 includes a first free magnet 107 on the tunnel barrier 110, and a second free magnet 108 on the first free magnet 107, wherein at least a portion of the free magnet 107 proximal to an interface with the free magnet 108 includes a transition metal. In some embodiments, the transition metal includes at least one of tungsten, hafnium, tantalum or molybdenum. The memory device 100 further includes the cap 114 between the top electrode 120 and the free magnet structure 106. In the illustrative embodiment, the memory device 100 further includes a SAF structure 118 above the bottom electrode 101.

In the illustrative embodiment, the memory device 100 is electrically coupled with the drain contact 618 of the transistor 601. An MTJ contact 628 is on and electrically coupled with the top electrode 120 of the MTJ 104.

In an embodiment, the underlying substrate 611 represents a surface used to manufacture integrated circuits. A suitable substrate 611 includes a material such as single crystal silicon, polycrystalline silicon and silicon on insulator (SOI), as well as substrates formed of other semiconductor materials. In some embodiments, the substrate 611 is the same as or substantially the same as the substrate 150. The substrate 611 may also include semiconductor materials, metals, dielectrics, dopants, and other materials commonly found in semiconductor substrates.

In an embodiment, the access transistor 601 associated with the substrate 611 is a metal-oxide-semiconductor field-effect transistor (MOSFET or simply a MOS transistor) fabricated on the substrate 611. In various implementations of the invention, the access transistor 601 may be a planar transistor, a nonplanar transistor, or a combination of both.
Nonplanar transistors include FinFET transistors such as double-gate transistors and tri-gate transistors, and wrap-around or all-around gate transistors such as nanoribbon and nanowire transistors.

In an embodiment, the transistor 601 of the substrate 611 includes a gate 602. In some embodiments, the gate 602 includes at least two layers, a gate dielectric layer 602A and a gate electrode 602B. The gate dielectric layer 602A may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide (SiO2) and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric layer include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric layer 602A to improve its quality when a high-k material is used.

The gate electrode 602B of the access transistor 601 of the substrate 611 is formed on the gate dielectric layer 602A and may consist of at least one P-type work function metal or N-type work function metal, depending on whether the transistor is to be a PMOS or an NMOS transistor.
In some implementations, the gate electrode 602B may consist of a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a conductive fill layer.

For a PMOS transistor, metals that may be used for the gate electrode 602B include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a work function that is between about 4.6 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a work function that is between about 3.6 eV and about 4.2 eV.

In some implementations, the gate electrode may consist of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In another implementation, at least one of the metal layers that form the gate electrode 602B may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In further implementations of the invention, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures.
For example, the gate electrode 602B may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.

In some implementations of the invention, a pair of sidewall spacers 610 is on opposing sides of the gate 602, bracketing the gate stack. The sidewall spacers 610 may be formed from a material such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride. Processes for forming sidewall spacers include deposition and etching process operations. In an alternate implementation, a plurality of spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack.

As is well known in the art, the source region 604 and drain region 606 are formed within the substrate adjacent to the gate stack of each MOS transistor. The source region 604 and drain region 606 are generally formed using either an implantation/diffusion process or an etching/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorous, or arsenic may be ion-implanted into the substrate to form the source region 604 and drain region 606. An annealing process that activates the dopants and causes them to diffuse further into the substrate typically follows the ion implantation process. In the latter process, the substrate 611 may first be etched to form recesses at the locations of the source and drain regions. An epitaxial deposition process may then be carried out to fill the recesses with material that is used to fabricate the source region 604 and drain region 606. In some implementations, the source region 604 and drain region 606 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some implementations, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorous.
In further embodiments, the source region 604 and drain region 606 may be formed using one or more alternate semiconductor materials such as germanium or a group III-V material or alloy. And in further embodiments, one or more layers of metal and/or metal alloys may be used to form the source region 604 and drain region 606. In the illustrative embodiment, an isolation 608 is adjacent to the source region 604, drain region 606 and portions of the substrate 611.

In an embodiment, a dielectric layer 620 is adjacent to the source contact 616, the drain contact 618 and the gate contact 614. In the illustrative embodiment, a source metallization structure 624 is coupled with the source contact 616 and a gate metallization structure 626 is coupled with the gate contact 614. In the illustrated embodiment, a dielectric layer 650 is adjacent to the gate metallization structure 626, source metallization structure 624, memory device 100 and MTJ contact 628.

In an embodiment, the source contact 616, the drain contact 618, gate contact 614, gate metallization structure 626, source metallization structure 624 and MTJ contact 628 each include a multi-layer stack. In an embodiment, the multi-layer stack includes two or more distinct layers of metal such as a layer of Ti, Ru or Al and a conductive cap on the layer of metal. The conductive cap may include a material such as Co, W or Cu.

The isolation 608 and dielectric layers 620 and 650 may include any material that has sufficient dielectric strength to provide electrical isolation such as, but not limited to, silicon dioxide, silicon nitride, silicon oxynitride, carbon doped nitride and carbon doped oxide.

Figure 7 illustrates a computing device 700 in accordance with embodiments of the present disclosure. As shown, computing device 700 houses a motherboard 702. Motherboard 702 may include a number of components, including but not limited to a processor 701 and at least one communication chip 705.
Processor 701 is physically and electrically coupled to the motherboard 702. In some implementations, communications chip 705 is also physically and electrically coupled to motherboard 702. In further implementations, communications chip 705 is part of processor 701.

Depending on its applications, computing device 700 may include other components that may or may not be physically and electrically coupled to motherboard 702. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset 706, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

Communications chip 705 enables wireless communications for the transfer of data to and from computing device 700. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. Communications chip 705 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. Computing device 700 may include a plurality of communication chips 704 and 705.
For instance, a first communication chip 705 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 704 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

Processor 701 of the computing device 700 includes an integrated circuit die packaged within processor 701. In some embodiments, the integrated circuit die of processor 701 includes one or more memory devices, such as the memory device 100, described in association with Figures 1A, 1B, 1C, 1D, and 1E in accordance with embodiments of the present disclosure. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

Communications chip 705 also includes an integrated circuit die packaged within communications chip 705. In another embodiment, the integrated circuit die of communication chips 704 and 705 includes a memory array with memory cells including at least one memory device such as a memory device 100 including an MTJ 104.

In various examples, one or more communication chips 704 and 705 may also be physically and/or electrically coupled to the motherboard 702. In further implementations, communications chip 704 may be part of processor 701. Depending on its applications, computing device 700 may include other components that may or may not be physically and electrically coupled to motherboard 702.
These other components may include, but are not limited to, volatile memory (e.g., DRAM) 707, 708, non-volatile memory (e.g., ROM) 710, a graphics CPU 712, flash memory, a global positioning system (GPS) device 713, a compass 714, a chipset 706, an antenna 716, a power amplifier 709, a touchscreen controller 711, a touchscreen display 717, a speaker 715, a camera 703, and a battery 718, as illustrated, and other components such as a digital signal processor, a crypto processor, an audio codec, a video codec, an accelerometer, a gyroscope, and a mass storage device (such as a hard disk drive, solid state drive (SSD), compact disk (CD), digital versatile disk (DVD), and so forth), or the like.

In further embodiments, any component housed within computing device 700 and discussed above may contain a stand-alone integrated circuit memory die that includes one or more arrays of memory cells including one or more memory devices, such as a memory device 100, including a pMTJ 104 on a conductive layer including Ru and W, built in accordance with embodiments of the present disclosure.

In various implementations, the computing device 700 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, an IoT device in automotive applications or a digital video recorder. In further implementations, the computing device 700 may be any other electronic device that processes data.

Figure 8 illustrates an integrated circuit (IC) structure 800 that includes one or more embodiments of the disclosure. The integrated circuit (IC) structure 800 is an intervening substrate used to bridge a first substrate 802 to a second substrate 804. The first substrate 802 may be, for instance, an integrated circuit die.
The second substrate 804 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of an integrated circuit (IC) structure 800 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, an integrated circuit (IC) structure 800 may couple an integrated circuit die to a ball grid array (BGA) 807 that can subsequently be coupled to the second substrate 804. In some embodiments, the first and second substrates 802/804 are attached to opposing sides of the integrated circuit (IC) structure 800. In other embodiments, the first and second substrates 802/804 are attached to the same side of the integrated circuit (IC) structure 800. And in further embodiments, three or more substrates are interconnected by way of the integrated circuit (IC) structure 800.

The integrated circuit (IC) structure 800 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the integrated circuit (IC) structure may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.

The integrated circuit (IC) structure may include metal interconnects 808 and vias 810, including but not limited to through-silicon vias (TSVs) 810. The integrated circuit (IC) structure 800 may further include embedded devices 814, including both passive and active devices.
Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, and device structures including transistors, such as a transistor 601 (described in association with Figure 6, not shown in Figure 8) coupled with at least one memory device such as the memory device 100, where at least a portion of the free magnet 107 proximal to an interface with the free magnet 108 includes a transition metal. The integrated circuit (IC) structure 800 may further include embedded devices 814 such as one or more resistive random-access devices, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the integrated circuit (IC) structure 800. In accordance with embodiments of the present disclosure, apparatuses or processes disclosed herein may be used in the fabrication of the integrated circuit (IC) structure 800.

Accordingly, one or more embodiments of the present disclosure relate generally to the fabrication of embedded microelectronic memory. The microelectronic memory may be non-volatile, wherein the memory can retain stored information even when not powered. One or more embodiments of the present disclosure relate to the fabrication of a perpendicular magnetic tunnel junction memory device such as the pMTJ devices 100 and 300. The pMTJ devices 100 and 300 may be used in embedded non-volatile memory applications.

Thus, embodiments of the present disclosure include magnetic memory devices and methods to form the same.

In a first example, a memory device includes a bottom electrode, a top electrode and a magnetic tunnel junction (MTJ) between the bottom electrode and the top electrode.
The MTJ includes a fixed magnet, a free magnet structure including a first free magnet and a second free magnet adjacent the first free magnet, wherein at least a portion of the first free magnet proximal to an interface with the second free magnet comprises a transition metal, and a tunnel barrier between the fixed magnet and the free magnet structure.

In second examples, for any of the first examples, the transition metal comprises at least one of tungsten, hafnium, tantalum or molybdenum.

In third examples, for any of the first through second examples, the memory device further includes a coupling layer between the first free magnet and the second free magnet, wherein the coupling layer is discontinuous, has a thickness of no more than 0.1nm and comprises the transition metal.

In fourth examples, for any of the first through third examples, at least a portion of the first free magnet is in direct contact with the second free magnet in at least one discontinuity of the coupling layer.

In fifth examples, for any of the first through fourth examples, the first free magnet has a first perpendicular magnetic anisotropy and the second free magnet has a second perpendicular magnetic anisotropy.

In sixth examples, for any of the fifth examples, the first perpendicular magnetic anisotropy is greater than the second perpendicular magnetic anisotropy.

In seventh examples, for any of the first through sixth examples, the first free magnet has a thickness that is greater than a thickness of the second free magnet, and wherein the free magnet structure has a combined total thickness that is less than 3nm.

In eighth examples, for any of the first through seventh examples, the first free magnet comprises cobalt, iron and boron and the second free magnet comprises cobalt, iron and boron.

In ninth examples, for any of the first through eighth examples, the memory device further includes a cap layer including metal and oxygen between the free magnet structure and the top electrode, and wherein the
cap layer is on the side of the free magnet structure that is opposite to the tunnel barrier.

In tenth examples, for any of the ninth examples, the cap layer has a thickness of at least 1.5nm.

In eleventh examples, a method of fabricating a memory device includes forming a bottom electrode layer and forming a material layer stack on the bottom electrode layer. Forming the material layer stack includes forming a fixed magnetic layer above the bottom electrode layer, forming a tunnel barrier layer on the fixed magnetic layer, forming a first free magnetic layer on the tunnel barrier layer, forming a coupling layer on the first free magnetic layer, wherein the coupling layer includes a transition metal and has a thickness of no more than 0.1nm, and forming a second free magnetic layer on the coupling layer. The method further includes forming a top electrode on the material layer stack and etching the material layer stack to form a memory device.

In twelfth examples, for any of the eleventh examples, forming the coupling layer includes depositing at least one of tungsten, hafnium, tantalum or molybdenum.

In thirteenth examples, for any of the eleventh through the twelfth examples, depositing the coupling layer includes sputter depositing the coupling layer.

In a fourteenth example, for any of the thirteenth examples, sputter depositing the coupling layer intermixes the transition metal with constituents in at least an upper portion of the first free magnetic layer.

In a fifteenth example, for any of the thirteenth examples, sputter depositing a 0.1nm coupling layer forms discontinuities in the coupling layer.

In sixteenth examples, for any of the eleventh through the fifteenth examples, forming the second free magnetic layer includes sputter depositing CoFeB.

In seventeenth examples, for any of the sixteenth examples, the sputter depositing intermixes the CoFeB with the transition metal of the coupling layer.

In eighteenth examples, an apparatus includes a transistor above a substrate.
The transistor includes a drain contact coupled to a drain, a source contact coupled to a source, and a gate contact coupled to a gate, and a memory device is coupled with the drain contact, the memory device including a top electrode, a bottom electrode and a magnetic tunnel junction (MTJ) between the top electrode and the bottom electrode. The MTJ includes a fixed magnet, a free magnet structure including a first free magnet and a second free magnet adjacent the first free magnet, where at least a portion of the first free magnet proximal to an interface with the second free magnet includes a transition metal, and a tunnel barrier between the fixed magnet and the free magnet structure.

In nineteenth examples, for any of the eighteenth examples, the apparatus further includes a power supply coupled to the transistor.

In a twentieth example, for any of the eighteenth through nineteenth examples, the fixed magnet is above the drain contact, the tunnel barrier is above the fixed magnet and the free magnet structure is above the tunnel barrier.
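The thickness constraints recited in the third and seventh examples (coupling layer no more than 0.1nm, first free magnet thicker than the second, and a combined free magnet structure under 3nm) can be sketched as a simple consistency check. The helper function and the sample thicknesses below are hypothetical illustrations, not values taken from the disclosure beyond the stated limits.

```python
# Sketch (hypothetical bookkeeping): verifies a candidate free magnet stack
# against the constraints recited above:
#   - coupling layer thickness no more than 0.1 nm,
#   - first free magnet thicker than the second free magnet,
#   - combined total thickness of the free magnet structure under 3 nm.
def free_stack_ok(t_first_nm: float, t_coupling_nm: float, t_second_nm: float) -> bool:
    total = t_first_nm + t_coupling_nm + t_second_nm
    return (t_coupling_nm <= 0.1           # coupling layer limit
            and t_first_nm > t_second_nm   # first free magnet is thicker
            and total < 3.0)               # combined thickness budget

print(free_stack_ok(1.6, 0.1, 1.0))  # True: total 2.7 nm, first magnet thicker
print(free_stack_ok(2.0, 0.1, 1.0))  # False: total 3.1 nm exceeds the budget
```

Such a check mirrors how the claimed ranges bound the design space: each individual layer limit must hold and the combined budget must still be met.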
Nanowire channel structures of continuously stacked nanowires for complementary metal oxide semiconductor (CMOS) devices are disclosed.In one aspect, an exemplary CMOS device includes a nanowire channel structure that includes a plurality of continuously stacked nanowires.Vertically adjacent nanowires are connected at narrow top and bottom end portions of each nanowire.Thus, the nanowire channel structure comprises a plurality of narrow portions that are narrower than a corresponding plurality of central portions.A wrap-around gate material is disposed around the nanowire channel structure, including the plurality of narrow portions, without entirely wrapping around any nanowire therein.The exemplary CMOS device provides, for example, a larger effective channel width and better gate control than a conventional fin field-effect transistor (FET) (FinFET) of a similar footprint.The exemplary CMOS device further provides, for example, a shorter nanowire channel structure than a conventional nanowire FET.
What is claimed is: 1. A complementary metal oxide-semiconductor (CMOS) device, comprising: a substrate; a source disposed on the substrate; a drain disposed on the substrate; and a channel body interposed between the source and the drain, the channel body comprising: a channel comprising a nanowire channel structure comprising: a plurality of nanowires arranged in a continuously stacked arrangement, each of the plurality of nanowires comprising: a top end portion; a bottom end portion; and a central portion disposed between the top end portion and the bottom end portion, the central portion comprising a greater width than the top end portion and the bottom end portion; and a plurality of separation areas, each disposed between central portions of vertically adjacent nanowires among the plurality of nanowires, and each formed by the bottom end portion of a higher nanowire of vertically adjacent nanowires and the top end portion of a lower nanowire of the vertically adjacent nanowires; a dielectric material layer disposed adjacent to the plurality of nanowires and extending into the plurality of separation areas disposed between the central portions of the vertically adjacent nanowires among the plurality of nanowires; and a gate material disposed adjacent to the dielectric material layer and extending into the plurality of separation areas disposed between the central portions of the vertically adjacent nanowires among the plurality of nanowires.2. The CMOS device of claim 1, wherein the central portion, the top end portion, and the bottom end portion of each of the plurality of nanowires comprise body-centered cubic (BCC) facet sidewalls to form a substantially hexagonal cross section for each of the plurality of nanowires.3.
The CMOS device of claim 2, wherein the BCC facet sidewalls of the central portion of each of the plurality of nanowires comprise BCC <110> facet sidewalls, and wherein the BCC facet sidewalls of the top end portion and the bottom end portion of each of the plurality of nanowires comprise BCC <111> facet sidewalls.4. The CMOS device of claim 1, wherein the plurality of nanowires comprise one of Silicon Germanium (SiGe), Silicon (Si), or Germanium (Ge).5. The CMOS device of claim 1, wherein the nanowire channel structure further comprises an isolation layer formed on the substrate over a portion of a nanowire of the nanowire channel structure, the isolation layer configured to isolate a channel material within the substrate from an electrostatic field applied to the channel.6. The CMOS device of claim 1, wherein the gate material does not completely surround at least one nanowire among the plurality of nanowires.7. The CMOS device of claim 1, wherein the gate material does not completely surround any nanowire among the plurality of nanowires.8. 
The CMOS device of claim 1, the channel further comprising:a second nanowire channel structure comprising a second plurality of nanowires arranged in a continuously stacked arrangement, each of the second plurality of nanowires comprising:a top end portion;a bottom end portion; and a central portion disposed between the top end portion and the bottom end portion, the central portion comprising a greater width than the top end portion and the bottom end portion; anda second plurality of separation areas, each disposed between central portions of vertically adjacent nanowires among the second plurality of nanowires, and each formed by the bottom end portion of a higher nanowire of the vertically adjacent nanowires and the top end portion of a lower nanowire of the vertically adjacent nanowires;the channel body further comprising a second dielectric material layer disposed adjacent to the second plurality of nanowires and extending into the second plurality of separation areas disposed between the central portions of the vertically adjacent nanowires among the second plurality of nanowires; andthe gate material further disposed adjacent to the second dielectric material layer and extending into the second plurality of separation areas disposed between the central portions of the vertically adjacent nanowires among the second plurality of nanowires.9. The CMOS device of claim 1 integrated into a semiconductor die.10. 
The CMOS device of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a mobile phone; a cellular phone; a smart phone; a tablet; a phablet; a server; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; and an automobile.11. A method of fabricating a complementary metal oxide-semiconductor (CMOS) device, comprising:providing a semiconductor die for a CMOS device comprising: a source formed on a substrate;a drain formed on the substrate; anda fin structure comprising a width and a length, the fin structure interposed lengthwise between the source and the drain, and comprising a first lateral side and a second lateral side;disposing a plurality of block co-polymer layers on the substrate adjacent to the fin structure, each of the plurality of block co-polymer layers comprising one of a first material of a first etching sensitivity and a second material of a second etching sensitivity different from the first etching sensitivity, the plurality of block co-polymer layers disposed in an alternating configuration between a block co-polymer layer of the first material and a block co-polymer layer of the second material;removing each block co-polymer layer of the first material to form a plurality of exposed portions of the fin structure and a plurality of masked portions of the fin structure, each masked portion of the plurality of masked portions being masked by a block co-polymer layer of the second material;etching a plurality of trenches in the fin structure in each of the plurality of exposed portions of the fin structure, 
along the length of the fin structure on one of the first lateral side and the second lateral side of the fin structure to form a plurality of continuously stacked nanowires separated by a plurality of separation areas, each of the plurality of separation areas comprising:a first trench of the plurality of trenches on the first lateral side; anda second trench of the plurality of trenches on the second lateral side; andremoving each block co-polymer layer of the second material to expose a central portion of the plurality of continuously stacked nanowires.12. The method of claim 11, wherein each block co-polymer layer of the second material forms the central portion of each of the plurality of continuously stacked nanowires comprising body-centered cubic (BCC) facet sidewalls; andwherein the first trench and the second trench of the plurality of separation areas form a top end portion and a bottom end portion comprising BCC facet sidewalls for corresponding nanowires of the plurality of continuously stacked nanowires to form a substantially hexagonal cross section for each of the plurality of continuously stacked nanowires.13. The method of claim 12, wherein the BCC facet sidewalls of the central portion of each of the plurality of continuously stacked nanowires are formed as BCC <110> facet sidewalls, and wherein the BCC facet sidewalls of the top end portion and the bottom end portion of each of the plurality of continuously stacked nanowires are formed as BCC <111> facet sidewalls.14. The method of claim 13, wherein etching the plurality of trenches in the fin structure along the length of the fin structure comprises exposing the fin structure to a wet chemical for a predetermined period of time.15. 
The method of claim 14, wherein the predetermined period of time is determined based on a time required to etch the fin structure and stop on a BCC <111> facet to form the BCC facet sidewalls of the top end portion and the bottom end portion of each of the plurality of continuously stacked nanowires as BCC <111> facet sidewalls that converge to a horizontal center of the fin structure.16. The method of claim 11, wherein disposing the plurality of block co-polymer layers on the substrate adjacent to the fin structure comprises disposing self-organizing material comprising the first material and the second material.17. The method of claim 11, further comprising disposing a capping layer above the plurality of block co-polymer layers before removing each block co-polymer layer of the first material.18. The method of claim 11, further comprising forming an isolation layer over a portion of the fin structure above the substrate to isolate a material of the fin structure disposed within the substrate from an electrostatic field applied above the substrate to the plurality of continuously stacked nanowires.19. The method of claim 18, wherein forming the isolation layer comprises implanting oxygen at the portion of the fin structure above the substrate to oxidize the portion of the fin structure and form the isolation layer at the portion of the fin structure.20. The method of claim 19, further comprising recessing the substrate before implanting the oxygen at a lower portion of the fin structure above the substrate.21. The method of claim 20, wherein recessing the substrate comprises etching the substrate.22. The method of claim 21, further comprising disposing a dielectric material layer adjacent to the plurality of continuously stacked nanowires and extending into each of the plurality of trenches forming the plurality of separation areas.23.
The method of claim 22, further comprising disposing a gate material adjacent to the dielectric material layer and extending into each of the plurality of trenches forming the plurality of separation areas.24. The method of claim 11, further comprising disposing a dielectric material layer adjacent to the plurality of continuously stacked nanowires and extending into each of the plurality of trenches forming the plurality of separation areas.25. The method of claim 24, further comprising disposing a gate material adjacent to the dielectric material layer and extending into each of the plurality of trenches forming the plurality of separation areas.26. A complementary metal oxide semiconductor (CMOS) device, comprising: a means for providing a substrate; a means for forming a source disposed on the substrate; a means for forming a drain disposed on the substrate; and a means for forming a channel body interposed between the means for forming the source and the means for forming the drain, the channel body comprising: a means for forming a channel comprising a nanowire channel structure comprising: a plurality of nanowires arranged in a continuously stacked arrangement, each of the plurality of nanowires comprising: a top end portion; a bottom end portion; and a central portion disposed between the top end portion and the bottom end portion, the central portion comprising a greater width than the top end portion and the bottom end portion; and a plurality of separation areas, each disposed between central portions of vertically adjacent nanowires among the plurality of nanowires, and each formed by the bottom end portion of a higher nanowire of the vertically adjacent nanowires and the top end portion of a lower nanowire of the vertically adjacent nanowires; and a means for forming a dielectric material layer disposed adjacent to the plurality of nanowires and extending into portions of the plurality of separation areas disposed between the central portions of the vertically
adjacent nanowires among the plurality of nanowires; and a means for forming a gate material disposed adjacent to the means for forming the dielectric material layer and extending into the portions of the plurality of separation areas disposed between the central portions of the vertically adjacent nanowires among the plurality of nanowires.27. The CMOS device of claim 26, wherein the central portion, the top end portion, and the bottom end portion of each of the plurality of nanowires comprise body-centered cubic (BCC) facet sidewalls to form a substantially hexagonal cross section for each of the plurality of nanowires.28. The CMOS device of claim 27, wherein the BCC facet sidewalls of the central portion of each of the plurality of nanowires comprise BCC <110> facet sidewalls, and wherein the BCC facet sidewalls of the top end portion and the bottom end portion of each of the plurality of nanowires comprise BCC <111> facet sidewalls.
NANOWIRE CHANNEL STRUCTURES OF CONTINUOUSLY STACKED NANOWIRES FOR COMPLEMENTARY METAL OXIDE SEMICONDUCTOR (CMOS) DEVICES PRIORITY CLAIM [0001] The present application claims priority to U.S. Provisional Patent Application Serial No. 62/242,170 filed on October 15, 2015 and entitled "CONTINUOUSLY STACKED NANOWIRE STRUCTURES FOR COMPLEMENTARY METAL OXIDE SEMICONDUCTOR (CMOS) DEVICES," the contents of which are incorporated herein by reference in their entirety. [0002] The present application also claims priority to U.S. Patent Application Serial No. 15/198,763 filed on June 30, 2016 and entitled "NANOWIRE CHANNEL STRUCTURES OF CONTINUOUSLY STACKED NANOWIRES FOR COMPLEMENTARY METAL OXIDE SEMICONDUCTOR (CMOS) DEVICES," the contents of which are incorporated herein by reference in their entirety. RELATED APPLICATION [0003] The present application is related to U.S. Patent Application Serial No. 15/198,892 filed on June 30, 2016 and entitled "NANOWIRE CHANNEL STRUCTURES OF CONTINUOUSLY STACKED HETEROGENEOUS NANOWIRES FOR COMPLEMENTARY METAL OXIDE SEMICONDUCTOR (CMOS) DEVICES," the contents of which are incorporated herein by reference in their entirety. BACKGROUND I. Field of the Disclosure [0004] This disclosure relates generally to complementary metal oxide semiconductor (CMOS) devices, and more specifically to implementing nanowire channel structures in CMOS devices. II. Background [0005] Transistors are essential components in modern electronic devices, and large numbers of transistors are employed in integrated circuits (ICs) therein. For example, components such as central processing units (CPUs) and memory systems each employ a large quantity of transistors for logic circuits and memory devices. [0006] As electronic devices become more complex in functionality, so does the need to include a greater number of transistors in such devices.
But as electronic devices are provided in increasingly smaller packages, such as in mobile devices for example, there is a need to provide a greater number of transistors in a smaller IC chip. This increase in the number of transistors is achieved in part through continued efforts to miniaturize transistors in ICs (i.e., placing increasingly more transistors into the same amount of space). In particular, node sizes in ICs are being scaled down by a reduction in minimum metal line width in the ICs (e.g., 65 nanometers (nm), 45 nm, 28 nm, 20 nm, etc.). As a result, the gate lengths of planar transistors are also scalably reduced, thereby reducing the channel length of the transistors and interconnects. Reduced channel length in planar transistors has the benefit of increasing drive strength (i.e., increased drain current) and providing smaller parasitic capacitances resulting in reduced circuit delay. However, as channel length in planar transistors is reduced such that the channel length is of the same order of magnitude as the depletion layer widths, short channel effects (SCEs) can occur that degrade performance. More specifically, SCEs in planar transistors cause increased current leakage, reduced threshold voltage, and/or threshold voltage roll-off (i.e., reduced threshold voltage at shorter gate lengths), and therefore, reduced gate control.[0007] In this regard, alternative transistor designs to planar transistors have been developed. These alternative transistor designs provide for a gate material to wrap around at least a portion of a channel structure to provide better gate control over an active channel therein. Better gate control provides reduced current leakage and increased threshold voltage compared to a planar transistor of a similar footprint. An example is a complementary metal oxide semiconductor (CMOS) fin field-effect transistor (FET) (FinFET). 
A FinFET provides a channel structure formed by a thin Silicon (Si) "fin," and a gate that wraps around top and side portions of the fin. Figure 1A illustrates a conventional CMOS FinFET 100 ("FinFET 100") as an example. The FinFET 100 includes a substrate 102, a source 104, and a drain 106. The FinFET 100 further includes fin structures 108 and 110 disposed above the substrate 102 between the source 104 and the drain 106 to form a channel structure 112. The fin structures 108 and 110 are made of a conductive material, such as Silicon (Si) for example. The FinFET 100 further includes spacer layers 114 and 116 disposed to isolate the source 104 and the drain 106, respectively, from a "wrap-around" gate 118 disposed over the fin structures 108 and 110 in a later fabrication stage. Accordingly, the gate 118 wraps around top portions and side portions (not shown) of the fin structures 108 and 110.[0008] Figure 2A illustrates a cross section of the FinFET 100 across an A-A line illustrated in Figure 1A. As shown in Figure 2A, the fin structures 108 and 110 are disposed at a lateral pitch 120 which allows the gate 118 to wrap around the top portions and the side portions of the fin structures 108 and 110. The gate 118 does not wrap under the fin structures 108 and 110. This configuration provides an effective channel width of the FinFET 100, i.e., the area of the channel structure 112 that can be controlled by an electrostatic field generated when a voltage is applied to the gate 118, that is proportional to a perimeter 122 of the fin structure 108 exposed to the gate 118. This perimeter 122 is based on a width 124 of the fin structure 108 and a height 126 of the fin structure 108. Having the gate 118 wrap around the top portions and the side portions of the fin structures 108 and 110 allows for a larger effective channel width in comparison to a planar transistor of a similar footprint.
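The perimeter relationship described above can be sketched numerically. This is an illustrative sketch only: it assumes the exposed perimeter is the two fin sidewalls plus the top surface (since the gate does not wrap under the fin), and the dimensions are hypothetical, not values from the disclosure.

```python
# Effective channel width of a FinFET, modeled as the fin perimeter exposed
# to the wrap-around gate: two sidewalls plus the top surface. The gate does
# not wrap under the fin, so the bottom is excluded.

def finfet_effective_width(fin_width_nm: float, fin_height_nm: float) -> float:
    """Exposed perimeter of one fin, in nm (two sidewalls + top)."""
    return 2.0 * fin_height_nm + fin_width_nm

def planar_effective_width(gate_width_nm: float) -> float:
    """A planar transistor exposes only its top surface to the gate."""
    return gate_width_nm

# Hypothetical dimensions: a 7 nm wide, 40 nm tall fin versus a planar
# channel of the same 7 nm footprint width.
assert finfet_effective_width(7.0, 40.0) == 87.0   # 2*40 + 7
assert planar_effective_width(7.0) == 7.0
```

For the same footprint width, the fin in this example exposes more than ten times the channel perimeter of the planar device, which is the "larger effective channel width" advantage the text describes.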
Having a larger effective channel width provides better gate control over the channel structure 112, which makes the FinFET 100 less susceptible to performance degradation due to SCEs in comparison to a planar transistor of a similar footprint. Accordingly, having better gate control over the channel structure 112 allows for a further scaling down of the FinFET 100 relative to a planar transistor of a similar footprint.[0009] However, additional scaling down of the FinFET 100 is subject to fabrication and performance limitations. For example, a reduction of the channel length of the FinFET 100 can increase sub-threshold leakage, negatively affect gate control, and negatively affect frequency performance of a circuit employing the FinFET 100. In this regard, another example of an alternative transistor design is a conventional CMOS nanowire device. In a conventional CMOS nanowire device, a nanowire channel structure is formed by a plurality of nanowires, such as Silicon (Si) nanowires for example. A "wrap-around" gate wraps completely around each nanowire of the plurality of nanowires. Figure 1B illustrates a conventional CMOS nanowire device 132 ("nanowire device 132") as compared to the FinFET 100 in Figure 1A. The nanowire device 132 includes a substrate 134, a source 136, and a drain 138. The nanowire device 132 further includes a nanowire channel structure 140. The nanowire channel structure 140 comprises nanowires 142(1-N) disposed above the substrate 134 and interposed between the source 136 and the drain 138. The nanowires 142(1-N) are configured in two (2) channel structure columns labeled 144 and 146. The nanowires 142(1-N) are made of a semiconductor material, such as Silicon (Si) for example. The nanowire device 132 further includes spacer layers 148 and 150 disposed to isolate the source 136 and the drain 138, respectively, from a gate 152 disposed over the nanowires 142(1-N) in a later fabrication stage.
Accordingly, the gate 152 wraps entirely around each of the nanowires 142(1-N) of the nanowire channel structure 140.[0010] Figure 2B illustrates a cross section of the nanowire device 132 across a B-B line illustrated in Figure 1B. As shown in Figure 2B, the nanowire channel structure 140 comprises the nanowires 142(1-N), with N being 6 in this example. The channel structure columns 144 and 146 are disposed at a pitch 154, which allows the gate 152 to entirely wrap around each of the nanowires 142(1-N) of the nanowire channel structure 140. This configuration provides an effective channel width that is proportional to a perimeter 156 of the nanowires 142(1-3) of the channel structure column 144 exposed to the gate 152. In this example, the nanowires 142(1-3) are of a similar width 158 and of a similar height 160 (labeled only for the nanowire 142(3)). This configuration may allow for a larger effective channel width in comparison to a FinFET transistor of a similar footprint. For example, a larger effective channel width can be provided by the nanowire device 132 by increasing the number of nanowires 142(1-N). Accordingly, a large number of nanowires 142(1-N) may provide better gate control and increased drive strength in the nanowire device 132 than in the FinFET 100.[0011] However, fabrication and performance limitations may limit the number of nanowires 142(1-N) that can be disposed in the nanowire device 132, and therefore, limit the effective channel width therein. In particular, as shown in Figure 2B, vertically adjacent nanowires, such as nanowires 142(1) and 142(2), are separated by a distance 162, while horizontally adjacent nanowires, such as nanowires 142(1) and 142(4), are separated by a distance 164. Thus, minimizing the distances 162 and 164 may allow for the formation of additional nanowires 142(1-N) in the nanowire device 132.
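The corresponding sketch for the conventional nanowire device follows from the gate fully wrapping each wire, so the whole perimeter of every wire contributes to the effective channel width. The rectangular cross-section and the dimensions below are illustrative assumptions, not values from the disclosure.

```python
def nanowire_effective_width(n_wires: int, wire_width_nm: float,
                             wire_height_nm: float) -> float:
    """Total perimeter exposed to a fully wrapping gate, in nm,
    assuming a rectangular wire cross-section."""
    return n_wires * 2.0 * (wire_width_nm + wire_height_nm)

# Hypothetical 7 nm x 7 nm wires, six wires as in the N = 6 example above.
assert nanowire_effective_width(6, 7.0, 7.0) == 168.0
# Effective width scales linearly with the wire count, which is why adding
# nanowires can improve gate control and drive strength.
assert nanowire_effective_width(12, 7.0, 7.0) == 336.0
```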
Furthermore, minimizing the distances 162 and 164 may reduce the area between the gate 152 and the source 136, which may reduce parallel plate parasitic capacitance therein. In particular, the gate 152 and the source 136 are separated by the spacer layer 148, thus creating a parasitic parallel plate capacitance between the gate 152 and the source 136. Reducing the area between the gate 152 and the source 136 reduces the parasitic parallel plate capacitance between the gate 152 and the source 136, thus reducing a delay of the nanowire device 132. Reducing this delay increases the frequency performance of a circuit (not shown) that employs the nanowire device 132.[0012] However, minimizing the distances 162 and 164 may not be possible or may provide drawbacks. In particular, the distances 162 and 164 are provided to allow the gate material for the gate 152 to be disposed completely around and between the nanowires 142(1-N), for example. Accordingly, minimizing the distances 162 and 164 is limited by at least the process of disposing the gate material for the gate 152. Furthermore, adjacent nanowires 142(1-N) of the nanowires 142(1-N) are separated by, for example, a gate material, which generates channel parasitic capacitance. This channel parasitic capacitance increases as adjacent nanowires 142(1-N) of the nanowires 142(1-N) are set closer together, thus increasing power consumption and degrading overall performance.[0013] Another way to add nanowires 142(1-N) to the nanowire device 132 is by increasing a height 166 of the nanowire channel structure 140 while maintaining required minimum distances for the distances 162 and 164. This may allow more nanowires 142(1-N) in the nanowire channel structure 140. However, performance and fabrication limitations may limit the height 166 of the nanowire channel structure 140.
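The parallel-plate reasoning above follows the standard relation C = eps_r * eps0 * A / d, with A the gate-to-source overlap area across the spacer and d the spacer thickness. A minimal sketch with hypothetical permittivity and dimensions (not values from the disclosure):

```python
EPS0 = 8.854e-12  # vacuum permittivity, in F/m

def parallel_plate_capacitance(area_m2: float, gap_m: float,
                               eps_r: float) -> float:
    """C = eps_r * eps0 * A / d for the gate/spacer/source stack."""
    return eps_r * EPS0 * area_m2 / gap_m

# Hypothetical geometry: doubling the channel-structure height doubles the
# gate-to-source overlap area, and therefore the parasitic capacitance.
c_short = parallel_plate_capacitance(50e-9 * 40e-9, 5e-9, 3.9)
c_tall = parallel_plate_capacitance(50e-9 * 80e-9, 5e-9, 3.9)
assert abs(c_tall / c_short - 2.0) < 1e-9
```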
For example, increasing the height 166 of the nanowire channel structure 140 increases parasitic parallel plate capacitance between the gate 152 and the source 136 which, as explained earlier, may increase delay of the nanowire device 132, shift the threshold voltage of the nanowire device 132, and decrease frequency performance of a circuit (not shown) employing the nanowire device 132. Furthermore, increasing the height 166 of the nanowire channel structure 140 results in a high height-to-width aspect ratio for the nanowire channel structure 140. Having a high height-to-width aspect ratio in the nanowire channel structure 140 may be undesirable for forming the nanowire channel structure 140, in particular, and the nanowire device 132, generally, and may limit scaling down the nanowire device 132. Furthermore, having additional nanowires 142(1-N) increases channel parasitic capacitance by providing additional nanowire-gate material-nanowire combinations. Therefore, performance and fabrication limitations regarding, for example, the distances 162 and 164, and the height 166, may limit further scaling down of the nanowire device 132.SUMMARY OF THE DISCLOSURE[0014] Aspects disclosed in the detailed description include nanowire channel structures of continuously stacked nanowires for complementary metal oxide semiconductor (CMOS) devices. A nanowire channel structure in a conventional nanowire device includes a plurality of nanowires, each nanowire completely surrounded by a gate material of a corresponding gate. This provides strong gate control and drive strength for a given footprint. However, further scaling down of the conventional nanowire device is limited by a height of a nanowire channel structure therein. In particular, scaling down of the nanowire device includes decreasing channel length, which results in increased leakage current and decreased gate control. 
To mitigate these effects of a decreased channel length, gate control over the corresponding nanowire channel structure may be improved by increasing the number of nanowires in the nanowire channel structure. However, in a conventional nanowire device, a minimum distance between nanowires must be provided to allow depositing of a gate material therein. Accordingly, increasing the number of nanowires results in an increase in the height of the nanowire channel structure. However, increasing the height of the nanowire channel structure may not be possible due to fabrication limitations associated with forming tall semiconductor structures and etching/forming nanowires therein. Furthermore, even when possible, increasing the height of the nanowire channel structure may not be desirable. For example, an increase in the nanowire channel structure height results in an increase in an area between the gate and the source/drain elements of the nanowire device, which in turn increases a parallel plate parasitic capacitance between the parallel gate and source/drain elements. This parallel plate parasitic capacitance may increase signal delay and negatively affect a frequency performance of a circuit employing the nanowire channel structure. Accordingly, an increase in the number of nanowires to increase gate control to mitigate adverse effects of scaling down the nanowire device may not be possible or desirable. [0015] In this regard, to provide a nanowire device with strong gate control but with a channel structure providing minimal fabrication and performance limitations, nanowire channel structures comprising continuously stacked nanowires for CMOS devices are provided. In particular, an exemplary nanowire CMOS device ("nanowire device") includes a nanowire channel structure that includes a plurality of continuously stacked nanowires. 
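The height argument above can be quantified: a conventional stack of N wires of height h needs a total height of N*h + (N-1)*s, where s is the minimum separation required to deposit gate material between wires, while a continuously stacked structure needs only about N*h. The formula and dimensions below are a hedged sketch of that reasoning, not figures from the disclosure.

```python
import math

def max_wires_conventional(height_budget_nm: float, wire_h_nm: float,
                           sep_nm: float) -> int:
    """Largest N with N*h + (N-1)*s <= H, i.e. N <= (H + s) / (h + s)."""
    return math.floor((height_budget_nm + sep_nm) / (wire_h_nm + sep_nm))

def max_wires_continuous(height_budget_nm: float, wire_h_nm: float) -> int:
    """Continuous stacking needs no inter-wire gate-fill separation."""
    return math.floor(height_budget_nm / wire_h_nm)

# Hypothetical: 60 nm height budget, 7 nm wires, 8 nm gate-fill separation.
assert max_wires_conventional(60, 7, 8) == 4
assert max_wires_continuous(60, 7) == 8  # twice as many wires, same height
```

Under these assumed dimensions, continuous stacking doubles the wire count for the same structure height, which is the stacking-density advantage the disclosure relies on.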
Each of the plurality of continuously stacked nanowires is shaped to have a greater width at a central portion than at top and bottom end portions therein. Having continuously stacked nanowire structures eliminates the need to have a separation distance between vertically adjacent nanowires, thus providing a higher number of nanowires than a conventional nanowire device for a particular nanowire structure height. The greater number of nanowires provides increased gate control compared to the conventional nanowire device, but on a shorter nanowire channel structure, thus maintaining a lower parallel plate parasitic capacitance. Furthermore, the shorter nanowire channel structure simplifies fabrication compared to the conventional nanowire device.[0016] Having the nanowires of the exemplary nanowire channel structure be continuously stacked reduces the number of adjacent nanowires separated by the gate material in the nanowire channel structure, thus substantially reducing channel parasitic capacitance therein. Further still, having continuously stacked nanowire structures allows a gate material of a gate therein to be disposed within trenches formed in separation areas formed by the narrower top and bottom end portions between the continuously stacked nanowires. Thus, the effective channel width, and therefore the gate control, provided by the exemplary nanowire device is comparable to that provided by a taller conventional nanowire device.[0017] In this regard in one aspect, a CMOS device is provided. The CMOS device comprises a substrate, a source disposed on the substrate, a drain disposed on the substrate, and a channel body interposed between the source and the drain. The channel body comprises a channel comprising a nanowire channel structure. The nanowire channel structure comprises a plurality of nanowires arranged in a continuously stacked arrangement. 
Each of the plurality of nanowires comprises a top end portion, a bottom end portion, and a central portion disposed between the top end portion and the bottom end portion, the central portion comprising a greater width than the top end portion and the bottom end portion. The nanowire channel structure further comprises a plurality of separation areas, each disposed between central portions of vertically adjacent nanowires among the plurality of nanowires, and each formed by the bottom end portion of a higher nanowire of vertically adjacent nanowires and the top end portion of a lower nanowire of the vertically adjacent nanowires. The channel body further comprises a dielectric material layer disposed adjacent to the plurality of nanowires and extending into the plurality of separation areas disposed between the central portions of the vertically adjacent nanowires among the plurality of nanowires. The channel body further comprises a gate material disposed adjacent to the dielectric material layer and extending into the plurality of separation areas disposed between the central portions of the vertically adjacent nanowires among the plurality of nanowires.[0018] In another aspect, a method of fabricating a CMOS device is provided. The method comprises providing a semiconductor die for a CMOS device comprising a source formed on a substrate, a drain formed on the substrate, and a fin structure comprising a width and a length, the fin structure interposed lengthwise between the source and the drain, and comprising a first lateral side and a second lateral side. The method further comprises disposing a plurality of block co-polymer layers on the substrate adjacent to the fin structure, each of the plurality of block co-polymer layers comprising one of a first material of a first etching sensitivity and a second material of a second etching sensitivity different from the first etching sensitivity.
The plurality of block co-polymer layers are disposed in an alternating configuration between a block co-polymer layer of the first material and a block co-polymer layer of the second material. The method further comprises removing each block co-polymer layer of the first material to form a plurality of exposed portions of the fin structure and a plurality of masked portions of the fin structure, each masked portion of the plurality of masked portions being masked by a block co-polymer layer of the second material. The method further comprises etching a plurality of trenches in the fin structure in each of the plurality of exposed portions of the fin structure, along the length of the fin structure on one of the first lateral side and the second lateral side of the fin structure to form a plurality of continuously stacked nanowires separated by a plurality of separation areas. Each of the plurality of separation areas comprises a first trench of the plurality of trenches on the first lateral side and a second trench of the plurality of trenches on the second lateral side. The method further comprises removing each block co-polymer layer of the second material to expose a central portion of the plurality of continuously stacked nanowires.

[0019] In another aspect, a CMOS device is provided. The CMOS device comprises a means for providing a substrate, a means for forming a source disposed on the substrate, a means for forming a drain disposed on the substrate, and a means for forming a channel body interposed between the means for forming the source and the means for forming the drain. The channel body comprises a means for forming a channel comprising a nanowire channel structure comprising a plurality of nanowires arranged in a continuously stacked arrangement.
Each of the plurality of nanowires comprises a top end portion, a bottom end portion, and a central portion disposed between the top end portion and the bottom end portion, the central portion comprising a greater width than the top end portion and the bottom end portion. The nanowire channel structure further comprises a plurality of separation areas, each disposed between central portions of vertically adjacent nanowires among the plurality of nanowires, and each formed by the bottom end portion of a higher nanowire of the vertically adjacent nanowires and the top end portion of a lower nanowire of the vertically adjacent nanowires. The channel body further comprises a means for forming a dielectric material layer disposed adjacent to the plurality of nanowires and extending into portions of the plurality of separation areas disposed between the central portions of the vertically adjacent nanowires among the plurality of nanowires. The channel body further comprises a means for forming a gate material disposed adjacent to the means for forming the dielectric material layer and extending into the portions of the plurality of separation areas disposed between the central portions of the vertically adjacent nanowires among the plurality of nanowires.

BRIEF DESCRIPTION OF THE FIGURES

[0020] Figure 1A illustrates a conventional complementary metal oxide semiconductor (CMOS) fin field-effect transistor (FET) (FinFET);

[0021] Figure 1B illustrates a conventional CMOS nanowire device;

[0022] Figure 2A illustrates a cross section of the conventional CMOS FinFET illustrated in Figure 1A across an A-A line;

[0023] Figure 2B illustrates a cross section of the conventional CMOS nanowire device illustrated in Figure 1B across a B-B line;

[0024] Figure 3A illustrates an exemplary nanowire device that includes an exemplary nanowire channel structure of continuously stacked nanowires configured to expose a larger area of a channel structure to a wrap-around gate in comparison to a
conventional FinFET of similar dimensions, and provide a shorter channel structure in comparison to a conventional nanowire device;

[0025] Figure 3B illustrates a cross section of the exemplary nanowire device illustrated in Figure 3A across a C-C line;

[0026] Figure 3C illustrates an expanded section of a separation area of the exemplary nanowire channel structure illustrated in Figure 3B;

[0027] Figure 4A illustrates a cross section of the exemplary nanowire channel structure of the exemplary nanowire device illustrated in Figure 3A across a C-C line that includes exemplary dimensions to illustrate an effective channel length;

[0028] Figures 4B and 4C illustrate, respectively, a cross section of a channel structure for the conventional CMOS FinFET illustrated in Figure 1A and a cross section of the channel structure for the conventional nanowire device employing non-continuously stacked nanowire structures illustrated in Figure 1B to illustrate their effective channel length as compared to the exemplary nanowire channel structure in Figure 4A;

[0029] Figure 5 is a table showing effective characteristics that affect the effective channel length of the exemplary nanowire device illustrated in Figure 3A, the conventional FinFET device illustrated in Figure 1A, and the conventional CMOS nanowire device illustrated in Figure 1B, based on the dimensions provided in Figures 4A-4C for comparison purposes;

[0030] Figures 6A and 6B provide a flowchart illustrating an exemplary process for fabricating the exemplary nanowire device, including the exemplary nanowire channel structure, illustrated in Figures 3A and 3B;

[0031] Figures 7A and 7B are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of forming semiconductor fins above a shallow trench isolation substrate for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0032] Figures 8A and 8B are profile and cross-sectional diagrams, respectively, of an exemplary
fabrication process of forming an isolation layer over a bottom portion of the fins, above the substrate, for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0033] Figures 9A and 9B are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of disposing an oxide layer above the fins, and a poly mask/dummy gate above the substrate and above the fins for later formation of spacer layers, a drain, and a source for manufacturing the exemplary nanowire device illustrated in Figures 3A and 3B;

[0034] Figures 10A and 10B are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of disposing the spacer layers on the substrate adjacent to the poly mask/dummy gate, and disposing the source and the drain on the substrate adjacent to the spacer layers, respectively, for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0035] Figures 11A and 11B are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of removing the poly mask/dummy gate and exposing the fins for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0036] Figures 12A and 12B are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of disposing a plurality of block co-polymer layers above the fins in an alternating configuration between a block co-polymer layer of a first material and a block co-polymer layer of a second material for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0037] Figures 13A and 13B are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of disposing a capping layer above the plurality of block co-polymer layers for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0038] Figures 14A and 14B are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of
removing a portion of the capping layer and a portion of each of the plurality of block co-polymer layers from an area between the fins down to the substrate for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0039] Figures 15A and 15B are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of removing each block co-polymer layer of the first material to form a plurality of exposed portions of the fins and a plurality of masked portions of the fins for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0040] Figures 16A-16C are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of etching a trench in each of the plurality of exposed portions of the fins, to form separation areas between vertically adjacent nanowires for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0041] Figures 17A and 17B are profile and cross-sectional diagrams, respectively, of an exemplary fabrication process of removing each block co-polymer layer of the second material to expose a central portion of the plurality of nanowires and removing the capping layer for fabricating the exemplary nanowire device illustrated in Figures 3A and 3B;

[0042] Figure 18 illustrates an example environment that includes a computing device and wireless network in which the exemplary nanowire device illustrated in Figures 3A and 3B may be employed; and

[0043] Figure 19 is a block diagram of an exemplary processor-based system that can include the exemplary nanowire device illustrated in Figures 3A and 3B.

DETAILED DESCRIPTION

[0044] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration."
Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

[0045] Aspects disclosed in the detailed description include nanowire channel structures of continuously stacked nanowires for complementary metal oxide semiconductor (CMOS) devices. A nanowire channel structure in a conventional nanowire device includes a plurality of nanowires, each nanowire completely surrounded by a gate material of a corresponding gate. This provides strong gate control and drive strength for a given footprint. However, further scaling down of the conventional nanowire device is limited by a height of a nanowire channel structure therein. In particular, scaling down of the nanowire device includes decreasing channel length, which results in increased leakage current and decreased gate control. To mitigate these effects of a decreased channel length, gate control over the corresponding nanowire channel structure may be improved by increasing the number of nanowires in the nanowire channel structure. However, in a conventional nanowire device, a minimum distance between nanowires must be provided to allow depositing of a gate material therein. Accordingly, increasing the number of nanowires results in an increase in the height of the nanowire channel structure. However, increasing the height of the nanowire channel structure may not be possible due to fabrication limitations associated with forming tall semiconductor structures and etching/forming nanowires therein. Furthermore, even when possible, increasing the height of the nanowire channel structure may not be desirable. For example, an increase in the nanowire channel structure height results in an increase in an area between the gate and the source/drain elements of the nanowire device, which in turn increases a parallel plate parasitic capacitance between the parallel gate and source/drain elements.
This parallel plate parasitic capacitance may increase signal delay and negatively affect a frequency performance of a circuit employing the nanowire channel structure. Accordingly, an increase in the number of nanowires to increase gate control to mitigate adverse effects of scaling down the nanowire device may not be possible or desirable.

[0046] In this regard, to provide a nanowire device with strong gate control but with a channel structure providing minimal fabrication and performance limitations, nanowire channel structures comprising continuously stacked nanowires for CMOS devices are provided. In particular, an exemplary nanowire CMOS device ("nanowire device") includes a nanowire channel structure that includes a plurality of continuously stacked nanowires. Each of the plurality of continuously stacked nanowires is shaped to have a greater width at a central portion than at top and bottom end portions therein. Having continuously stacked nanowire structures eliminates the need to have a separation distance between vertically adjacent nanowires, thus providing a higher number of nanowires than a conventional nanowire device for a particular nanowire structure height. The greater number of nanowires provides increased gate control compared to the conventional nanowire device, but on a shorter nanowire channel structure, thus maintaining a lower parallel plate parasitic capacitance. Furthermore, the shorter nanowire channel structure simplifies fabrication compared to the conventional nanowire device.

[0047] Having the nanowires of the exemplary nanowire channel structure be continuously stacked reduces the number of adjacent nanowires separated by the gate material in the nanowire channel structure, thus substantially reducing channel parasitic capacitance therein.
Further still, having continuously stacked nanowire structures allows a gate material of a gate therein to be disposed within trenches formed in separation areas formed by the narrower top and bottom end portions between the continuously stacked nanowires. Thus, the effective channel width, and therefore the gate control, provided by the exemplary nanowire device is comparable to that provided by a taller conventional nanowire device.

[0048] In this regard, Figure 3A illustrates an exemplary nanowire device that includes an exemplary nanowire channel structure of continuously stacked nanowires configured to expose a larger area of a channel structure to a wrap-around gate in comparison to a conventional FinFET of similar dimensions, and provide a shorter channel structure in comparison to a conventional nanowire device. Figure 3B illustrates a cross section of the exemplary nanowire device 300 across a C-C line. The exemplary nanowire device 300 includes a substrate 302 and an exemplary nanowire channel 304 including exemplary nanowire channel structures 306 and 308 disposed on the substrate 302. The exemplary nanowire channel 304 only includes two (2) nanowire channel structures 306, 308. However, it is noted that the exemplary nanowire channel 304 may include more or fewer nanowire channel structures 306, 308 based on required drive current, size, or signal speed, for example. The exemplary nanowire channel 304 includes a plurality of nanowires 310(1-M), with M being 6 in this example, in a continuously stacked arrangement among the nanowire channel structures 306, 308. In this example, each of the nanowires 310(1-M) has a corresponding cross section 312(1-M) that has a substantially hexagonal cross section. The exemplary nanowire device 300 further includes a source 314 disposed on the substrate 302 and a drain 316 disposed on the substrate 302. As illustrated in Figure 3A, the nanowire channel 304 is interposed between the source 314 and the drain 316.
The exemplary nanowire device 300 further includes a gate 318 comprising a gate material 320 disposed on the substrate 302 around the plurality of nanowires 310(1-M). The exemplary nanowire device 300 further includes spacer layers 322 and 323 disposed to isolate the source 314 and the drain 316, respectively, from the gate 318.

[0049] Figure 3C illustrates an expanded section 324 of the nanowire channel structure 308 of the exemplary nanowire device 300 illustrated in Figure 3B to further describe elements of the nanowire channel structure 308. Figure 3C illustrates vertically adjacent nanowires 310(4) and 310(5), which are arranged in a continuously stacked arrangement, having cross sections 312(4) and 312(5), respectively. The nanowires 310(4) and 310(5) are comprised of a material 328, such as a semiconductor material (e.g., Silicon (Si), Silicon Germanium (SiGe), or Germanium (Ge)). To provide a separation area between the vertically adjacent nanowires 310(4) and 310(5) in the nanowire channel structure 308, in this example, the vertically adjacent nanowires 310(4) and 310(5) include top end portions 326(1) and 326(2), respectively. The top end portions 326(1) and 326(2) have substantially triangular cross sections 330(1) and 330(2), respectively. The triangular cross sections 330(1) and 330(2) are formed, in part, by body-centered cubic (BCC) <111> facet sidewalls 313(1)(1) and 313(1)(2), and 313(2)(1) and 313(2)(2), respectively. Furthermore, the top end portions 326(1) and 326(2) have top end points 332(1) and 332(2), respectively, formed by vertexes 333(1) and 333(2) of the substantially triangular cross sections 330(1) and 330(2), respectively, at substantially a horizontal center 334 of the nanowire channel structure 308.

[0050] Furthermore, each of the vertically adjacent nanowires 310(4) and 310(5) comprises bottom end portions 336(1) and 336(2) having substantially triangular cross sections 338(1) and 338(2), respectively.
The triangular cross sections 338(1) and 338(2) are formed, in part, by the BCC <111> facet sidewalls 313(1)(3) and 313(1)(4), and 313(2)(3) and 313(2)(4), respectively. Furthermore, the bottom end portions 336(1) and 336(2) have bottom end points 340(1) and 340(2) formed by vertexes 339(1) and 339(2) of the substantially triangular cross sections 338(1) and 338(2), respectively, at substantially the horizontal center 334 of the nanowire channel structure 308.

[0051] Furthermore, each of the vertically adjacent nanowires 310(4) and 310(5) comprises central portions 341(1) and 341(2), respectively. The central portion 341(1) is disposed between the top end portion 326(1) and the bottom end portion 336(1). The central portion 341(1) has a substantially rectangular cross section 344(1) having BCC <110> facet sidewalls 345(1) and 345(2), and a width 346 between the BCC <110> facet sidewalls 345(1) and 345(2) that is at least as large as the largest of a width 348 of the top end portion 326(1) and a width 350 of the bottom end portion 336(1). The central portion 341(2) is disposed between the top end portion 326(2) and the bottom end portion 336(2). The central portion 341(2) has a substantially rectangular cross section 344(2) having BCC <110> facet sidewalls 345(3) and 345(4), and a width 346 between the BCC <110> facet sidewalls 345(3) and 345(4) that is at least as large as the largest of a width 348 of the top end portion 326(2) and a width 350 of the bottom end portion 336(2). Thus, in this example, the cross sections 312(4) and 312(5) of the vertically adjacent nanowires 310(4) and 310(5) are substantially hexagonal cross sections formed by BCC <111> facet sidewalls and BCC <110> facet sidewalls.
For example, the cross section 312(4) of the nanowire 310(4) is formed by the BCC <111> facet sidewalls 313(1)(1)-313(1)(4) and the BCC <110> facet sidewalls 345(1) and 345(2).

[0052] Figure 3C further illustrates a separation area 352 between the vertically adjacent nanowires 310(4) and 310(5). The separation area 352 includes the bottom end portion 336(1) of the higher vertically adjacent nanowire 310(4) and the top end portion 326(2) of the lower vertically adjacent nanowire 310(5). The separation area 352 further includes a continuity area 354 in which the bottom end point 340(1) of the vertically adjacent nanowire 310(4) contacts the top end point 332(2) of the vertically adjacent nanowire 310(5), which is below the vertically adjacent nanowire 310(4). The separation area 352 further includes trenches 356(1) and 356(2) adjacent to each side of the continuity area 354 and between the vertically adjacent nanowires 310(4) and 310(5).

[0053] Accordingly, in the configuration of the exemplary nanowire device 300, and in particular of the nanowire channel structures 306 and 308, the gate material 320 of the gate 318 does not completely surround any of the vertically adjacent nanowires of the plurality of nanowires 310(1-M). However, when the gate material 320 is disposed over the nanowire channel structures 306, 308, the gate material 320 is disposed into corresponding trenches 356(1) and 356(2) of the separation area 352 between the vertically adjacent nanowires of the plurality of nanowires 310(1-M). Therefore, when the gate 318 generates an electrostatic field to activate the nanowire channel 304, substantially all of the perimeter of the vertically adjacent nanowires of the plurality of nanowires 310(1-M) is exposed to the electrostatic field.
This allows for improved gate control compared to a fin channel structure of similar height and width (not shown), and gate control similar to that of a much taller conventional nanowire channel structure (not shown).

[0054] Furthermore, having the vertically adjacent nanowires of the plurality of nanowires 310(1-M) arranged in a continuously stacked arrangement eliminates the vertical separation distance 162 employed in the nanowire channel structure 140 in Figure 2B. Specifically, because the gate material 320 of the gate 318 is not disposed completely around the vertically adjacent nanowires of the plurality of nanowires 310(1-M) in Figure 3C, the vertical separation distance 162 employed in the nanowire channel structure 140 in Figure 2B is not necessary. This allows for the nanowire channel structures 306, 308 to be shorter, which in turn allows for inclusion of a higher number of vertically adjacent nanowires of the plurality of nanowires 310(1-M) compared to the nanowire channel structure 140 illustrated in Figure 2B. In addition, by being shorter, the nanowire channel structures 306, 308 provide for a lower parallel plate parasitic capacitance compared to the nanowire channel structure 140 illustrated in Figure 2B. In particular, parallel plate parasitic capacitances are generated in a parallel plate area 358 between the gate 318 and the source 314, and in a parallel plate area 360 between the gate 318 and the drain 316 as shown in Figure 3A. These parallel plate parasitic capacitances are proportional to the size of these parallel plate areas 358, 360. Accordingly, because the nanowire channel structures 306, 308 provide for parallel plate areas 358, 360 that are smaller than those provided by the nanowire channel structure 140 illustrated in Figure 2B, the nanowire channel structures 306, 308 provide for smaller parallel plate parasitic capacitances as well.
Therefore, the nanowire channel structures 306, 308 provide a smaller delay than the conventional nanowire device 132 illustrated in Figure 1B.

[0055] Furthermore, as noted earlier, the shorter nanowire channel structures 306, 308 allow for a higher number of vertically adjacent nanowires of the plurality of nanowires 310(1-M) compared to the nanowire channel structure 140 illustrated in Figure 2B. The higher number of vertically adjacent nanowires of the plurality of nanowires 310(1-M) provides additional area between the vertically adjacent nanowires of the plurality of nanowires 310(1-M) and the gate material 320 of the gate 318, thus improving gate control and allowing for a reduction in the channel length compared to the nanowire device 132 illustrated in Figure 1B. Furthermore, having the vertically adjacent nanowires of the plurality of nanowires 310(1-M) continuously stacked provides for a significantly lower parasitic channel capacitance between the vertically adjacent nanowires in the nanowire channel 304 compared to the nanowire channel structure 140 illustrated in Figure 1B. In particular, a parasitic channel capacitance is created between conducting nanowires, such as the vertically adjacent nanowires 310(4) and 310(5), separated by an isolating material, such as any oxide layers (not shown) isolating the vertically adjacent nanowires 310(4) and 310(5) from the gate material 320. By providing the continuity area 354 between the vertically adjacent nanowires of the plurality of nanowires 310(1-M), this parasitic channel capacitance is removed or significantly reduced in the nanowire channel 304 compared to the nanowire channel structure 140 illustrated in Figure 1B.

[0056] Figures 4A-4C and Figure 5 are provided to illustrate and contrast features of the exemplary nanowire channel 304 in Figures 3A-3C, the channel structure 112 in Figures 1A and 2A, and the nanowire channel structure 140 in Figures 1B and 2B.
In particular, Figure 4A illustrates a cross section of the nanowire channel 304, including the nanowire channel structures 306, 308 of the exemplary nanowire device 300 across a C-C line illustrated in Figure 3A. Figures 4B and 4C illustrate, respectively, a cross section of the channel structure 112 of the FinFET 100 illustrated in Figure 1A across an A-A line, and a cross section of the nanowire channel structure 140 of the nanowire device 132 illustrated in Figure 1B across a B-B line. Figures 4A-4C are provided to illustrate exemplary dimensions of the corresponding channel structures 306, 308, 112, and 140 for comparing their effective channel length, channel width, sub-threshold slope (SS), and parasitic parallel plate capacitance. Figure 5, which will be discussed in conjunction with Figures 4A-4C, is a table 500 showing effective characteristics that affect the effective channel length of the nanowire device 300 illustrated in Figure 3A, the FinFET 100 illustrated in Figure 1A, and the nanowire device 132 illustrated in Figure 1B, based on the dimensions provided in Figures 4A-4C. It is noted that, although the dimensions provided for comparison purposes do not provide the actual channel width, SS, and parasitic parallel plate capacitance therein, these dimensions provided herein with respect to Figures 4A-4C and 5 can be used to compare the channel structures regarding these characteristics. For example, because the parasitic parallel plate capacitance of a channel structure is proportional to the parallel plate area therein, i.e., the area between a gate and source/drain contacts therein, comparing the parallel plate area of the illustrated channel structures allows for a comparison of corresponding parasitic parallel plate capacitances.
Similarly, because the effective channel width of a channel structure is proportional to its perimeter, i.e., the area of the channel structure exposed to the gate material therein, comparing perimeters of the illustrated channel structures allows for a comparison of corresponding effective channel widths.

[0057] Regarding Figure 4A, the nanowire channel 304 provided therein comprises six (6) nanowires 310(1)-310(6) configured in two (2) nanowire channel structures 306 and 308, as an example. Each of the nanowire channel structures 306, 308 comprises three (3) continuously stacked nanowires 310(1)-310(3) and 310(4)-310(6), respectively, in this example. Furthermore, the continuously stacked nanowires 310(1)-310(6) are illustrated as having corresponding cross sections 312(1)-312(6), which are illustrated as substantially hexagonal cross sections with BCC <111> and BCC <110> facet sidewalls, as an example. In this example, the nanowire channel 304 has a lateral pitch 400, i.e., the pitch for a nanowire channel structure 306, 308 therein, of 24 nanometers (nm) (Lateral Pitch in Figure 5). Furthermore, each of the nanowire channel structures 306, 308 has a width 402 of 7 nm (Width in Figure 5). Further still, the nanowire channel structures 306, 308 have a height 406 of 40 nm (Height in Figure 5). Further still, each of the nanowire channel structures 306, 308 has a perimeter 408 of 93 nm ((3 x perimeter of a nanowire), or (3 x ~31 nm)) (Weff in Figure 5). Based on these dimensions, a parallel plate area per column pitch of the nanowire channel 304, i.e., the area between the gate 318 and the source 314 and drain 316 illustrated in Figure 3A, is 764 squared nm ((total area = 24 nm x 40 nm) - 196 squared nm (column area)). In addition, based on a gate length of 15 nm, and the use of Silicon (Si) as gate material, the nanowire channel 304 provides an SS, i.e., a feature of a FET's current-voltage characteristic, of 62-66 mV/dec (lowest is best).
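The ~31 nm per-nanowire perimeter quoted above can be sanity-checked from the stated geometry: hexagonal cross sections of 7 nm width stacked three-high in a 40 nm column, with the <111> facet sidewalls inclined to the horizontal. The facet inclination of ~54.74 degrees (the standard angle between <111> and <110>/(100) planes in cubic crystals) is an assumption not stated in the text; a minimal sketch under that assumption:

```python
import math

# Stated dimensions from the Figure 4A example:
column_height_nm = 40.0        # height of one nanowire channel structure
nanowires_per_column = 3
width_nm = 7.0                 # width of the central, <110>-faceted portion

# Assumed facet inclination: <111> facets at ~54.74 degrees to the horizontal.
facet_angle = math.radians(54.74)

# Each hexagonal nanowire occupies one third of the 40 nm column height.
wire_height = column_height_nm / nanowires_per_column

# Triangular end portions: half-width 3.5 nm rising at the facet angle.
tri_height = (width_nm / 2) * math.tan(facet_angle)   # height of each triangle
slant_side = (width_nm / 2) / math.cos(facet_angle)   # length of each slanted facet

# The central rectangular portion keeps whatever height the two triangles leave.
central_height = wire_height - 2 * tri_height

# Hexagon perimeter: two vertical <110> sidewalls plus four slanted <111> facets.
perimeter = 2 * central_height + 4 * slant_side
weff = nanowires_per_column * perimeter               # per-column Weff

print(round(perimeter, 1), round(weff))  # ~31.1 nm per nanowire, ~93 nm per column
```

The result reproduces both the "~31 nm" per-nanowire perimeter and the 93 nm Weff stated for the nanowire channel 304, lending support to the assumed facet angle.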
[0058] Regarding Figure 4B, the channel structure 112 provided therein comprises two (2) fin structures 108 and 110. In this example, the channel structure 112 has a lateral pitch 120 of 24 nm (Lateral Pitch in Figure 5). Furthermore, each of the fin structures 108 and 110 has a width 124 of 7 nm (Width in Figure 5). Further still, the channel structure 112 has a height 126 of 42 nm (Height in Figure 5). Further still, each of the fin structures 108 and 110 has a perimeter 122 of 91 nm (2 x 42 nm + 7 nm) (Weff in Figure 5). Based on the dimensions, a parallel plate area per lateral pitch of the channel structure 112, i.e., the area between the gate 118 and the source 104 and drain 106 illustrated in Figure 1A, is 714 squared nm (24 nm x 42 nm - 294 squared nm (column area)). In addition, based on a gate length of 15 nm, and the use of Silicon (Si) as gate material, the channel structure 112 provides a sub-threshold slope of 79 mV/dec (lowest is best). Thus, as is illustrated in Figure 5, the nanowire channel 304 provides a Weff similar to that of the channel structure 112 on a shorter channel structure and with a significantly lower SS (i.e., better gate control). However, due to a higher parallel plate area, the nanowire channel 304 provides a higher parallel plate capacitance.

[0059] Regarding Figure 4C, the nanowire channel structure 140 comprises six (6) nanowires 142(1)-142(6) configured in two (2) channel structure columns labeled 144 and 146. Each of the channel structure columns 144 and 146 comprises three (3) nanowires 142(1)-142(3) and 142(4)-142(6), respectively, separated by a vertical separation distance 162. The vertical separation distance 162 can be, for example, 11 nm. The nanowire channel structure 140 has a lateral pitch 154 of 24 nm (Lateral Pitch in Figure 5). Furthermore, each of the channel structure columns 144 and 146 has a width 158 of 7 nm (Width in Figure 5).
Further still, the nanowire channel structure 140 has a height 166 of 54 nm (Height in Figure 5). Further still, each of the channel structure columns 144 and 146 has a perimeter 156 of 84 nm ((3 x perimeter of a nanowire), or (3 x ~28 nm)) (Weff in Figure 5). Based on the dimensions, a parallel plate area per column pitch of the nanowire channel structure 140, i.e., the area between the gate 152 and the source/drain elements 136, 138 illustrated in Figure 1B, is 1149 squared nm ((total area = 24 nm x 54 nm) - 147 squared nm (column area)). In addition, based on a gate length of 15 nm, and the use of Silicon (Si) as gate material, the nanowire channel structure 140 provides an SS, i.e., a feature of a FET's current-voltage characteristic, of 62 mV/dec (lowest is best). Thus, as is illustrated in Figure 5, the nanowire channel 304 provides a Weff that is higher than that of the nanowire channel structure 140 on a significantly shorter channel structure and significantly lower parasitic parallel plate capacitance. Accordingly, the nanowire channel 304 provides a structure that is shorter, and thus much easier to fabricate than the nanowire channel structure 140. Furthermore, although the nanowire channel 304 and the nanowire channel structure 140 have similar SS, the nanowire channel 304 has a much lower parallel plate area, and thus lower parasitic parallel plate capacitance. This allows for the nanowire channel 304 to operate at higher frequencies.

[0060] As shown in the table 500 in Figure 5, the nanowire channel 304 provides a higher (improved) effective channel width (Weff), lower (improved) sub-threshold slope (SS), and a shorter (improved) structure (Height) than the channel structure 112 of the FinFET 100 at a similar footprint (Width, Lateral Pitch, and Lgate). However, because of the separation areas, the nanowire channel 304 provides a slight increase in the parallel plate area, which may increase signal delay.
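The Weff and parallel-plate-area figures compared above follow from simple arithmetic on the stated dimensions. A minimal sketch reproducing the document's numbers for all three channel structures (the function name and the 3 x 7 x 7 nm column-area decomposition for the conventional nanowire column are illustrative assumptions):

```python
# Reproduces the Weff and parallel-plate-area arithmetic of Figures 4A-4C / Figure 5.
# All lengths in nm; areas in nm^2.

def parallel_plate_area(lateral_pitch, gate_height, column_area):
    """Area between gate and source/drain per column pitch:
    the pitch-by-height rectangle minus the channel column's own area."""
    return lateral_pitch * gate_height - column_area

# Exemplary continuously stacked nanowire channel 304 (Figure 4A).
weff_304 = 3 * 31                              # three ~31 nm hexagon perimeters
area_304 = parallel_plate_area(24, 40, 196)    # stated column area: 196 nm^2

# Conventional FinFET channel structure 112 (Figure 4B).
weff_112 = 2 * 42 + 7                          # two sidewalls plus the fin top
area_112 = parallel_plate_area(24, 42, 294)    # fin area: 7 x 42 nm^2

# Conventional nanowire channel structure 140 (Figure 4C).
weff_140 = 3 * 28                              # three ~28 nm nanowire perimeters
area_140 = parallel_plate_area(24, 54, 147)    # e.g. three 7 x 7 nm nanowires

print(weff_304, area_304)  # 93, 764
print(weff_112, area_112)  # 91, 714
print(weff_140, area_140)  # 84, 1149

# Parallel-plate parasitic capacitance scales with this area (C proportional to A),
# so relative to the conventional nanowire structure the exemplary channel
# trims the parasitic parallel plate capacitance by roughly a third:
print(round(1 - area_304 / area_140, 2))  # ~0.34
```

Note that the exemplary channel beats the conventional nanowire structure on both Weff (93 vs. 84 nm) and parallel plate area (764 vs. 1149 nm^2), while giving up only a small area increase relative to the FinFET (764 vs. 714 nm^2), consistent with the trade-off described in the text.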
Furthermore, the table 500 shows that the nanowire channel 304 provides a higher Weff, a significantly lower Height, and a significantly lower parallel plate area than the nanowire channel structure 140 of the nanowire device 132, while having the same footprint and without significantly increasing SS.[0061] Figures 6A and 6B provide a flowchart illustrating an exemplary process 600 for fabricating the exemplary nanowire device 300, including the exemplary nanowire channel 304, illustrated in Figures 3A and 3B. The steps in the process 600 are illustrated respectively in Figures 7A-17B. Figures 7A-17B will be referenced as the exemplary steps in the process 600 in Figures 6A and 6B are described below.[0062] A first exemplary step to fabricate the nanowire device 300 includes providing a semiconductor die comprising the source 314 formed on the substrate 302, the drain 316 formed on the substrate 302, and a fin structure 714. The fin structure 714 is interposed lengthwise between the source 314 and the drain 316. The fin structure 714 comprises a width 716, a length 718, a first lateral side 720, and a second lateral side 722 (block 602 in Figure 6A). Figures 7A and 7B illustrate a first stage 700 in the fabrication of the nanowire device 300 according to the first step in profile and cross section views, respectively. The first stage 700 illustrates channel material portions 702 and 703, formed using a self-aligned quadruple patterning process, for example. The channel material portions 702 and 703 are of a height 704 of 100 nm, for example. Accordingly, the channel material portions 702 and 703 are formed with increased tapering 706 near bottoms 708 and 709, respectively, due to fabrication limitations that prevent etching a minimally tapered semiconductor structure.
The channel material portions 702 and 703 comprise, for example, Silicon (Si), Silicon Germanium (SiGe), or Germanium (Ge).[0063] The first stage 700 further illustrates a shallow trench isolation substrate 302 disposed over the channel material portions 702 and 703 to provide isolation between the channel material portions 702 and 703 and between the nanowire device 300 and adjacent devices (not shown). The first stage 700 further illustrates that the substrate 302 is recessed down to expose the fin structures 714 and 715 from the channel material portions 702 and 703, respectively. The fin structures 714 and 715 are, for example, forty (40) nm in height and are minimally tapered. Accordingly, the first stage 700 illustrates, in particular, the substrate 302 and the fin structures 714 and 715 exposed above the substrate 302.[0064] In one aspect, a next step to fabricate the nanowire device 300 may include forming an isolation layer 802 over a portion of the fin structure 714 above the substrate 302 to isolate a material of the fin structure 714 disposed within the substrate 302 from an electrostatic field applied above the substrate 302 to a plurality of continuously stacked nanowires 310(1-M) formed in the fin structure 714 (block 604 in Figure 6A). Figures 8A and 8B illustrate a second stage 800 in the fabrication of the nanowire device 300 according to this step in profile and cross section views, respectively. This step can be performed, for example, by implanting oxygen at a lower portion 804 of the fin structure 714 above the substrate 302 to oxidize the lower portion 804 of the fin structure 714.[0065] In particular, Figures 8A and 8B show the isolation layer 802 formed over the lower portions 804, 805 of fin structures 714 and 715, respectively, above the substrate 302. 
As will be described in further detail below, the fin structures 714 and 715 will be processed in later steps to form the continuously stacked nanowires 310(1-M) illustrated in Figures 3A and 3B. Furthermore, the gate material 320 will be disposed over the continuously stacked nanowires 310(1-M) and the substrate 302 in a later step, as illustrated in Figure 3A, to provide the gate 318. Having the gate material 320 disposed over the continuously stacked nanowires 310(1-M) and the substrate 302 in this manner can result in an undesired parasitic channel in bottom sections 710, 711 of the channel material portions 702, 703. In this regard, the isolation layer 802 can be provided to isolate the bottom sections 710, 711 within the substrate 302 from an electrostatic field (not shown) provided by the gate 318. The isolation provided by the isolation layer 802 thus minimizes an undesired parasitic channel in the bottom sections 710, 711 within the substrate 302. It is noted that the isolation layer 802 can be desirable in low power applications to minimize power loss to a parasitic channel within the substrate 302. In high performance applications, however, such a parasitic channel may increase drive current, thus improving performance. For purposes of the next steps for fabricating the nanowire device 300, however, the isolation layer 802 will not be illustrated so as not to obscure other exemplary aspects. Nevertheless, it is noted that the next steps may be performed when the isolation layer 802 is formed in the lower portions 804, 805 of the fin structures 714 and 715, respectively.[0066] Figures 9A and 9B illustrate a third stage 900 in the fabrication of the nanowire device 300 according to the first step of the process 600 in profile and cross section views, respectively.
The third stage 900 of the first step includes disposing a dielectric material layer 902 above the fin structures 714 and 715, and a poly mask/dummy gate 904 above the substrate 302 and above the fin structures 714 and 715. The dielectric material layer 902 is disposed because the material of the poly mask/dummy gate 904 may be comparable to the material 328 of the fin structures 714 and 715, and thus, isolation is needed for later etching of the poly mask/dummy gate 904 without removing the material 328 from the fin structures 714 and 715. The poly mask/dummy gate 904 is disposed above the substrate 302 and above the fin structures 714 and 715 for later formation of the spacer layers 322 and 323, the source 314, and the drain 316.[0067] Figures 10A and 10B illustrate a fourth stage 1000 in the fabrication of the nanowire device 300 according to the first step of the process 600 in profile and cross section views, respectively. The fourth stage 1000 illustrates the spacer layers 322 and 323 disposed on the substrate 302 adjacent to the poly mask/dummy gate 904, and the source 314 and the drain 316 disposed on the substrate 302 adjacent to the spacer layers 322 and 323, respectively. The spacer layers 322 and 323 include a dielectric material. The source 314 and the drain 316 can be disposed by growing conductive material over the fin structures 714 and 715 using vapor-phase epitaxy, for example. The source 314 and the drain 316 can also be formed by disposing a conductive material over the fin structures 714 and 715, for example.[0068] Figures 11A and 11B illustrate a fifth stage 1100 in the fabrication of the nanowire device 300 according to the first step of the process 600 in profile and cross section views, respectively. 
After forming the spacer layers 322 and 323, the source 314, and the drain 316, the fifth stage 1100 of the first step includes removing the poly mask/dummy gate 904 and exposing the fin structures 714 and 715 in a gate area 1102 between the spacer layers 322 and 323. Accordingly, the fin structures 714 and 715 have been formed on the substrate 302 interposed lengthwise between the source 314 and the drain 316.[0069] With continuing reference to Figure 6A, a second step to fabricate the nanowire device 300 is disposing a plurality of block co-polymer layers 1202 on the substrate 302 adjacent to the fin structure 714, each block co-polymer layer 1202 comprising one of a first material 1204 of a first etching sensitivity and a second material 1206 of a second etching sensitivity that is different from the first etching sensitivity. The plurality of block co-polymer layers 1202 are disposed in an alternating configuration between a block co-polymer layer 1202 of the first material 1204 and a block co-polymer layer 1202 of the second material 1206 (block 606 in Figure 6A).[0070] Figures 12A and 12B illustrate a sixth stage 1200 in the fabrication of the nanowire device 300 according to the second step of the process 600. In particular, Figures 12A and 12B illustrate a plurality of block co-polymer layers 1202 disposed above the fin structures 714 and 715 in an alternating configuration between a block copolymer layer 1202 of the first material 1204 and a block co-polymer layer 1202 of the second material 1206. 
In one aspect, the first material 1204 and the second material 1206 are self-organizing, such that the plurality of block co-polymer layers 1202 provides a deterministic way to etch the material 328, Silicon (Si) for example, out of the fin structures 714 and 715 to form the continuously stacked nanowires 310(1-M), illustrated in Figure 3B, in a later step.[0071] A third step in the fabrication of the nanowire device 300 is disposing a capping layer 1302 above the plurality of block co-polymer layers 1202. Figures 13A and 13B illustrate a seventh stage 1300 in the fabrication of the nanowire device 300 according to this third step in profile and cross section views, respectively. In particular, Figures 13A and 13B illustrate the capping layer 1302 above the plurality of block co-polymer layers 1202. This step of disposing the capping layer 1302 is performed to allow removal of the plurality of block co-polymer layers 1202 from the area between the fin structures 714 and 715 in a later step. This step of disposing the capping layer 1302 is also performed to allow etching the material 328 out of the fin structures 714 and 715 from the area between the fin structures 714 and 715 and forming the separation areas 352 of the continuously stacked nanowires 310(1-M), illustrated in Figure 3B, in a later step.[0072] A fourth step to fabricate the nanowire device 300 is removing a portion of the capping layer 1302 and a portion of each of the plurality of block co-polymer layers 1202 between the fin structures 714 and 715 down to the substrate 302. Figures 14A and 14B illustrate an eighth stage 1400 in the fabrication of the nanowire device 300 according to this fourth step in profile and cross section views, respectively. In particular, Figures 14A and 14B illustrate an opening 1402 in the capping layer 1302 down to the substrate 302 corresponding to a removed portion of the capping layer 1302.
This fourth step is performed by etching, for example, and to allow etching of the material 328 out of the fin structures 714 and 715 from the area between the fin structures 714 and 715, and to form the separation areas 352 of the continuously stacked nanowires 310(1-M), illustrated in Figure 3B, in a later step.[0073] With reference to Figure 6B, a fifth step to fabricate the nanowire device 300 is removing each block co-polymer layer 1202 of the first material 1204 to form a plurality of exposed portions of the fin structure 714 and a plurality of masked portions of the fin structure 714, each masked portion of the plurality of masked portions being masked by a block co-polymer layer 1202 of the second material 1206 (block 608 of Figure 6B). Figures 15A and 15B illustrate a ninth stage 1500 in the fabrication of the nanowire device 300 according to this fifth step in profile and cross section views, respectively. In particular, Figures 15A and 15B illustrate the absence of each block copolymer layer 1202 of the first material 1204 from the plurality of block co-polymer layers 1202, leaving only each block co-polymer layer 1202 of the second material 1206. This fifth step is performed to allow etching of the material 328 out of the fin structures 714 and 715 and to form the separation areas 352 of the continuously stacked nanowires 310(1-M), illustrated in Figure 3B, in a later step. 
The block co-polymer layers 1202 of the second material 1206 protect sections of the fin structures 714 and 715 from the etching forming the separation areas 352 in a later step (not shown), the protected sections becoming the central portions of the continuously stacked nanowires 310(1-M), illustrated in Figure 3B.[0074] With continuing reference to Figure 6B, a sixth step to fabricate the nanowire device 300 is etching a plurality of trenches 356(1-6) in the fin structure 714 in each of the plurality of exposed portions of the fin structure 714, along the length 718 of the fin structure 714 on one of the first lateral side 720 and the second lateral side 722 of the fin structure 714 to form a plurality of continuously stacked nanowires 310(1-M) separated by a plurality of separation areas 352(1) and 352(2), each of the plurality of separation areas 352(1) and 352(2) comprising a first trench of the plurality of trenches 356(1-6) on the first lateral side 720 and a second trench of the plurality of trenches 356(1-6) on the second lateral side 722 (block 610 of Figure 6B). Figures 16A and 16B illustrate a tenth stage 1600 in the fabrication of the nanowire device 300 according to this sixth step in profile and cross section views, respectively. In particular, the tenth stage 1600 illustrates the plurality of trenches 356(1-6) in the fin structure 714 and a plurality of trenches 357 (1-6) in the fin structure 715. In the fin structure 714, the plurality of trenches 356(1-6) are etched on the first and second lateral sides 720 and 722, and in the fin structure 715, the plurality of trenches 357(1-6) are etched on the first and second lateral sides 721 and 723. The plurality of trenches 356(1-6) and 357(1-6) form a plurality of continuously stacked nanowires 310(1-6) separated by a plurality of separation areas 352(1-4), each of the plurality of separation areas 352(1-4) comprising first and second trenches of the plurality of trenches 356(1-6) and 357(1-6). 
The etching to create the plurality of trenches 356(1-6) and 357(1-6) can be performed as a time-based wet chemical etch, where the material 328 of the fin structures 714 and 715 is exposed to a wet chemical for a predetermined period of time according to a time necessary to etch the material 328 to a stop at a BCC <111> facet sidewall. In particular, when the material 328 is disposed or grown on a (100) surface orientation with a (110) sidewall orientation, exposing a portion of the material 328 to the chemical etch causes an etching stop on a BCC <111> facet sidewall. Thus, the chemical etch forms a triangular recess area, or a trench, as is illustrated in further detail in Figure 16C below with respect to the fin structure 715 of the nanowire device 300.[0075] Figure 16C illustrates an insert section 1602 to provide further detail of elements of the tenth stage 1600 illustrated in Figures 16A and 16B when using a time-based chemical etch. In particular, Figure 16C illustrates a separation area 352(3) in detail. In Figure 16C, the separation area 352(3) comprises a bottom end portion 336(1) of a higher vertically adjacent nanowire 310(4), and a top end portion 326(2) of a lower vertically adjacent nanowire 310(5). The separation area 352(3) further comprises a continuity area 354 having a contact of a bottom end point 340(1) of the higher vertically adjacent nanowire 310(4) and a top end point 332(2) of the lower vertically adjacent nanowire 310(5), and trenches 357(1) and 357(2) adjacent to the continuity area 354 on each of the first lateral side 721 and the second lateral side 723 and between the vertically adjacent nanowires 310(4), 310(5). The trenches 357(1) and 357(2) of the plurality of trenches 357(1-6) have a substantially triangular cross section 330 with BCC <111> facet sidewalls 313(1)(3), 313(1)(4), 313(2)(1), and 313(2)(2).
In particular, the trenches 357(1) and 357(2) have a depth that is substantially half the width 717 of the fin structure 715 at a vertical center 724 of the layer of the material 328 corresponding to the trenches 357(1) and 357(2), substantially zero at an edge 1604(0)-1604(8) of the corresponding trenches 357(1) and 357(2), and substantially linearly variable between the vertical center 724 of the layer of the material 328 corresponding to the trenches 357(1) and 357(2) and the edge 1604(0)-1604(8) of the corresponding trenches 357(1) and 357(2), to form the nanowires 310(4) and 310(5) separated by a separation area 352(3). It is noted that the edges 1604(0)-1604(8) result from the masking of the central portions of the fin structure 715 by the block co-polymer layers 1202 of the second material 1206.[0076] As illustrated in Figure 16C, each of the vertically adjacent nanowires 310(4) and 310(5) has a corresponding cross section 312(4) and 312(5), which in this example is a substantially hexagonal-shaped cross section formed, in part, by the BCC <111> facet sidewalls 313(1)(1)-313(1)(4) and 313(2)(1)-313(2)(4), respectively. Furthermore, the vertically adjacent nanowires 310(4) and 310(5) are interconnected at the continuity area 354. This continuously stacked arrangement allows the gate material 320 to be disposed within the trenches 357(1) and 357(2) between the vertically adjacent nanowires 310(4) and 310(5). With regard to the nanowire device 300 as a whole, having the plurality of nanowires 310(1)-310(6) in the continuously stacked arrangement described above provides improved gate control compared to a fin channel structure of similar height and width, and gate control similar to that of a much taller conventional nanowire channel structure.
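The trench depth profile just described (substantially half the fin width at the vertical center of the etched material layer, substantially zero at the trench edges, and substantially linear in between) can be modeled with a simple geometric sketch. This is an idealization for illustration only; the function name and coordinate convention are assumptions, not part of this disclosure:

```python
def trench_depth(y_nm, layer_height_nm, fin_width_nm):
    """Idealized lateral depth of a facet-stopped trench (an assumption
    for illustration, not a formula from the disclosure).

    y_nm is the vertical position within the etched material layer,
    measured from the layer's bottom edge (0) to its top edge
    (layer_height_nm).  Depth peaks at fin_width_nm / 2 at the layer's
    vertical center and falls linearly to zero at either edge, giving
    the triangular cross section described above.
    """
    center = layer_height_nm / 2.0
    # Linear falloff from the vertical center toward either edge.
    falloff = 1.0 - abs(y_nm - center) / center
    return (fin_width_nm / 2.0) * max(falloff, 0.0)

# Example for a 7 nm wide fin with a hypothetical 10 nm tall etched layer:
# depth is 3.5 nm (half the fin width) at the center, 0 nm at the edges.
depth_center = trench_depth(5.0, 10.0, 7.0)   # 3.5
depth_edge = trench_depth(0.0, 10.0, 7.0)     # 0.0
```

Two such trenches, one on each lateral side, meet in depth at the fin's lateral center, which is why each separation area leaves only the narrow continuity area 354 connecting vertically adjacent nanowires.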
Furthermore, this continuously stacked arrangement provides for a significantly lower parasitic channel capacitance between the vertically adjacent nanowires of the plurality of nanowires 310(1)-310(6) in the nanowire channel 304 compared to the nanowire channel structure 140 illustrated in Figure 2B. Furthermore, this continuously stacked arrangement obviates the need for the vertical separation distance 162 employed in the nanowire channel structure 140, as illustrated in Figure 2B, which allows for shorter nanowire channel structures 306, 308 and for including a higher number of nanowires compared to the nanowire channel structure 140 illustrated in Figure 2B. The shorter nanowire channel structures 306, 308 further provide for a lower parallel plate parasitic capacitance compared to the nanowire channel structure 140 illustrated in Figure 2B.[0077] With continuing reference to Figure 6B, a seventh step to fabricate the nanowire device 300 is removing each block co-polymer layer 1202 of the second material 1206 to expose a central portion 1702 of the plurality of continuously stacked nanowires 310(1-M) (block 612 of Figure 6B). The capping layer 1302 is also removed. Figures 17A and 17B illustrate an eleventh stage 1700 in the fabrication of the nanowire device 300 according to this seventh step in profile and cross section views, respectively. In particular, Figures 17A and 17B illustrate a plurality of exposed central portions 1702(1-6) of the plurality of the continuously stacked nanowires 310(1-3) in the fin structure 714. Furthermore, Figures 17A and 17B illustrate a plurality of exposed central portions 1704(1-6) of the plurality of the continuously stacked nanowires 310(4-6) in the fin structure 715.
EXAMPLE ENVIRONMENT
[0078] Figure 18 illustrates an example environment 1800 that includes a computing device 1802 and a wireless network (not shown) in which the exemplary nanowire device 300 illustrated in Figures 3A and 3B may be employed.
In this example, the computing device 1802 is implemented as a smart-phone. Although not shown, the computing device 1802 may be implemented as any suitable computing or electronic device, such as a modem, cellular base station, broadband router, access point, cellular phone, gaming device, navigation device, media device, laptop computer, cellular test equipment, desktop computer, server, network-attached storage (NAS) device, smart appliance, vehicle-based communication system, and the like. The computing device 1802 communicates data via cell towers 1804(1)-1804(N), which may be configured to provide a wireless network. Although shown as three (3) cell towers, the cell towers 1804(1)-1804(N) may represent any suitable number of cell towers, where N equals any suitable integer.[0079] The computing device 1802 includes a processor 1806 and a computer-readable storage medium (CRM) 1808. The processor 1806 may include any type of processor, such as an application processor or multi-core processor, configured to execute processor-executable code stored by the CRM 1808. The CRM 1808 may include any suitable type of data storage media, such as volatile memory (e.g., random access memory (RAM)), non-volatile memory (e.g., Flash memory), optical media, magnetic media (e.g., disk or tape), and the like. In the context of this disclosure, the CRM 1808 is implemented to store instructions 1810 and data 1812 of the computing device 1802, and thus does not include transitory propagating signals or carrier waves.[0080] The computing device 1802 also includes input/output (I/O) ports 1814, a display 1816, and a wireless interface 1818. The I/O ports 1814 enable data exchanges or interaction with other devices, networks, or users. The I/O ports 1814 may include serial ports (e.g., universal serial bus (USB) ports), parallel ports, audio ports, infrared (IR) ports, and the like.
The display 1816 presents graphics of the computing device 1802, such as a user interface associated with an operating system, program, or application.[0081] The wireless interface 1818 provides connectivity to respective networks and other electronic devices, such as by communicating signals via an antenna 1820. Alternately or additionally, the computing device 1802 may include a wired data interface, such as Ethernet or fiber optic interfaces for communicating over a local network, intranet, or the Internet. To facilitate the communication of signals via these combinations of modes, carriers, and frequencies, the wireless interface 1818 may include a variety of components, such as processors, memories, digital signal processors (DSPs), analog and RF circuits, and the like.[0082] In some aspects, components of the wireless interface 1818 and other components of the computing device 1802 are implemented with CMOS devices 1822, such as the continuously stacked nanowires 310 for the nanowire device illustrated in Figures 3A and 3B. The CMOS devices 1822 may be formed or configured with any suitable technology and include nanowire structures 1824 such as the nanowire channel 304 illustrated in Figures 3A and 3B, the implementations and use of which vary and are described above.[0083] The nanowire channel structures of continuously stacked nanowires for CMOS devices according to aspects disclosed herein may be provided in or integrated into any processor-based device.
Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a smart phone, a tablet, a phablet, a server, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, and an automobile.[0084] In this regard, Figure 19 is a block diagram of an exemplary processor-based system 1900 that can include the exemplary nanowire device 300 illustrated in Figures 3A and 3B. In this example, the processor-based system 1900 includes one or more CPUs 1902, each including one or more processors 1904. The processor-based system 1900 may be provided as a system-on-a-chip (SoC) 1906. The CPU(s) 1902 may have cache memory 1908 coupled to the processor(s) 1904 for rapid access to temporarily stored data. The CPU(s) 1902 is coupled to a system bus 1910 and can intercouple master and slave devices included in the processor-based system 1900. As is well known, the CPU(s) 1902 communicates with these other devices by exchanging address, control, and data information over the system bus 1910. For example, the CPU(s) 1902 can communicate bus transaction requests to a memory controller 1912 in a memory system 1914 as an example of a slave device. Although not illustrated in Figure 19, multiple system buses 1910 could be provided, wherein each system bus 1910 constitutes a different fabric. In this example, the memory controller 1912 is configured to provide memory access requests to a memory array 1916 in the memory system 1914.[0085] Other devices can be connected to the system bus 1910.
As illustrated in Figure 19, these devices can include the memory system 1914, one or more input devices 1918, one or more output devices 1920, one or more network interface devices 1922, and one or more display controllers 1924, as examples. The input device(s) 1918 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. The output device(s) 1920 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. The network interface device(s) 1922 can be any devices configured to allow exchange of data to and from a network 1926. The network 1926 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 1922 can be configured to support any type of communications protocol desired.[0086] The CPU(s) 1902 may also be configured to access the display controller(s) 1924 over the system bus 1910 to control information sent to one or more displays 1928. The display controller(s) 1924 sends information to the display(s) 1928 to be displayed via one or more video processors 1930, which process the information to be displayed into a format suitable for the display(s) 1928. The display(s) 1928 can include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, etc.[0087] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. 
The master and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0088] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. 
A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[0089] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0090] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. 
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0091] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Systems for cooling a datacenter include a liquid-to-liquid (L2L) heat exchanger 340A, B associated with a rear door 368 of a rack 302, 330 which exchanges heat between a primary coolant associated with a chilling facility and a secondary coolant or fluid associated with a computing device 324 of the rack 302, 330. In a first operational mode a secondary coolant flows from a coolant distribution unit (CDU) (Fig. 1: 112) of a secondary cooling loop (Fig. 2: 108) comprising room manifolds (Fig. 2: 118); row manifolds (Fig. 2: 116); and rack manifolds (Fig. 2: 114), to a rear door heat exchanger (RDHX) of a rack. In a second operational mode a local cooling loop utilising a fluid may be enabled to remove heat from a cold plate associated with the computing device. Facility fluid (primary coolant) of a primary cooling loop (Fig. 2: 106) or reservoir 362 may be supplied to remove heat from the secondary coolant or from the fluid. A processor may determine a temperature of a computing device or a fluid from sensor inputs, enable flow of secondary coolant or fluid through the L2L heat exchanger of a rack and prevent flow of secondary coolant to the secondary cooling loop associated with a primary cooling loop and a chilling facility. A server tray (Fig. 2: 202) may be immersive-cooled. A dual-cooling cold plate (Fig. 2: 250) associated with the computer device may have distinct microchannels (Fig. 2: 270; 264) for the secondary coolant and for the fluid. One or more neural networks may receive sensor inputs to infer a first or second cooling requirement, such as a failure of a secondary or primary cooling loop. Neural networks may be trained to infer that a change in a coolant state has occurred; to infer a first cooling requirement associated with the secondary cooling loop; or a second cooling requirement associated with the L2L heat exchanger, based on analysis of prior sensor inputs and prior cooling requirements.
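The two operational modes summarized above can be sketched as a simple selection rule: normal cooling through the secondary cooling loop, with failover to the local loop through the rear-door liquid-to-liquid heat exchanger. This is a hypothetical illustration only; the threshold value, argument names, and failure flag are assumptions and do not appear in the source:

```python
def select_cooling_mode(device_temp_c, secondary_loop_failed,
                        temp_threshold_c=70.0):
    """Pick between the secondary cooling loop (first mode) and the local
    loop through the rear-door liquid-to-liquid heat exchanger (second mode).

    Returns flow-controller settings: in the second mode, flow is enabled
    through the L2L heat exchanger while flow of secondary coolant back to
    the secondary cooling loop is prevented, as described above.  The
    70 C threshold is an arbitrary placeholder for illustration.
    """
    if secondary_loop_failed or device_temp_c > temp_threshold_c:
        # Second mode: local loop through the rear-door L2L exchanger.
        return {"l2l_heat_exchanger": True, "secondary_loop": False}
    # First mode: normal CDU-driven secondary coolant flow.
    return {"l2l_heat_exchanger": False, "secondary_loop": True}

normal = select_cooling_mode(65.0, False)   # secondary loop carries the heat
failover = select_cooling_mode(65.0, True)  # L2L exchanger takes over
```

In the described system, the inputs to such a rule would come from the sensor inputs (optionally via the neural networks that infer a cooling requirement or a change in coolant state), and the outputs would drive the flow controllers.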
CLAIMS
1. A datacenter cooling system, comprising: a liquid-to-liquid heat exchanger associated with a rear door of a rack, the liquid-to-liquid heat exchanger to exchange heat between a primary coolant associated with a chilling facility and a secondary coolant or fluid associated with a computing device of the rack.
2. The datacenter cooling system of claim 1, further comprising: at least one processor to determine a temperature associated with the computing device, with the secondary coolant, or with the fluid, and to cause at least one flow controller to adjust flow rate or flow volume of one or more of the primary coolant, the secondary coolant, or the fluid through the liquid-to-liquid heat exchanger.
3. The datacenter cooling system of claim 1 or claim 2, further comprising: at least one flow controller associated with the liquid-to-liquid heat exchanger, the at least one flow controller to be enabled based in part on a cooling requirement for the secondary coolant or for the fluid.
4. The datacenter cooling system of any preceding claim, further comprising: a cold plate associated with the computing device and having first ports for a first portion of microchannels to support the secondary coolant distinctly from second ports for a second portion of the microchannels to support the fluid.
5.
The datacenter cooling system of any preceding claim, further comprising: at least one processor to receive sensor inputs from sensors associated with the computing device, the secondary coolant, the primary coolant, or the fluid, the at least one processor to determine a change in a coolant state based in part on the sensor inputs and to cause at least one flow controller to stop or change a flow of the secondary coolant or the fluid, the stopping or the changing of the flow to enable removal of more or less heat from the computing device.
6. The datacenter cooling system of claim 5, further comprising: one or more neural networks to receive the sensor inputs and to infer the change in the coolant state.
7. The datacenter cooling system of any preceding claim, further comprising: at least one processor to cause at least one flow controller to enable flow of the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to a secondary cooling loop.
8. The datacenter cooling system of any preceding claim, further comprising: a latching mechanism to enable association of the liquid-to-liquid heat exchanger with a rear door of the rack.
9. The datacenter cooling system of any preceding claim, further comprising: at least one flow controller associated with the liquid-to-liquid heat exchanger and a secondary cooling loop, the at least one flow controller to support flow of the fluid or the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to the secondary cooling loop.
10. The datacenter cooling system of any preceding claim, further comprising: at least one processor to enable a first mode of the datacenter cooling system to provide cooling from the liquid-to-liquid heat exchanger and to enable a second mode to provide cooling from a secondary cooling loop associated with a primary cooling loop and the chilling facility.
11.
A processor comprising one or more circuits, the one or more circuits to enable at least one flow controller to cause secondary coolant or fluid to flow through a liquid-to-liquid heat exchanger that is associated with a rear door of a rack and to prevent flow of the secondary coolant or the fluid to a secondary cooling loop associated with a primary cooling loop and a chilling facility, the liquid-to-liquid heat exchanger to enable exchange of heat from the secondary coolant or the fluid to a primary coolant of the primary cooling loop.
12. The processor of claim 11, further comprising: an output to provide signals for the at least one flow controller to enable flow of the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to the secondary cooling loop.
13. The processor of claim 11 or claim 12, further comprising: an input to receive sensor inputs from sensors associated with at least one computing device, the rack, a secondary coolant, or the fluid, the processor to determine a first cooling requirement associated with the secondary cooling loop and a second cooling requirement associated with a liquid-to-liquid heat exchanger, based in part on the sensor inputs.
14. The processor of claim 13, further comprising: one or more neural networks to receive the sensor inputs and to infer the first cooling requirement and the second cooling requirement.
15. The processor of any of claims 11-14, further comprising: one or more neural networks to infer a failure of the secondary cooling loop, the one or more circuits to cause at least one flow controller to activate the liquid-to-liquid heat exchanger to cool the secondary coolant and to prevent the secondary coolant from returning to the secondary cooling loop.
16.
A processor comprising one or more circuits, the one or more circuits to train one or more neural networks to infer, from sensor inputs of sensors associated with a datacenter cooling system, that a change in a coolant state has occurred, the processor to enable at least one flow controller to cause secondary coolant or fluid to flow through a liquid-to-liquid heat exchanger associated with a rear door of a rack, the liquid-to-liquid heat exchanger to enable exchange of heat from the secondary coolant or the fluid to a primary coolant of a primary cooling loop associated with a chilling facility.
17. The processor of claim 16, further comprising: an output to provide signals for the at least one flow controller to enable flow of the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to the secondary cooling loop of the datacenter cooling system.
18. The processor of claim 16 or claim 17, further comprising: the one or more neural networks to receive the sensor inputs and to be trained to infer a first cooling requirement associated with the secondary cooling loop and a second cooling requirement associated with the liquid-to-liquid heat exchanger, based in part on an analysis of prior sensor inputs and prior cooling requirements.
19. The processor of any of claims 16-18, further comprising: an output to provide signals to cause one or more of the liquid-to-liquid heat exchanger or the secondary cooling loop to be adjusted to address different cooling requirements.
20.
The processor of any of claims 16-19, further comprising: an input to receive the sensor inputs associated with a temperature from the at least one computing device, the secondary coolant, or the fluid, the one or more neural networks trained to infer that the change in the coolant state has occurred based in part on the temperature and on prior temperatures, the change in the coolant state associated with a change in a flow rate, a flow volume, or a fluid temperature with respect to one or more thresholds for the secondary coolant or the fluid.
21. A processor comprising one or more circuits, the one or more circuits to comprise one or more neural networks to infer, from sensor inputs of sensors associated with a datacenter cooling system, that a change in a coolant state has occurred, the processor to enable at least one flow controller to cause secondary coolant or fluid to flow through a liquid-to-liquid heat exchanger associated with a rear door of a rack, the liquid-to-liquid heat exchanger to enable exchange of heat from the secondary coolant or the fluid to a primary coolant of a primary cooling loop associated with a chilling facility.
22. The processor of claim 21, further comprising: an output to provide signals for the at least one flow controller to enable flow of the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to the secondary cooling loop of the datacenter cooling system.
23. The processor of claim 21 or claim 22, further comprising: the one or more neural networks to receive the sensor inputs and to infer a first cooling requirement associated with the secondary cooling loop and a second cooling requirement associated with the liquid-to-liquid heat exchanger based in part on an analysis of prior sensor inputs and prior cooling requirements.
24.
The processor of any of claims 21-23, further comprising: an output to provide signals to cause one or more of the liquid-to-liquid heat exchanger or the secondary cooling loop to be adjusted to address different cooling requirements.
25. The processor of any of claims 21-24, further comprising: an input to receive the sensor inputs associated with a temperature from the at least one computing device, the secondary coolant, or the fluid, the one or more neural networks to infer that the change in the coolant state has occurred based in part on the temperature and on prior temperatures, the change in the coolant state associated with a change in a flow rate, a flow volume, or a fluid temperature with respect to one or more thresholds for the secondary coolant or the fluid.
26. A method for a datacenter cooling system, comprising: providing a liquid-to-liquid heat exchanger associated with a rear door of a rack; determining cooling requirements for at least one computing device of the rack; and enabling the liquid-to-liquid heat exchanger to exchange heat between a primary coolant associated with a chilling facility and a secondary coolant or fluid associated with the at least one computing device of the rack.
27. The method of claim 26, further comprising: determining, using at least one processor, a temperature associated with the at least one computing device in the rack; determining a first cooling requirement or a second cooling requirement using the temperature; and causing, based in part on the first cooling requirement or the second cooling requirement, the liquid-to-liquid heat exchanger or the secondary cooling loop to cause cooling of the secondary coolant or the fluid.
28.
The method of claim 27, further comprising: receiving, in at least one processor, sensor inputs from sensors associated with the at least one computing device, the rack, the secondary coolant, or the fluid; and determining, using the at least one processor, the first cooling requirement and the second cooling requirement based in part on the sensor inputs.
29. The method of any of claims 26-28, further comprising: enabling, using a latching mechanism, the association of the liquid-to-liquid heat exchanger with the rear door of the rack.
30. The method of any of claims 26-29, further comprising: receiving, by at least one processor, sensor inputs from sensors associated with the at least one computing device; determining, by the at least one processor, a change in a coolant state based in part on the sensor inputs; and causing, based in part on the change in the coolant state, the liquid-to-liquid heat exchanger to cause cooling of the secondary coolant or the fluid.
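Method claims 26-30 describe a sequence of receiving sensor inputs, detecting a change in a coolant state against one or more thresholds, and causing cooling. A minimal sketch of that sequence follows; the helper names, dictionary keys, and threshold values are illustrative assumptions, as the claims do not specify a concrete implementation.

```python
def coolant_state_changed(sample: dict, thresholds: dict) -> bool:
    """Flag a change in coolant state when any monitored quantity
    (temperature, flow rate, flow volume, ...) exceeds its threshold."""
    return any(sample.get(key, 0.0) > limit for key, limit in thresholds.items())

def cooling_action(sample: dict, thresholds: dict) -> str:
    """Pick a cooling path from a sensor sample: divert to the rear-door
    L2L heat exchanger on a state change, otherwise keep the secondary
    cooling loop."""
    if coolant_state_changed(sample, thresholds):
        return "enable_l2l_heat_exchanger"
    return "use_secondary_cooling_loop"
```

In this sketch, a reading of `{"temp_c": 50.0}` against a `{"temp_c": 45.0}` threshold would trigger the L2L heat exchanger path.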
INTELLIGENT REAR DOOR HEAT EXCHANGER FOR LOCAL COOLING LOOPS IN A DATACENTER COOLING SYSTEM
FIELD
[0001] At least one embodiment pertains to cooling systems, including systems and methods for operating those cooling systems. In at least one embodiment, such a cooling system can be utilized in a datacenter containing one or more racks or computing servers.
BACKGROUND
[0002] Datacenter cooling systems use fans to circulate air through server components. Certain supercomputers or other high capacity computers may use water or other cooling systems instead of air-cooling systems to draw heat away from the server components or racks of the datacenter to an area external to the datacenter. The cooling systems may include a chiller within the datacenter area, which may include an area external to the datacenter itself. Further, the area external to the datacenter may include a cooling tower or other external heat exchanger that receives heated coolant from the datacenter and that disperses the heat by forced air or other means to the environment (or an external cooling medium). The cooled coolant is recirculated back into the datacenter.
The chiller and the cooling tower together form a chilling facility.
SUMMARY OF THE INVENTION
[0003] Aspects and embodiments of the present invention are set out in the appended claims. These and other aspects and embodiments of the invention are also described herein.
[0004] According to an aspect described herein, there may be provided a datacenter cooling system, comprising: a liquid-to-liquid heat exchanger associated with a rear door of a rack, the liquid-to-liquid heat exchanger to exchange heat between a primary coolant associated with a chilling facility and a secondary coolant or fluid associated with a computing device of the rack.
[0005] The datacenter cooling system may further comprise: at least one processor to determine a temperature associated with the computing device, with the secondary coolant, or with the fluid, and to cause at least one flow controller to adjust flow rate or flow volume of one or more of the primary coolant, the secondary coolant, or the fluid through the liquid-to-liquid heat exchanger.
[0006] The datacenter cooling system may further comprise: at least one flow controller associated with the liquid-to-liquid heat exchanger, the at least one flow controller to be enabled based in part on a cooling requirement for the secondary coolant or for the fluid.
[0007] The datacenter cooling system may further comprise: a cold plate associated with the computing device and having first ports for a first portion of microchannels to support the secondary coolant distinctly from second ports for a second portion of the microchannels to support the fluid.
[0008] The datacenter cooling system may further comprise: at least one processor to receive sensor inputs from sensors associated with the computing device, the secondary coolant, the primary coolant, or the fluid, the at least one processor to determine a change in a coolant state based in part on the sensor inputs and to cause at least one flow controller to stop or change a flow of the
secondary coolant or the fluid, the stopping or the changing of the flow to enable removal of more or less heat from the computing device.
[0009] The datacenter cooling system may further comprise: one or more neural networks to receive the sensor inputs and to infer the change in the coolant state.
[0010] The datacenter cooling system may further comprise: at least one processor to cause at least one flow controller to enable flow of the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to a secondary cooling loop.
[0011] The datacenter cooling system may further comprise: a latching mechanism to enable association of the liquid-to-liquid heat exchanger with a rear door of the rack.
[0012] The datacenter cooling system may further comprise: at least one flow controller associated with the liquid-to-liquid heat exchanger and a secondary cooling loop, the at least one flow controller to support flow of the fluid or the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to the secondary cooling loop.
[0013] The datacenter cooling system may further comprise: at least one processor to enable a first mode of the datacenter cooling system to provide cooling from the liquid-to-liquid heat exchanger and to enable a second mode to provide cooling from a secondary cooling loop associated with a primary cooling loop and the chilling facility.
[0014] According to an aspect described herein, there may be provided a processor comprising one or more circuits, the one or more circuits to enable at least one flow controller to cause secondary coolant or fluid to flow through a liquid-to-liquid heat exchanger that is associated with a rear door of a rack and to prevent flow of the secondary coolant or the fluid to a secondary cooling loop associated with a primary cooling loop and a chilling facility, the liquid-to-liquid heat exchanger to enable exchange of heat from the
secondary coolant or the fluid to a primary coolant of the primary cooling loop.
[0015] The processor may further comprise: an output to provide signals for the at least one flow controller to enable flow of the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to the secondary cooling loop.
[0016] The processor may further comprise: an input to receive sensor inputs from sensors associated with at least one computing device, the rack, a secondary coolant, or the fluid, the processor to determine a first cooling requirement associated with the secondary cooling loop and a second cooling requirement associated with a liquid-to-liquid heat exchanger, based in part on the sensor inputs.
[0017] The processor may further comprise: one or more neural networks to receive the sensor inputs and to infer the first cooling requirement and the second cooling requirement.
[0018] The processor may further comprise: one or more neural networks to infer a failure of the secondary cooling loop, the one or more circuits to cause at least one flow controller to activate the liquid-to-liquid heat exchanger to cool the secondary coolant and to prevent the secondary coolant from returning to the secondary cooling loop.
[0019] According to an aspect described herein, there may be provided a processor comprising one or more circuits, the one or more circuits to train one or more neural networks to infer, from sensor inputs of sensors associated with a datacenter cooling system, that a change in a coolant state has occurred, the processor to enable at least one flow controller to cause secondary coolant or fluid to flow through a liquid-to-liquid heat exchanger associated with a rear door of a rack, the liquid-to-liquid heat exchanger to enable exchange of heat from the secondary coolant or the fluid to a primary coolant of a primary cooling loop associated with a chilling facility.
[0020] The processor may further comprise: an output
to provide signals for the at least one flow controller to enable flow of the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to the secondary cooling loop of the datacenter cooling system.
[0021] The processor may further comprise: the one or more neural networks to receive the sensor inputs and to be trained to infer a first cooling requirement associated with the secondary cooling loop and a second cooling requirement associated with the liquid-to-liquid heat exchanger, based in part on an analysis of prior sensor inputs and prior cooling requirements.
[0022] The processor may further comprise: an output to provide signals to cause one or more of the liquid-to-liquid heat exchanger or the secondary cooling loop to be adjusted to address different cooling requirements.
[0023] The processor may further comprise: an input to receive the sensor inputs associated with a temperature from the at least one computing device, the secondary coolant, or the fluid, the one or more neural networks trained to infer that the change in the coolant state has occurred based in part on the temperature and on prior temperatures, the change in the coolant state associated with a change in a flow rate, a flow volume, or a fluid temperature with respect to one or more thresholds for the secondary coolant or the fluid.
[0024] According to an aspect described herein, there may be provided a processor comprising one or more circuits, the one or more circuits to comprise one or more neural networks to infer, from sensor inputs of sensors associated with a datacenter cooling system, that a change in a coolant state has occurred, the processor to enable at least one flow controller to cause secondary coolant or fluid to flow through a liquid-to-liquid heat exchanger associated with a rear door of a rack, the liquid-to-liquid heat exchanger to enable exchange of heat from the secondary coolant or the fluid to a primary coolant of a primary
cooling loop associated with a chilling facility.
[0025] The processor may further comprise: an output to provide signals for the at least one flow controller to enable flow of the secondary coolant through the liquid-to-liquid heat exchanger and to prevent flow of the secondary coolant to the secondary cooling loop of the datacenter cooling system.
[0026] The processor may further comprise: the one or more neural networks to receive the sensor inputs and to infer a first cooling requirement associated with the secondary cooling loop and a second cooling requirement associated with the liquid-to-liquid heat exchanger based in part on an analysis of prior sensor inputs and prior cooling requirements.
[0027] The processor may further comprise: an output to provide signals to cause one or more of the liquid-to-liquid heat exchanger or the secondary cooling loop to be adjusted to address different cooling requirements.
[0028] The processor may further comprise: an input to receive the sensor inputs associated with a temperature from the at least one computing device, the secondary coolant, or the fluid, the one or more neural networks to infer that the change in the coolant state has occurred based in part on the temperature and on prior temperatures, the change in the coolant state associated with a change in a flow rate, a flow volume, or a fluid temperature with respect to one or more thresholds for the secondary coolant or the fluid.
[0029] According to an aspect described herein, there may be provided a method for a datacenter cooling system, comprising: providing a liquid-to-liquid heat exchanger associated with a rear door of a rack; determining cooling requirements for at least one computing device of the rack; and enabling the liquid-to-liquid heat exchanger to exchange heat between a primary coolant associated with a chilling facility and a secondary coolant or fluid associated with the at least one computing device of the rack.
[0030] The method may further comprise:
determining, using at least one processor, a temperature associated with the at least one computing device in the rack; determining a first cooling requirement or a second cooling requirement using the temperature; and causing, based in part on the first cooling requirement or the second cooling requirement, the liquid-to-liquid heat exchanger or the secondary cooling loop to cause cooling of the secondary coolant or the fluid.
[0031] The method may further comprise: receiving, in at least one processor, sensor inputs from sensors associated with the at least one computing device, the rack, the secondary coolant, or the fluid; and determining, using the at least one processor, the first cooling requirement and the second cooling requirement based in part on the sensor inputs.
[0032] The method may further comprise: enabling, using a latching mechanism, the association of the liquid-to-liquid heat exchanger with the rear door of the rack.
[0033] The method may further comprise: receiving, by at least one processor, sensor inputs from sensors associated with the at least one computing device; determining, by the at least one processor, a change in a coolant state based in part on the sensor inputs; and causing, based in part on the change in the coolant state, the liquid-to-liquid heat exchanger to cause cooling of the secondary coolant or the fluid.
[0034] According to various aspects described herein, there may be provided systems and methods for cooling a datacenter.
In at least one embodiment, a liquid-to-liquid heat exchanger associated with a rear door of a rack may exchange heat between a primary coolant associated with a chilling facility and a secondary coolant or fluid associated with a computing device of the rack.
[0035] The disclosure extends to any novel aspects or features described and/or illustrated herein.
[0036] Further features of the disclosure are characterized by the independent and dependent claims.
[0037] Any feature in one aspect of the disclosure may be applied to other aspects of the disclosure, in any appropriate combination. In particular, method aspects may be applied to apparatus or system aspects, and vice versa.
[0038] Furthermore, features implemented in hardware may be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.
[0039] Any system or apparatus feature as described herein may also be provided as a method feature, and vice versa. System and/or apparatus aspects described functionally (including means plus function features) may be expressed alternatively in terms of their corresponding structure, such as a suitably programmed processor and associated memory.
[0040] It should also be appreciated that particular combinations of the various features described and defined in any aspects of the disclosure can be implemented and/or supplied and/or used independently.
[0041] The disclosure also provides computer programs and computer program products comprising software code adapted, when executed on a data processing apparatus, to perform any of the methods and/or for embodying any of the apparatus and system features described herein, including any or all of the component steps of any method.
[0042] The disclosure also provides a computer or computing system (including networked or distributed systems) having an operating system which supports a computer program for carrying out any of the methods described herein and/or for
embodying any of the apparatus or system features described herein.
[0043] The disclosure also provides computer readable media having stored thereon any one or more of the computer programs aforesaid.
[0044] The disclosure also provides a signal carrying any one or more of the computer programs aforesaid.
[0045] The disclosure extends to methods and/or apparatus and/or systems as herein described with reference to the accompanying drawings.
[0046] Aspects and embodiments of the disclosure will now be described purely by way of example, with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0047] Figure 1 illustrates an exemplary datacenter cooling system subject to improvements described in at least one embodiment;
[0048] Figure 2 illustrates server-level features associated with an intelligent rear door heat exchanger for local cooling loops for a datacenter cooling system, according to at least one embodiment;
[0049] Figure 3 illustrates rack-level features associated with an intelligent rear door heat exchanger for local cooling loops for a datacenter cooling system, according to at least one embodiment;
[0050] Figure 4 illustrates datacenter-level features associated with an intelligent rear door heat exchanger for local cooling loops for a datacenter cooling system, according to at least one embodiment;
[0051] Figure 5 illustrates a method associated with a datacenter cooling system of Figures 2-4, according to at least one embodiment;
[0052] Figure 6 illustrates a distributed system, in accordance with at least one embodiment;
[0053] Figure 7 illustrates an exemplary datacenter, in accordance with at least one embodiment;
[0054] Figure 8 illustrates a client-server network, in accordance with at least one embodiment;
[0055] Figure 9 illustrates a computer network, in accordance with at least one embodiment;
[0056] Figure 10A illustrates a networked computer system, in accordance with at least one embodiment;
[0057] Figure 10B illustrates a
networked computer system, in accordance with at least one embodiment;
[0058] Figure 10C illustrates a networked computer system, in accordance with at least one embodiment;
[0059] Figure 11 illustrates one or more components of a system environment in which services may be offered as third-party network services, in accordance with at least one embodiment;
[0060] Figure 12 illustrates a cloud computing environment, in accordance with at least one embodiment;
[0061] Figure 13 illustrates a set of functional abstraction layers provided by a cloud computing environment, in accordance with at least one embodiment;
[0062] Figure 14 illustrates a supercomputer at a chip level, in accordance with at least one embodiment;
[0063] Figure 15 illustrates a supercomputer at a rack module level, in accordance with at least one embodiment;
[0064] Figure 16 illustrates a supercomputer at a rack level, in accordance with at least one embodiment;
[0065] Figure 17 illustrates a supercomputer at a whole system level, in accordance with at least one embodiment;
[0066] Figure 18A illustrates inference and/or training logic, in accordance with at least one embodiment;
[0067] Figure 18B illustrates inference and/or training logic, in accordance with at least one embodiment;
[0068] Figure 19 illustrates training and deployment of a neural network, in accordance with at least one embodiment;
[0069] Figure 20 illustrates an architecture of a system of a network, in accordance with at least one embodiment;
[0070] Figure 21 illustrates an architecture of a system of a network, in accordance with at least one embodiment;
[0071] Figure 22 illustrates a control plane protocol stack, in accordance with at least one embodiment;
[0072] Figure 23 illustrates a user plane protocol stack, in accordance with at least one embodiment;
[0073] Figure 24 illustrates components of a core network, in accordance with at least one embodiment;
[0074] Figure 25 illustrates components of a system to support network
function virtualization (NFV), in accordance with at least one embodiment;
[0075] Figure 26 illustrates a processing system, in accordance with at least one embodiment;
[0076] Figure 27 illustrates a computer system, in accordance with at least one embodiment;
[0077] Figure 28 illustrates a system, in accordance with at least one embodiment;
[0078] Figure 29 illustrates an exemplary integrated circuit, in accordance with at least one embodiment;
[0079] Figure 30 illustrates a computing system, in accordance with at least one embodiment;
[0080] Figure 31 illustrates an APU, in accordance with at least one embodiment;
[0081] Figure 32 illustrates a CPU, in accordance with at least one embodiment;
[0082] Figure 33 illustrates an exemplary accelerator integration slice, in accordance with at least one embodiment;
[0083] Figures 34A-34B illustrate exemplary graphics processors, in accordance with at least one embodiment;
[0084] Figure 35A illustrates a graphics core, in accordance with at least one embodiment;
[0085] Figure 35B illustrates a GPGPU, in accordance with at least one embodiment;
[0086] Figure 36A illustrates a parallel processor, in accordance with at least one embodiment;
[0087] Figure 36B illustrates a processing cluster, in accordance with at least one embodiment;
[0088] Figure 36C illustrates a graphics multiprocessor, in accordance with at least one embodiment;
[0089] Figure 37 illustrates a software stack of a programming platform, in accordance with at least one embodiment;
[0090] Figure 38 illustrates a CUDA implementation of a software stack of Figure 37, in accordance with at least one embodiment;
[0091] Figure 39 illustrates a ROCm implementation of a software stack of Figure 37, in accordance with at least one embodiment;
[0092] Figure 40 illustrates an OpenCL implementation of a software stack of Figure 37, in accordance with at least one embodiment;
[0093] Figure 41 illustrates software that is supported by a programming platform, in accordance
with at least one embodiment; and
[0094] Figure 42 illustrates compiling code to execute on programming platforms of Figures 37-40, in accordance with at least one embodiment.
DETAILED DESCRIPTION
[0095] In at least one embodiment, an exemplary datacenter 100 can be utilized as illustrated in Figure 1, which has a cooling system subject to improvements described herein. In at least one embodiment, numerous specific details are set forth to provide a thorough understanding, but concepts herein may be practiced without one or more of these specific details. In at least one embodiment, datacenter cooling systems can respond to sudden high heat requirements caused by changing computing loads in present day computing components. In at least one embodiment, as these requirements are subject to change or tend to range from a minimum to a maximum of different cooling requirements, these requirements must be met in an economical manner, using an appropriate cooling system. In at least one embodiment, for moderate to high cooling requirements, a liquid cooling system may be used. In at least one embodiment, high cooling requirements are economically satisfied by localized immersion cooling. In at least one embodiment, these different cooling requirements also reflect different heat features of a datacenter. In at least one embodiment, heat generated from these components, servers, and racks is cumulatively referred to as a heat feature or a cooling requirement, as a cooling requirement must address a heat feature entirely.
[0096] In at least one embodiment, a datacenter liquid cooling system is disclosed. In at least one embodiment, this datacenter cooling system addresses heat features in associated computing or datacenter devices, such as in graphics processing units (GPUs), in switches, in dual inline memory modules (DIMMs), or central processing units (CPUs). In at least one embodiment, these components may be referred to herein as high heat density computing components.
Furthermore, in at least one embodiment, an associated computing or datacenter device may be a processing card having one or more GPUs, switches, or CPUs thereon. In at least one embodiment, each of GPUs, switches, and CPUs may be a heat generating feature of a computing device. In at least one embodiment, a GPU, a CPU, or a switch may have one or more cores, and each core may be a heat generating feature.[0097] In at least one embodiment, a liquid-to-liquid (L2L) heat exchanger may be provided on a rear door of a rack, instead of a fan wall. In at least one embodiment, an L2L heat exchanger may enable local cooling loops in a datacenter cooling system. In at least one embodiment, local cooling loops may be enabled by an L2L heat exchanger that may be associated with, such as being hung on, features of a rear door or incorporated in features of a rear door. In at least one embodiment, one or more flow controllers may be used to control coolant flow from a cold plate to interface with primary coolant from a chilling facility. In at least one embodiment, at least one flow controller may be used to prevent coolant flow from a cold plate to a secondary cooling loop that is associated with a primary cooling loop and a chilling facility and may instead enable a local cooling loop with an L2L heat exchanger. In at least one embodiment, a local cooling loop enables removal of heat associated with at least one computing device by a rear door heat exchanger instead of requiring CDUs and other aspects associated with a secondary cooling loop.[0098] In at least one embodiment, liquid cooling in local cooling loops is enabled herein for datacenter cooling systems. In at least one embodiment, instead of fan walls associated with a rear door, an intelligent facility-fluid assisted rear-door heat exchanger (RDHX) is provided to cool fluid (such as coolant, including secondary coolant) instead of a secondary cooling loop.
In at least one embodiment, secondary coolant of a secondary cooling loop may be diverted to an RDHX, also referred to herein as an L2L heat exchanger. In at least one embodiment, facility fluid (such as used for primary coolant) may be supplied to an RDHX to enable removal of heat from secondary coolant of a cold plate. In at least one embodiment, intelligent control may be offered via sensors and flow controllers to control flow of facility fluid (or other primary fluid) and of secondary coolant or other local coolant or fluid, for cooling purposes. In at least one embodiment, such features may also be available to cool immersive cooling blade servers.[0099] In at least one embodiment, a problem addressed herein is to provide reliable targeted cooling of high heat-density GPU/Switch/CPU and related components of a datacenter. In at least one embodiment, these components may require additional plumbing under or over servers' racks. In at least one embodiment, valuable datacenter space may have been needed for distribution units (such as CDUs) and for in-row heat exchangers (IRHXs), which may be replaced by an intelligent rear door heat exchanger for local cooling loops for a datacenter cooling system. In at least one embodiment, additional components otherwise used, and now addressed by an intelligent rear door heat exchanger for local cooling loops, may have also contributed to possible modes of failure in a datacenter. In at least one embodiment, such failures addressed include failures from potential leaks or from pressure leaks due to distances travelled by a secondary coolant.[0100] In at least one embodiment, an intelligent rear door heat exchanger for local cooling loops for a datacenter cooling system can reduce or eliminate air movement design from wall fans and can eliminate CDU requirements.
In at least one embodiment, an intelligent rear door heat exchanger for local cooling loops for a datacenter cooling system can create a thermally neutral design using facility fluid in an indirect contact heat exchanger design placed in a rear door to exchange heat of a coolant of a secondary cooling loop received from a cold plate in a rack.[0101] In at least one embodiment, an exemplary datacenter 100 can be utilized as illustrated in Figure 1, which has a cooling system subject to improvements described herein. In at least one embodiment, a datacenter 100 may be one or more rooms 102 having racks 110 and auxiliary equipment to house one or more servers on one or more server trays. In at least one embodiment, a datacenter 100 is supported by a cooling tower 104 located external to a datacenter 100. In at least one embodiment, a cooling tower 104 dissipates heat from within a datacenter 100 by acting on a primary cooling loop 106. In at least one embodiment, a cooling distribution unit (CDU) 112 is used between a primary cooling loop 106 and a second or secondary cooling loop 108 to enable extraction of heat from a second or secondary cooling loop 108 to a primary cooling loop 106. In at least one embodiment, a secondary cooling loop 108 can access various plumbing into a server tray as required, in an aspect. In at least one embodiment, loops 106, 108 are illustrated as line drawings, but a person of ordinary skill would recognize that one or more plumbing features may be used. In at least one embodiment, flexible polyvinyl chloride (PVC) pipes may be used along with associated plumbing to move fluid along in each provided loop 106, 108.
In at least one embodiment, one or more coolant pumps may be used to maintain pressure differences within coolant loops 106, 108 to enable movement of coolant according to temperature sensors in various locations, including in a room, in one or more racks 110, and/or in server boxes or server trays within one or more racks 110.[0102] In at least one embodiment, coolant in a primary cooling loop 106 and in a secondary cooling loop 108 may be at least water and an additive. In at least one embodiment, an additive may be glycol or propylene glycol. In operation, in at least one embodiment, each of a primary and a secondary cooling loop may have its own coolant. In at least one embodiment, coolant in secondary cooling loops may be proprietary to requirements of components in a server tray or in associated racks 110. In at least one embodiment, a CDU 112 is capable of sophisticated control of coolants, independently or concurrently, within provided coolant loops 106, 108. In at least one embodiment, a CDU may be adapted to control flow rate of coolant so that coolant is appropriately distributed to extract heat generated within associated racks 110. In at least one embodiment, more flexible tubing 114 is provided from a secondary cooling loop 108 to enter each server tray to provide coolant to electrical and/or computing components therein.[0103] In at least one embodiment, tubing 118 that forms part of a secondary cooling loop 108 may be referred to as room manifolds. Separately, in at least one embodiment, further tubing 116 may extend from room manifold tubing 118 and may also be part of a secondary cooling loop 108 but may be referred to as row manifolds. In at least one embodiment, coolant tubing 114 enters racks as part of a secondary cooling loop 108 but may be referred to as a rack cooling manifold within one or more racks. In at least one embodiment, row manifolds 116 extend to all racks along a row in a datacenter 100.
In at least one embodiment, plumbing of a secondary cooling loop 108, including coolant manifolds 118, 116, and 114, may be improved by at least one embodiment herein. In at least one embodiment, a chiller 120 may be provided in a primary cooling loop within datacenter 102 to support cooling before a cooling tower. In at least one embodiment, additional cooling loops that may exist in a primary control loop and that provide cooling external to a rack and external to a secondary cooling loop may be taken together with a primary cooling loop and are distinct from a secondary cooling loop, for this disclosure.[0104] In at least one embodiment, in operation, heat generated within server trays of provided racks 110 may be transferred to a coolant exiting one or more racks 110 via flexible tubing of a row manifold 114 of a second cooling loop 108. In at least one embodiment, second coolant (in a secondary cooling loop 108) from a CDU 112, for cooling provided racks 110, moves towards one or more racks 110 via provided tubing. In at least one embodiment, second coolant from a CDU 112 passes from one side of a room manifold having tubing 118, to one side of a rack via a row manifold 116, and through one side of a server tray via different tubing 114. In at least one embodiment, spent or returned second coolant (or exiting second coolant carrying heat from computing components) exits out of another side of a server tray (such as entering a left side of a rack and exiting a right side of a rack for a server tray after looping through a server tray or through components on a server tray). In at least one embodiment, spent second coolant that exits a server tray or a rack 110 comes out of a different side (such as an exiting side) of tubing 114 and moves to a parallel, but also exiting, side of a row manifold 116.
In at least one embodiment, from a row manifold 116, spent second coolant moves in a parallel portion of a room manifold 118 and is going in an opposite direction than incoming second coolant (which may also be renewed second coolant), and towards a CDU 112.[0105] In at least one embodiment, spent second coolant exchanges its heat with a primary coolant in a primary cooling loop 106 via a CDU 112. In at least one embodiment, spent second coolant may be renewed (such as relatively cooled when compared to a temperature at a spent second coolant stage) and ready to be cycled back through a second cooling loop 108 to one or more computing components. In at least one embodiment, various flow and temperature control features in a CDU 112 enable control of heat exchanged from spent second coolant or flow of second coolant in and out of a CDU 112. In at least one embodiment, a CDU 112 may also be able to control a flow of primary coolant in a primary cooling loop 106.[0106] In at least one embodiment, server-level features 200 as illustrated in Figure 2 can be associated with an intelligent rear door heat exchanger for local cooling loops for a datacenter cooling system. In at least one embodiment, server-level features 200 include a server tray or box 202. In at least one embodiment, a server tray or box 202 includes a server manifold 204 to be intermediately coupled between provided cold plates 210A-D of a server tray or box 202 and rack manifolds of a rack hosting a server tray or box 202. In at least one embodiment, a server tray or box 202 includes one or more cold plates 210A-D associated with one or more computing or datacenter components or devices 220A-D. In at least one embodiment, one or more server-level cooling loops 214A, B may be provided between a server manifold 204 and one or more cold plates 210A-D. In at least one embodiment, each server-level cooling loop 214A, B includes an inlet line 210 and an outlet line 212.
In at least one embodiment, when there are series configured cold plates 210A, B, an intermediate line 216 may be provided. In at least one embodiment, one or more cold plates 210A-D may support distinct ports and channels for a secondary coolant of a secondary cooling loop or a different local coolant, such as a fluid circulated from a pre-loaded L2L heat exchanger. In at least one embodiment, fluid for cooling may be provided to a server manifold 204 via provided inlet and outlet lines 206A, 206B.[0107] In at least one embodiment, a server tray 202 is an immersive-cooled server tray that may be flooded by fluid. In at least one embodiment, a fluid for an immersive-cooled server tray may be a dielectric engineered fluid capable of being used in an immersive-cooled server. In at least one embodiment, a secondary coolant or local coolant may be used to cool engineered fluid. In at least one embodiment, a local coolant may be used to cool engineered fluid when a primary cooling loop associated with a secondary cooling loop circulating a secondary coolant has failed or is failing. In at least one embodiment, at least one cold plate therefore has ports for a secondary cooling loop and for a local cooling loop and can support a local cooling loop that is activated in an event of a failure in a primary cooling loop. In at least one embodiment, an intelligent rear door heat exchanger for local cooling loops may be used without a secondary cooling loop.[0108] In at least one embodiment, at least one dual-cooling cold plate 210B, 250 may be configured to work alongside regular cold plates 210A, C, D. In at least one embodiment, a three-dimensional (3D) blow-up illustration (cold plate 250) provides internal detail of at least some features that may be included in a dual-cooling cold plate 210B. In at least one embodiment, a regular cold plate may have one set of microchannels 264, 270 instead of two sets illustrated.
In at least one embodiment, a dual-cooling cold plate 250 has distinct paths 264, 270 (each path also referred to as microchannels) for secondary coolant of a secondary cooling loop and for local coolant of a local cooling loop. In at least one embodiment, secondary or local coolant may not be dielectric in property. In at least one embodiment, in a use case of an immersive-cooled server, fluid that may be a dielectric engineered fluid may be adapted for both a cold plate application and an immersive-cooled server tray application.[0109] In at least one embodiment, reference to a cold plate, along with its dual-cooling features, implies a reference to a cold plate that can support at least two types of cooling loops, unless otherwise stated. In at least one embodiment, both types of cold plates receive fluid for cooling from a same secondary cooling loop and can both support a local cooling loop. In at least one embodiment, a standard coolant, such as facility water, may be used in both a secondary cooling loop and a local cooling loop, but for a limited time. In at least one embodiment, however, facility water is used as a primary coolant. In at least one embodiment, secondary coolant already within a cold plate is diverted to a local cooling loop and may be mixed with pre-loaded secondary coolant already within an L2L heat exchanger.[0110] In at least one embodiment, local coolant may therefore be same or similar to a secondary coolant to avoid issues regarding chemistry differences and manufacturer requirements of cold plates used in a datacenter cooling system. In at least one embodiment, a fluid may only support cold plate usage and may not be available for immersive cooling. In at least one embodiment, each type of cold plate receives different fluid from respective secondary or other cooling loops interfacing with a primary cooling loop.
In at least one embodiment, in situations where different fluids are used with different coolant distribution units (CDUs) of different secondary loops, a local cooling loop may be suited for a dual-cooling cold plate so that different channels may be used for each of a local coolant and different secondary coolants.[0111] In at least one embodiment, a dual-cooling cold plate 250 is adapted to receive two types of fluids (such as a secondary coolant and a local coolant) and to keep two types of fluids distinct from each other via their distinct ports 252, 272; 268, 262 and their distinct paths 264, 270. In at least one embodiment, each distinct path is a fluid path. In at least one embodiment, fluid (such as local coolant) from a fluid source and a secondary coolant may be of a same or similar composition and may be restocked from a same source in a datacenter cooling system.[0112] In at least one embodiment, a dual-cooling cold plate 250 includes ports 252, 272 to receive fluid into a cold plate 250 and to pass fluid out of a cold plate 250. In at least one embodiment, a dual-cooling cold plate 250 includes ports 268, 262 to receive a secondary coolant into a cold plate 250 and to pass a secondary coolant out of a cold plate 250. In at least one embodiment, ports 252, 272 may have valve covers 254, 260 that may be directional and pressure-controlled. In at least one embodiment, valve covers may be associated with all provided ports. In at least one embodiment, provided valve covers 254, 260 are mechanical features of associated flow controllers that also have corresponding electronic features (such as at least one processor to execute instructions stored in associated memory and to control mechanical features for associated flow controllers).[0113] In at least one embodiment, each valve may be actuated by an electronic feature of an associated flow controller.
In at least one embodiment, electronic and mechanical features of provided flow controllers are integrated. In at least one embodiment, electronic and mechanical features of provided flow controllers are physically distinct. In at least one embodiment, reference to flow controllers may be to one or more of provided electronic and mechanical features or to their union but is at least in reference to features enabling control of flow of coolant or fluid through each cold plate or an immersion-cooled server tray or box.[0114] In at least one embodiment, electronic features of provided flow controllers receive control signals and assert control over mechanical features. In at least one embodiment, electronic features of provided flow controllers may be actuators or other electronic parts of other similar electromechanical features. In at least one embodiment, flow pumps may be used as flow controllers. In at least one embodiment, impellers, pistons, or bellows may be mechanical features, and an electronic motor and circuitry form electronic features of provided flow controllers.[0115] In at least one embodiment, circuitry of provided flow controllers may include processors, memories, switches, sensors, and other components, altogether forming electronic features of provided flow controllers. In at least one embodiment, provided ports 252, 262, 272, 268 of provided flow controllers are adapted to either allow entry or to allow egress of an immersive fluid. In at least one embodiment, flow controllers 280 may be associated with fluid lines 276 (also 256, 274) that enable entry and egress of fluid (such as a local coolant) to a cold plate 210B. 
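The split described above between electronic features (which receive control signals) and mechanical features (such as valve covers) of a flow controller can be illustrated with a minimal sketch. This is a hedged illustration only; the class and method names are hypothetical and not part of this disclosure:

```python
# Illustrative sketch: a flow controller whose electronic feature receives
# control signals and asserts control over a mechanical feature (a valve
# cover on a cold plate port). All names here are hypothetical.

class Valve:
    """Mechanical feature: a directional valve cover on a cold plate port."""
    def __init__(self) -> None:
        self.is_open = False

    def actuate(self, open_valve: bool) -> None:
        self.is_open = open_valve


class FlowController:
    """Electronic feature: circuitry (such as at least one processor
    executing stored instructions) driving the mechanical feature."""
    def __init__(self, valve: Valve) -> None:
        self.valve = valve

    def handle_signal(self, signal: str) -> None:
        # An "enable" signal allows entry/egress of fluid; any other
        # signal blocks flow through the associated port.
        self.valve.actuate(signal == "enable")
```

A caller, such as a rack-level control system, would wrap each valved port in a `FlowController` and forward control signals to it.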
In at least one embodiment, other flow controllers may be similarly associated with coolant lines 210, 216, 212 (also 266, 258) to enable entry and egress of a secondary coolant to a cold plate 210B.[0116] In at least one embodiment, fluid (such as a local coolant) enters provided fluid lines 276 via dedicated fluid inlet and outlet lines 208A, B. In at least one embodiment, a server manifold 204 is adapted with channels therein (illustrated by dotted lines) to support distinct paths to distinct fluid lines 276 (also 256, 274) and to any remaining loops 214A, B that are associated with secondary coolant inlet and outlet lines 206A, B. In at least one embodiment, there may be multiple manifolds to support fluid (a local coolant) and secondary coolant distinctly. In at least one embodiment, there may be multiple manifolds to support entry and egress, distinctly, for each of a fluid and a secondary coolant. In at least one embodiment, if a fluid is the same as or similar to a secondary coolant, then at least two different flows via a same fluid path (at least within a cold plate or a server tray) to a fluid source and to a secondary coolant row manifold (such as row manifold 350 in Figure 3) are enabled.[0117] In at least one embodiment, a first flow may be to enable fluid (such as local coolant) to flow through one or more provided ports 252, 272 and an associated path 270. In at least one embodiment, a dual-cooling cold plate 250 may have isolated plate sections that are flooded with a fluid and/or a secondary coolant, while being kept distinct from each other by gaskets or seals. In at least one embodiment, a second flow may be to enable secondary coolant to flow through provided ports 268, 262, and an associated path 264.[0118] In at least one embodiment, flow controllers 278 may be associated with a fluid inlet 276 and outlet portions at a server manifold 204 instead of provided flow controllers 280 at respective cold plates.
In at least one embodiment, a first flow uses only local coolant and may be enabled when a failure is determined in a secondary cooling loop or a primary cooling loop, so that a secondary coolant is unable to effectively extract heat from at least one computing device. In at least one embodiment, a failure may be that a secondary coolant is not sufficiently cooled via a CDU and so it may be unable to extract sufficient heat of at least one computing device via its associated cold plate.[0119] In at least one embodiment, rack-level features 300 as illustrated in Figure 3 can be associated with an intelligent rear door heat exchanger for local cooling loops for a datacenter cooling system. In at least one embodiment, rack-level features 300 include a rack 302 having brackets 304, 306 to hang cooling manifolds 314A, B. In at least one embodiment, while a rack 330 is separately illustrated from a rack 302, this rack 330 may be illustrative of a rear perspective view of a rack 302. In at least one embodiment, as such, brackets 334, 336 provided on rack 330 are perspective views of brackets 304, 306 provided on rack 302. In at least one embodiment, brackets 304, 306 provided for a rack are flat structures against an inner wall of a rack. In at least one embodiment, brackets 304, 306 provided for a rack extend from an inner wall of a rack. In at least one embodiment, brackets 304, 306 provided for a rack are affixed to an inner wall of a rack and have multiple mounting points facing one or more directions, including inside or towards a rear of a rack.[0120] In at least one embodiment, cooling manifolds 314A, B may be provided to pass secondary coolant or local coolant between server-level features 200 (and illustrated in Figure 3 as server trays or boxes 308) and a CDU (such as CDU 406 of Figure 4) of a secondary cooling loop or a local cooling loop of a datacenter cooling system. In at least one embodiment, different CDUs may serve different racks.
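The failure-driven choice described in paragraph [0118], where a first flow of local coolant is enabled only when a failure is determined in a secondary or primary cooling loop, can be sketched as a simple selection function. This is an illustrative assumption, not the disclosed implementation; the port and path numerals in comments refer to Figure 2:

```python
def select_flow(secondary_loop_failed: bool, primary_loop_failed: bool) -> str:
    """Pick which flow a dual-cooling cold plate should use (illustrative).

    A first flow uses local coolant (e.g., ports 252, 272 and path 270)
    when a failure prevents a secondary coolant from effectively extracting
    heat; otherwise a second flow uses secondary coolant (e.g., ports
    268, 262 and path 264).
    """
    if secondary_loop_failed or primary_loop_failed:
        return "first"   # local coolant via a local cooling loop
    return "second"      # secondary coolant via a CDU
```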
In at least one embodiment, different rack cooling manifolds may be distinctly part of a secondary cooling loop and a local cooling loop. In at least one embodiment, at least one server tray or box 308 (such as a bottom-most server tray or box 308 in rack 302) may be designated as a control system for an intelligent rear door heat exchanger for local cooling loops so that, if a local coolant is used, such a system may be isolated from a secondary cooling loop. In at least one embodiment, a control system in a server tray or box 308 may include safety features (such as sensors to provide sensor data for proper function), communication features (to communicate with at least one flow controller for an active mode and to communicate to an external monitor), power features to power one or more flow controllers and at least one processor (and its related features), and control features offered by at least one processor that may be associated with at least one flow controller.[0121] In at least one embodiment, row manifold 350 may be part of a secondary cooling loop to feed an inlet rack manifold 314A via provided lines 310A, 310. In at least one embodiment, secondary coolant proceeds via a provided line 316 to cold plate 326 to extract heat from associated computing device 324 within a server 308; and proceeds via a provided line 318 to outlet rack manifold 314B and through provided lines 312, 312A, and back into a same or a different row manifold 350. In at least one embodiment, an intelligent rear door heat exchanger for local cooling loops can work independent of a secondary cooling loop and can cool either secondary coolant of a secondary cooling loop or local coolant via provided lines 312B, 310B for a local cooling loop.
In at least one embodiment, one or more diverter flow controllers 310C, 312C isolate each of a secondary cooling loop and a local cooling loop.[0122] In at least one embodiment, a datacenter cooling system includes a liquid-to-liquid (L2L) heat exchanger 340A, B that is associated with a rear door 368 of a rack 330 (or 302). In at least one embodiment, an intelligent rear door heat exchanger for local cooling loops includes heat exchange pipes or a gasket heat exchanger forming an L2L heat exchanger 340A, B. In at least one embodiment, sections 340A, B of an L2L heat exchanger may be integrated together into a singular unit and may be internally separated by gaskets to allow separate fluids in different sections of an L2L heat exchanger while sharing common surfaces. In at least one embodiment, this enables a full effect of heat transfer between separate fluids. In at least one embodiment, sections 340A, B of an L2L heat exchanger may be integrated into a rear door of a rack and may be associated with a rack at hinge areas 360 provided on a rack 330 (or 302) or via its brackets 334, 336. In at least one embodiment, heat exchange pipes or a gasket heat exchanger in a first section 340B is adapted to circulate secondary coolant or fluid entering through a provided flow controller 366A. In at least one embodiment, heat exchange pipes or a gasket heat exchanger in a second section 340A circulates facility fluid or other primary coolant, via a provided flow controller 366B, to provide cooling for a secondary coolant or fluid of an L2L heat exchanger 340A, B.[0123] In at least one embodiment, an intelligent rear door heat exchanger for local cooling loops is part of or incorporated within a rear door of a rack 302 (or 330). In at least one embodiment, a separate facility or primary manifold 364 provides facility fluid or primary coolant to one or more L2L heat exchangers of one or more racks.
In at least one embodiment, an L2L heat exchanger 340A, B includes channels instead of pipes or plates to pass fluid for cooling. In at least one embodiment, a datacenter cooling system is able to address a first cooling requirement of a rack 330 (or 302), in a first mode, by an L2L heat exchanger 340A, B of a rack 330. In at least one embodiment, in a first mode, an L2L heat exchanger may be used to dissipate heat from secondary coolant or a fluid of a cold plate via a primary coolant or fluid. In at least one embodiment, a datacenter cooling system is able to address a second cooling requirement of a rack 330 (or 302) in a second mode by a secondary cooling loop interfacing with a CDU, a primary coolant, and a chilling facility. In at least one embodiment, for high-density computing components, both modes are in operation for any cooling requirement determined for a rack. In at least one embodiment, a reservoir 362 may be provided to store primary coolant independent of a chilling facility but provided periodically from a chilling facility.[0124] In at least one embodiment, a first cooling requirement and a second cooling requirement may pertain to different heat features of a datacenter. In at least one embodiment, a first cooling requirement may be associated with heat generated from one or more computing devices that may be addressed by an L2L heat exchanger alone. In at least one embodiment, a second cooling requirement may be associated with heat generated from one or more computing devices by being retained within a fluid, via a cold plate, for instance, and that may need dissipation by one or more of an L2L heat exchanger or a secondary cooling loop.
In at least one embodiment, an amount of heat generated, extracted, or retained may be a temperature value that needs to be below an operating value or an operating range; or that needs to be maintained at an operating value or range.[0125] In at least one embodiment, at least one processor may be provided to determine a temperature associated with a computing device 324 in a rack 330 (or 302). In at least one embodiment, at least one processor is able to cause a datacenter cooling system to operate in a first mode or a second mode based at least in part on a temperature associated with or determined from a computing device 324. In at least one embodiment, an immersive-cooled server 352 within a rack 302 (or 330) may have its cooling requirements addressed concurrently with an air-cooled server 308 within a rack 302 (or 330). In at least one embodiment, an immersive-cooled server 352 may include a dielectric engineered fluid surrounding a computing device. In at least one embodiment, an immersive-cooled server 352 may include a second heat exchanger to exchange heat between a dielectric engineered fluid and fluid to be circulated in an L2L heat exchanger 340.[0126] In at least one embodiment, a cold plate 326 may be associated with a computing device 324. In at least one embodiment, a cold plate may have first ports for a first portion of microchannels to support a secondary coolant distinctly from a second portion of microchannels that support a fluid of an L2L heat exchanger. In at least one embodiment, at least one processor may be adapted to receive sensor inputs from sensors associated with a computing device 324. In at least one embodiment, sensors may also be associated with one or more of a rack, a secondary coolant, or a fluid. In at least one embodiment, at least one processor may be adapted to determine a first cooling requirement and a second cooling requirement based in part on sensor inputs.
In at least one embodiment, sensor inputs may be temperature sensed at one or more time intervals from sensors as described.[0127] In at least one embodiment, one or more neural networks are adapted to receive sensor inputs from provided sensors and are adapted to infer a first cooling requirement and a second cooling requirement for a datacenter cooling system. In at least one embodiment, at least one processor may cause at least one flow controller to enable flow of fluid through an L2L heat exchanger and to prevent flow of fluid to a secondary cooling loop. In at least one embodiment, one or more diverter flow controllers 310C, 312C may be enabled to cause such flow and prevention of flow of fluid. In at least one embodiment, provided lines 310B, 312B may be provided to fluidly couple with inlet line 342 and outlet line 344 of an L2L heat exchanger 340A, B. In at least one embodiment, further flow controllers 366A, B on an L2L heat exchanger 340A, B may be enabled to prevent or cause flow of fluid (primary coolant, secondary coolant, or fluid) through an L2L heat exchanger 340A, B. In at least one embodiment, a reservoir 362 provides sufficient primary coolant to enable cooling via an L2L heat exchanger until any issues in a primary cooling loop or a secondary cooling loop may be addressed. In at least one embodiment, a period of time for such issues to be addressed may be defined in a service level agreement (SLA) and may be used to determine a capacity of a reservoir 362 to hold sufficient primary coolant for an L2L heat exchanger.[0128] In at least one embodiment, at least one processor may cause one or more flow controllers to control flow rate and flow volume of primary coolant, secondary coolant, or fluid when cooling within an L2L heat exchanger in a first mode, differently than secondary cooling loop-based cooling in a second mode.
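The inference step described in paragraph [0127], where cooling requirements are determined from temperatures sensed at one or more time intervals, can be sketched with a simple rule. This is a hedged stand-in: the disclosure contemplates one or more neural networks for this inference, and the threshold values below are arbitrary illustrative choices, not values from the disclosure:

```python
def infer_cooling_requirements(temps_c, first_threshold_c=40.0,
                               second_threshold_c=60.0):
    """Return (first_requirement, second_requirement) flags from sensor
    inputs sensed at one or more time intervals.

    Illustrative stand-in for the inference described above; a trained
    neural network could replace this threshold rule.
    """
    peak = max(temps_c)
    return peak >= first_threshold_c, peak >= second_threshold_c
```

A supervising processor could then enable the appropriate flow controllers (such as diverter flow controllers) based on which requirement flags are set.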
In at least one embodiment, one or more latching mechanisms 356 may be provided to enable association of an L2L heat exchanger 340A, B with a rear door 368 of a rack 330 (or 302). In at least one embodiment, electrical coupling may be provided to power at least one component of a flow controller 366A, B. In at least one embodiment, at least one processor may be adapted to receive sensor inputs from sensors associated with at least one computing device, such as computing device 324. In at least one embodiment, at least one processor may determine a change in a coolant state based in part on sensor inputs. In at least one embodiment, a coolant state may relate to a temperature of coolant, a flow rate, a flow volume, or status (such as flowing or not).[0129] In at least one embodiment, a coolant state may be sensed from an egress or an entry to one or more of a cold plate, a rack, or a cooling manifold. In at least one embodiment, at least one processor can cause a datacenter cooling system to operate in a first mode or a second mode based in part on a change determined for a coolant state. In at least one embodiment, when it is determined that coolant temperatures at an egress from a cold plate are not beyond a threshold (implying that not much heat is being generated by an associated computing device), a first mode may be enabled for a coolant to flow through an L2L heat exchanger. In at least one embodiment, this enables economical use of a datacenter cooling system. In at least one embodiment, when temperature at a hot aisle of a rack, at a vicinity of a computing device, or of a fluid (secondary coolant or local coolant) is determined to be beyond a threshold (implying that more heat is being generated by an associated computing device than can be handled by forced air alone), a second mode for a datacenter cooling system may be enabled to use a secondary cooling loop with or without an L2L heat exchanger.
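The threshold comparison described in paragraph [0129], where a cold-plate egress temperature not beyond a threshold keeps the system in a first mode and a temperature beyond it engages a second mode, can be sketched as follows. This is an illustrative assumption; the function name and threshold are hypothetical:

```python
def select_mode(egress_temp_c: float, threshold_c: float) -> str:
    """Choose an operating mode from cold-plate egress coolant temperature.

    Illustrative sketch: a first mode circulates coolant through a
    rear-door L2L heat exchanger alone; a second mode additionally
    engages a secondary cooling loop for further cooling.
    """
    return "first" if egress_temp_c <= threshold_c else "second"
```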
In at least one embodiment, a second mode engages or enables a secondary cooling loop in addition to a first mode already provided to cool an L2L heat exchanger having coolant circulating from a cold plate associated with a computing device to provide further cooling than provided by an L2L heat exchanger.[0130] In at least one embodiment, datacenter-level features 400 as illustrated in Figure 4 can be associated with an intelligent rear door heat exchanger for local cooling loops for a datacenter cooling system. In at least one embodiment, datacenter-level features 400, within a datacenter 402, may include racks 404 for hosting one or more server trays or boxes; one or more CDUs 406 for exchanging heat between a secondary cooling loop 412 and a primary cooling loop 422; one or more row manifolds 410 for distributing coolant from a CDU 406; and various associated flow controllers 424, and inlet and outlet lines 412, 414, 416, 418.[0131] In at least one embodiment, an intelligent rear door heat exchanger for local cooling loops is provided on each of rear doors of provided racks 404 in a datacenter 402. In at least one embodiment, an aisle behind racks 404 is a hot aisle for discharging heat from at least one computing device in at least one rack during a first mode of operation of a datacenter cooling system. In at least one embodiment, a primary or local reservoir 432 may be provided along with a primary or local manifold 430 to distribute primary coolant to different L2L heat exchangers in different racks 404 of a datacenter cooling system having an intelligent rear door heat exchanger for local cooling loops for liquid cooling. In at least one embodiment, a primary or local manifold 430 may be provided to directly provide primary coolant without a primary or local reservoir 432. 
In at least one embodiment, a primary or local reservoir 432 may be located within a datacenter 402 or within a controlled environment to ensure that it maintains a temperature that may be predetermined.[0132] In at least one embodiment, different row manifolds may be associated with different racks. In at least one embodiment, different coolant may be a chemical match or mismatch with respect to a local coolant. In at least one embodiment, different fluid sources are provided as redundant features to different CDUs depending on chemistries of different secondary coolant used with each of different provided CDUs. In at least one embodiment, there need not be a secondary cooling loop and CDU for one or more racks 404. In at least one embodiment, these racks not associated with a secondary cooling loop may be sufficiently addressed by an intelligent rear door heat exchanger for local cooling loops.[0133] In at least one embodiment, a rack 404 may be associated with at least one processor for operating an intelligent rear door heat exchanger for local cooling loops thereon. In at least one embodiment, a processor may include one or more circuits. In at least one embodiment, one or more circuits of a processor may be adapted to determine cooling requirements for a datacenter cooling system. In at least one embodiment, a processor may cause a first mode of operation for a datacenter cooling system to address a first cooling requirement by an L2L heat exchanger exchanging heat between secondary coolant or fluid and a primary coolant from a chilling facility 408. 
In at least one embodiment, a processor may cause a second mode of operation for a datacenter cooling system to address a second cooling requirement by a secondary cooling loop having row manifold 410, flow controllers 416, 418, and a CDU 406 that is, in turn, coupled to a primary cooling loop 422 having a chilling facility 408.[0134] In at least one embodiment, a local cooling loop may be more economical than a secondary cooling loop, but a secondary cooling loop may address higher cooling requirements than a local cooling loop. In at least one embodiment, both modes are caused to occur concurrently. In at least one embodiment, a gasket or pipe heat exchanger having primary coolant and secondary or local coolant may be used as an L2L heat exchanger.[0135] In at least one embodiment, a processor used with an intelligent rear door heat exchanger for local cooling loops includes an output to provide signals for one or more flow controllers. In at least one embodiment, one or more flow controllers may enable flow of fluid through an L2L heat exchanger and may prevent flow of fluid to a secondary cooling loop in a mode of a datacenter cooling system so that an intelligent rear door heat exchanger for local cooling loops provides a singular source of cooling in a rack. In at least one embodiment, this feature enables use of an intelligent rear door heat exchanger for local cooling loops in isolation without a secondary cooling loop, a primary cooling loop, a CDU, and associated chilling towers. In at least one embodiment, such cooling may be provided for a period of time using a primary coolant reservoir until any issue in a primary cooling loop has been addressed. 
In at least one embodiment, such cooling may be of a capacity defined by a downtime in a service level agreement (SLA).[0136] In at least one embodiment, a processor used with an intelligent rear door heat exchanger for local cooling loops includes an input to receive sensor inputs from sensors associated with at least one computing device of a rack 404. In at least one embodiment, sensors may also or separately be associated with a rack, a secondary coolant, or fluid from an associated cold plate of a rack. In at least one embodiment, a processor may determine a first cooling requirement and a second cooling requirement based in part on sensor inputs from these associated sensors. In at least one embodiment, based in part on sensor inputs from these associated sensors, flow rate or flow volume may be adjusted for one or more of a primary coolant, a secondary coolant, or a fluid through a liquid-to-liquid heat exchanger.[0137] In at least one embodiment, one or more neural networks may be provided within at least one processor to receive sensor inputs and to infer a first cooling requirement and a second cooling requirement from computing devices or aspects of a datacenter cooling system. In at least one embodiment, one or more neural networks may infer a failure of a secondary cooling loop or a primary cooling loop. In at least one embodiment, based in part on sensor inputs associated with flow rates, flow volumes, temperature, humidity, and leaks, one or more circuits of a processor may cause one or more flow controllers to support either mode of cooling.[0138] In at least one embodiment, a processor used with a rack 404 and an intelligent rear door heat exchanger for local cooling loops includes one or more circuits. In at least one embodiment, one or more circuits of a processor may cause a first mode or a second mode of different modes of operation for a datacenter cooling system. 
In at least one embodiment, causing a first mode or a second mode is in reference to causing a datacenter cooling system to operate in a first mode or a second mode. In at least one embodiment, a datacenter cooling system includes an L2L heat exchanger for a local cooling loop. In at least one embodiment, one or more circuits of a processor may be provided to train one or more neural networks to infer cooling requirements from sensor inputs of sensors associated with a rack or with a fluid from at least one cold plate of a rack. In at least one embodiment, a processor may cause a first mode to address a first cooling requirement by cooling using an L2L heat exchanger coupled to a primary cooling loop and a rear door of a rack. In at least one embodiment, an L2L heat exchanger may be cooled by a primary coolant from a reservoir without a primary cooling loop being active. In at least one embodiment, a processor may cause a second mode to address a second cooling requirement while maintaining flow through a local cooling loop by engaging a secondary cooling loop and a CDU concurrently.[0139] In at least one embodiment, an output of a processor used with an intelligent rear door heat exchanger for local cooling loops may be adapted to provide signals for one or more flow controllers. In at least one embodiment, this enables flow of fluid through an L2L heat exchanger and enables prevention of flow of fluid to a secondary cooling loop in a first mode of a datacenter cooling system. 
In at least one embodiment, a secondary cooling loop is not used with an intelligent rear door heat exchanger for local cooling loops; however, when used, if chemistry matches between a secondary coolant and a local coolant to be used with an L2L heat exchanger, then it is possible to use at least one diversion flow controller to divert secondary coolant for use with an intelligent rear door heat exchanger for local cooling loops.[0140] In at least one embodiment, one or more neural networks of a processor may be adapted to receive sensor inputs. In at least one embodiment, one or more neural networks may be trained to infer a first cooling requirement and a second cooling requirement as part of an analysis of prior sensor inputs and prior cooling requirements. In at least one embodiment, one or more neural networks may be trained with correlated data of prior sensor inputs and prior cooling requirements so that new sensor inputs within thresholds of prior sensor inputs may be correlated to prior cooling requirements or variations thereof.[0141] In at least one embodiment, an output of a processor used with an intelligent rear door heat exchanger for local cooling loops may be adapted to provide signals to cause one or more flow controllers to be adjusted in a first mode so that fluid flow occurs differently in a first mode than in a second mode. In at least one embodiment, fluid flow may be increased or decreased to an L2L heat exchanger depending on which mode is active.[0142] In at least one embodiment, an input of a processor used with an intelligent rear door heat exchanger for local cooling loops is adapted to receive sensor inputs associated with a temperature from at least one computing device or from fluid exiting a cold plate. 
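The training scheme in paragraph [0140], where new sensor inputs falling within thresholds of prior inputs are correlated to prior cooling requirements, can be approximated without a neural network by a nearest-neighbor lookup over (sensor, requirement) pairs. This stand-in sketch is an assumption for illustration, not the disclosed training procedure.

```python
def infer_requirement(prior_pairs, sensor_c, tolerance_c=2.0):
    """Correlate a new sensor reading with prior (sensor_input, cooling_requirement)
    pairs: a reading within a tolerance of a prior input reuses that prior
    requirement, mimicking a trained network's interpolation over prior data.
    Returns None when no prior input is close enough."""
    nearest = min(prior_pairs, key=lambda pair: abs(pair[0] - sensor_c))
    if abs(nearest[0] - sensor_c) <= tolerance_c:
        return nearest[1]
    return None

# Hypothetical prior data: egress temperatures (C) paired with cooling
# requirements in arbitrary units.
history = [(40.0, 1.5), (60.0, 3.0)]
```

An actual network would interpolate between and extrapolate beyond prior points; the lookup only returns exact prior requirements, which is the simplest reading of the "within thresholds" correlation described.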
In at least one embodiment, one or more neural networks of a processor may be trained to infer that a change in coolant state has occurred based in part on a temperature and on prior temperatures of at least one computing device or fluid. In at least one embodiment, one or more circuits of a processor may be adapted to cause a first mode or a second mode of operation for a datacenter cooling system.[0143] In at least one embodiment, a processor to be used with an intelligent rear door heat exchanger for local cooling loops includes one or more circuits to cause a first mode or a second mode of operation for a datacenter cooling system. In at least one embodiment, one or more circuits of a processor may include one or more neural networks to infer cooling requirements from sensor inputs of sensors associated with a rack 404 or with fluid from at least one cold plate. In at least one embodiment, a processor may be adapted to cause a first mode to address a first cooling requirement by enabling fluid flow through an L2L heat exchanger. In at least one embodiment, a processor may be adapted to also cause a second mode to address a second cooling requirement by a secondary cooling loop and a CDU to cool fluid circulating from a cold plate.[0144] In at least one embodiment, each of at least one processor described throughout Figures 1-4 has inference and/or training logic 1815 that may include, without limitation, code and/or data storage 1801 to store forward and/or output weight and/or input/output data, and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. 
In at least one embodiment, training logic 1815 may include, or be coupled to, code and/or data storage 1801 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, code and/or data storage 1801 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of code and/or data storage 1801 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.[0145] In at least one embodiment, an inference and/or training logic 1815 of at least one processor may be part of a building management system (BMS) for controlling flow controllers at one or more of a server-level, a rack-level, and a row-level. In at least one embodiment, a determination to engage a flow controller associated with a secondary cooling loop, an intelligent rear door heat exchanger for local cooling loops, a CDU, cold plates, or other cooling manifolds may be provided to one or more neural networks of an inference and/or training logic 1815 to cause one or more neural networks to infer which flow controllers to gracefully engage or disengage for coolant requirements for one or more cold plates, servers, or racks from either an L2L heat exchanger or a secondary cooling loop of a datacenter cooling system. 
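The role graph code plays above, controlling the order in which per-layer parameters leave code and/or data storage for the compute units, can be pictured with a minimal sketch. The layer names and storage layout here are purely hypothetical and are not the actual organization of inference and/or training logic 1815 or storage 1801.

```python
# Hypothetical per-layer parameter store standing in for code/data storage.
parameter_storage = {"conv1": [0.1, 0.2], "fc1": [0.3], "fc2": [0.4, 0.5]}

# Graph code fixes the load order to match the network's architecture.
graph_order = ["conv1", "fc1", "fc2"]

def load_for_forward_pass(order, storage):
    """Yield (layer, parameters) in graph order, mimicking the timed,
    architecture-driven loads of weights into compute units."""
    for layer in order:
        yield layer, storage[layer]

loaded = list(load_for_forward_pass(graph_order, parameter_storage))
```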
In at least one embodiment, increase or decrease of fluid flow through an L2L heat exchanger may be enabled by flow controllers that are controlled by an inference and/or training logic 1815 of at least one processor associated with control logic that is associated with a local cooling loop.[0146] In at least one embodiment, at least one processor may be associated with a local cooling loop and with a secondary cooling loop. In at least one embodiment, at least one processor may be associated with an intelligent rear door heat exchanger for local cooling loops. In at least one embodiment, at least one processor includes control logic, such as inference and/or training logic 1815, and is associated with at least one flow controller. In at least one embodiment, at least one flow controller may have its own respective processor or micro controller. In at least one embodiment, a processor or a micro controller performs instructions sent to it from a control logic. In at least one embodiment, a control logic may determine a change in a coolant state, such as a failure in a secondary cooling loop (such as a CDU and cooling manifolds) or a primary cooling loop (such as a chilling facility, cooling manifolds, and also an associated CDU). In at least one embodiment, a failure may also occur with a cooling manifold requiring replacement. In at least one embodiment, a control logic may cause at least one flow controller to provide a coolant response, such as by engaging a local cooling loop having a fluid source (such as a reservoir) to provide cooling for local coolant or secondary coolant for at least one computing device.[0147] In at least one embodiment, a control logic may cause a first signal to at least one flow controller to enable a stopping of a secondary coolant from a secondary cooling loop as part of a coolant response. 
In at least one embodiment, a control logic may cause a second signal to at least one flow controller to enable a starting of a local coolant from a local cooling loop as part of a coolant response. In at least one embodiment, a control logic may receive sensor inputs from sensors associated with secondary coolant of a CDU, local coolant, and/or at least one computing device. In at least one embodiment, at least one processor can determine a change in a coolant state based in part on sensor inputs. In at least one embodiment, one or more neural networks of an inference and/or training logic 1815 may be adapted to receive sensor inputs and to infer a change in a coolant state.[0148] In at least one embodiment, at least one processor may include one or more circuits for one or more neural networks, such as an inference and/or training logic 1815. In at least one embodiment, an inference and/or training logic 1815 may be adapted to infer, from sensor inputs associated with at least one server or at least one rack, a change in a coolant state, such as coolant from a CDU being ineffective or retaining too much heat upon entry into a rack. In at least one embodiment, one or more circuits may be adapted to cause at least one flow controller to provide a coolant response from a local cooling loop.[0149] In at least one embodiment, control logic associated with one or more circuits may cause a first signal (along with any associated signals) to at least one flow controller to enable a coolant response, either from a secondary cooling loop or a local cooling loop having an intelligent rear door heat exchanger for local cooling loops. In at least one embodiment, a second signal may be provided to at least one flow controller and may also enable only an L2L heat exchanger without a secondary cooling loop but may engage or activate a secondary cooling loop if further cooling is required. 
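The two-signal coolant response above (a first signal stopping secondary coolant, a second signal starting local coolant) might look like the following sketch. The bus abstraction and controller names are assumptions for illustration; as the text notes, actual flow controllers may carry their own processors or microcontrollers that execute such commands.

```python
from dataclasses import dataclass, field

@dataclass
class FlowControllerBus:
    """Hypothetical signal sink recording commands sent to flow controllers."""
    log: list = field(default_factory=list)

    def signal(self, controller: str, command: str) -> None:
        self.log.append((controller, command))

def coolant_response(bus: FlowControllerBus, secondary_loop_failed: bool) -> None:
    """On a detected change in coolant state (e.g., a secondary-loop failure),
    issue the first signal to stop secondary coolant, then the second signal
    to start local coolant through the L2L heat exchanger."""
    if secondary_loop_failed:
        bus.signal("secondary_loop_valve", "STOP")   # first signal
        bus.signal("local_loop_pump", "START")       # second signal
```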
In at least one embodiment, a distributed or an integrated architecture is enabled by one or more circuits of at least one processor. In at least one embodiment, a distributed architecture may be supported by distinctly located circuits of one or more circuits.[0150] In at least one embodiment, one or more neural networks of an inference and/or training logic 1815 may be adapted to infer an increase or a decrease in cooling requirements of at least one computing component of at least one server. In at least one embodiment, one or more circuits may be adapted to cause a cooling loop to economically address decreased cooling requirements or to supplement increased cooling requirements for at least one computing component. In at least one embodiment, enabling a cooling loop represents a coolant response from a local cooling loop to preempt a respective increase or a respective decrease in cooling requirements of at least one computing component of at least one server based in part on workload sent to at least one computing component.[0151] In at least one embodiment, at least one processor includes one or more circuits, such as an inference and/or training logic 1815, to train one or more neural networks to make inferences from provided data. In at least one embodiment, inference and/or training logic 1815 may infer, from sensor inputs associated with at least one server or at least one rack, a change in a coolant state. In at least one embodiment, an inference may be used to enable one or more circuits to cause at least one flow controller of a local cooling loop to provide a coolant response. 
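The preemptive coolant response in paragraph [0150], raising flow before dispatched workload reaches a computing component, amounts to feeding a scheduler's workload figure forward into a flow setpoint. The linear model and coefficients below are purely illustrative assumptions, not a disclosed control law.

```python
def preemptive_flow_setpoint(base_lpm: float, scheduled_workload: float,
                             lpm_per_workload_unit: float = 0.5) -> float:
    """Raise the local-loop flow setpoint ahead of dispatched work so the
    coolant response preempts, rather than chases, a rise in cooling
    requirements. Units and the linear coefficient are hypothetical."""
    return base_lpm + scheduled_workload * lpm_per_workload_unit
```

A trained network as described would replace the linear term with a learned mapping from workload and sensor history to required flow; the sketch only shows the feed-forward structure.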
In at least one embodiment, a coolant response may be to cause a coolant response from a local cooling loop to absorb heat into a local coolant and to exchange absorbed heat to a primary coolant, instead of a secondary cooling loop having a CDU.[0152] In at least one embodiment, one or more circuits may be adapted to train one or more neural networks to infer an increase or a decrease in cooling requirements of at least one computing component of at least one server. In at least one embodiment, one or more circuits may be adapted to train one or more neural networks to infer that an increase or a decrease in flow output from a secondary cooling loop is associated with an improper flow of secondary coolant because of a failed CDU or a respective increase or a respective decrease in power requirements of at least one computing component of at least one server.[0153] In at least one embodiment, one or more neural networks may be trained to make inferences using prior associated heat features or cooling requirements from computing devices, servers, or racks, and cooling capacity or capabilities indicated by a fluid source of a local cooling loop, such as by an intelligent rear door heat exchanger for local cooling loops having a specific cooling capability that is above a forced air cooling capability. In at least one embodiment, prior cooling requirements satisfied by a local cooling loop may be used to cause one or more neural networks to make similar inferences for future similar cooling requirements (in consideration of small variations therefrom) to be satisfied by adjusting one or more flow controllers to engage a local cooling loop.[0154] Figure 5 illustrates a method 500 associated with a datacenter cooling system of Figures 2-4, according to at least one embodiment. In at least one embodiment, a method 500 includes a step 502 for providing a liquid-to-liquid heat exchanger associated with a rear door of a rack. 
In at least one embodiment, step 504 is for enabling a determination of cooling requirements for at least one computing device of a rack. In at least one embodiment, when a determination is made of at least one cooling requirement for at least one computing device of a rack, via step 506, then steps 508 and 510 may be performed. In at least one embodiment, step 508 is for enabling a liquid-to-liquid heat exchanger. In at least one embodiment, flow controllers may be activated to begin a flow of fluid or to divert a flow of secondary coolant between a cold plate and a liquid-to-liquid heat exchanger. In at least one embodiment, exchange of heat may be enabled in step 510 between a primary coolant associated with a chilling facility and a secondary coolant or fluid associated with at least one computing device of a rack. In at least one embodiment, this may include flow controllers being adjusted to pass primary coolant from a chilling facility (either directly or from a reservoir) into a liquid-to-liquid heat exchanger.[0155] In at least one embodiment, method 500 may include a further step or a sub-step for determining, using at least one processor, a temperature associated with a computing device in a rack. In at least one embodiment, method 500 may include a further step or a sub-step for determining a first cooling requirement or a second cooling requirement using a temperature associated with a computing device, such as area temperature, device temperature, fluid or secondary coolant temperature, or manifold temperature. 
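Steps 506 through 510 of method 500 can be summarized as a small control routine. The `flow` callback and command strings are hypothetical stand-ins for the flow-controller actions described, not identifiers from the disclosure.

```python
def method_500_steps(cooling_required: bool, flow) -> None:
    """Sketch of steps 506-510: on a determined cooling requirement, enable
    the liquid-to-liquid heat exchanger and open primary coolant to it."""
    if cooling_required:                         # step 506: requirement determined
        flow("l2l_heat_exchanger", "ENABLE")     # step 508: enable L2L exchanger
        flow("primary_coolant_inlet", "OPEN")    # step 510: exchange heat with
                                                 # primary coolant (direct or reservoir)

actions = []
method_500_steps(True, lambda device, command: actions.append((device, command)))
```

Steps 502 and 504 (providing the exchanger and enabling the determination) are setup and sensing concerns outside this routine; the sketch covers only the gated actuation path.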
In at least one embodiment, method 500 may include a further step or a sub-step for causing, based in part on a first cooling requirement or a second cooling requirement, a liquid-to-liquid heat exchanger or a secondary cooling loop to cause cooling of a secondary coolant or a fluid associated with at least one computing device.[0156] In at least one embodiment, method 500 may include a further step or a sub-step for receiving, in at least one processor, sensor inputs from sensors associated with a computing device, a rack, a secondary coolant, or fluid of a cold plate. In at least one embodiment, method 500 may include a further step or a sub-step for determining, using at least one processor, a first cooling requirement and a second cooling requirement based in part on sensor inputs received. In at least one embodiment, method 500 may include a further step or a sub-step for enabling, using a latching mechanism, an association of a liquid-to-liquid heat exchanger with a rear door of a rack.[0157] In at least one embodiment, method 500 may include a further step or a sub-step for receiving, by at least one processor, sensor inputs from sensors associated with at least one computing device. In at least one embodiment, method 500 may include a further step or a sub-step for determining, by at least one processor, a change in a coolant state based in part on sensor inputs received. In at least one embodiment, method 500 may include a further step or a sub-step for causing, based in part on a change in a coolant state detected, a liquid-to-liquid heat exchanger to cause cooling of secondary coolant or fluid received in a liquid-to-liquid heat exchanger.Servers and Data Centers [0158] The following figures set forth, without limitation, exemplary network server and datacenter based systems that can be used to implement at least one embodiment.[0159] Figure 6 illustrates a distributed system 600, in accordance with at least one embodiment. 
In at least one embodiment, distributed system 600 includes one or more client computing devices 602, 604, 606, and 608, which are configured to execute and operate a client application such as a web browser, proprietary client, and/or variations thereof over one or more network(s) 610. In at least one embodiment, server 612 may be communicatively coupled with remote client computing devices 602, 604, 606, and 608 via network 610.[0160] In at least one embodiment, server 612 may be adapted to run one or more services or software applications such as services and applications that may manage session activity of single sign-on (SSO) access across multiple datacenters. In at least one embodiment, server 612 may also provide other services or software applications that can include non-virtual and virtual environments. In at least one embodiment, these services may be offered as web-based or cloud services or under a Software as a Service (SaaS) model to users of client computing devices 602, 604, 606, and/or 608. In at least one embodiment, users operating client computing devices 602, 604, 606, and/or 608 may in turn utilize one or more client applications to interact with server 612 to utilize services provided by these components.[0161] In at least one embodiment, software components 618, 620 and 622 of system 600 are implemented on server 612. In at least one embodiment, one or more components of system 600 and/or services provided by these components may also be implemented by one or more of client computing devices 602, 604, 606, and/or 608. In at least one embodiment, users operating client computing devices may then utilize one or more client applications to use services provided by these components. In at least one embodiment, these components may be implemented in hardware, firmware, software, or combinations thereof. It should be appreciated that various different system configurations are possible, which may be different from distributed system 600. 
The embodiment shown in Figure 6 is thus at least one embodiment of a distributed system for implementing an embodiment system and is not intended to be limiting.[0162] In at least one embodiment, client computing devices 602, 604, 606, and/or 608 may include various types of computing systems. In at least one embodiment, a client computing device may include portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 10, Palm OS, and/or variations thereof. In at least one embodiment, devices may support various applications such as various Internet-related apps, e-mail, short message service (SMS) applications, and may use various other communication protocols. In at least one embodiment, client computing devices may also include general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems.[0163] In at least one embodiment, client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation a variety of GNU/Linux operating systems, such as Google Chrome OS. In at least one embodiment, client computing devices may also include electronic devices such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over network(s) 610. Although distributed system 600 in Figure 6 is shown with four client computing devices, any number of client computing devices may be supported. 
Other devices, such as devices with sensors, etc., may interact with server 612.[0164] In at least one embodiment, network(s) 610 in distributed system 600 may be any type of network that can support data communications using any of a variety of available protocols, including without limitation TCP/IP (transmission control protocol/Internet protocol), SNA (systems network architecture), IPX (Internet packet exchange), AppleTalk, and/or variations thereof. In at least one embodiment, network(s) 610 can be a local area network (LAN), networks based on Ethernet, Token-Ring, a wide-area network, Internet, a virtual network, a virtual private network (VPN), an intranet, an extranet, a public switched telephone network (PSTN), an infra-red network, a wireless network (e.g., a network operating under any of the Institute of Electrical and Electronics Engineers (IEEE) 802.11 suite of protocols, Bluetooth®, and/or any other wireless protocol), and/or any combination of these and/or other networks.[0165] In at least one embodiment, server 612 may be composed of one or more general purpose computers, specialized server computers (including, by way of example, PC (personal computer) servers, UNIX® servers, mid-range servers, mainframe computers, rack-mounted servers, etc.), server farms, server clusters, or any other appropriate arrangement and/or combination. In at least one embodiment, server 612 can include one or more virtual machines running virtual operating systems, or other computing architectures involving virtualization. In at least one embodiment, one or more flexible pools of logical storage devices can be virtualized to maintain virtual storage devices for a server. In at least one embodiment, virtual networks can be controlled by server 612 using software defined networking. 
In at least one embodiment, server 612 may be adapted to run one or more services or software applications.[0166] In at least one embodiment, server 612 may run any operating system, as well as any commercially available server operating system. In at least one embodiment, server 612 may also run any of a variety of additional server applications and/or mid-tier applications, including HTTP (hypertext transport protocol) servers, FTP (file transfer protocol) servers, CGI (common gateway interface) servers, JAVA® servers, database servers, and/or variations thereof. In at least one embodiment, exemplary database servers include without limitation those commercially available from Oracle, Microsoft, Sybase, IBM (International Business Machines), and/or variations thereof.[0167] In at least one embodiment, server 612 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client computing devices 602, 604, 606, and 608. In at least one embodiment, data feeds and/or event updates may include, but are not limited to, Twitter® feeds, Facebook® updates or real-time updates received from one or more third party information sources and continuous data streams, which may include real-time events related to sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and/or variations thereof. In at least one embodiment, server 612 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client computing devices 602, 604, 606, and 608.[0168] In at least one embodiment, distributed system 600 may also include one or more databases 614 and 616. 
In at least one embodiment, databases may provide a mechanism for storing information such as user interactions information, usage patterns information, adaptation rules information, and other information. In at least one embodiment, databases 614 and 616 may reside in a variety of locations. In at least one embodiment, one or more of databases 614 and 616 may reside on a non-transitory storage medium local to (and/or resident in) server 612. In at least one embodiment, databases 614 and 616 may be remote from server 612 and in communication with server 612 via a network-based or dedicated connection. In at least one embodiment, databases 614 and 616 may reside in a storage-area network (SAN). In at least one embodiment, any necessary files for performing functions attributed to server 612 may be stored locally on server 612 and/or remotely, as appropriate. In at least one embodiment, databases 614 and 616 may include relational databases, such as databases that are adapted to store, update, and retrieve data in response to SQL-formatted commands.

[0169] Figure 7 illustrates an exemplary datacenter 700, in accordance with at least one embodiment. In at least one embodiment, datacenter 700 includes, without limitation, a datacenter infrastructure layer 710, a framework layer 720, a software layer 730 and an application layer 740.

[0170] In at least one embodiment, as shown in Figure 7, datacenter infrastructure layer 710 may include a resource orchestrator 712, grouped computing resources 714, and node computing resources ("node C.R.s") 716(1)-716(N), where "N" represents any whole, positive integer. In at least one embodiment, node C.R.s 716(1)-716(N) may include, but are not limited to, any number of central processing units ("CPUs") or other processors (including accelerators, field programmable gate arrays ("FPGAs"), graphics processors, etc.), memory devices (e.g.,
dynamic random-access memory), storage devices (e.g., solid state or disk drives), network input/output ("NW I/O") devices, network switches, virtual machines ("VMs"), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 716(1)-716(N) may be a server having one or more of above-mentioned computing resources.

[0171] In at least one embodiment, grouped computing resources 714 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in datacenters at various geographical locations (also not shown). Separate groupings of node C.R.s within grouped computing resources 714 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination.

[0172] In at least one embodiment, resource orchestrator 712 may configure or otherwise control one or more node C.R.s 716(1)-716(N) and/or grouped computing resources 714. In at least one embodiment, resource orchestrator 712 may include a software design infrastructure ("SDI") management entity for datacenter 700. In at least one embodiment, resource orchestrator 712 may include hardware, software or some combination thereof.

[0173] In at least one embodiment, as shown in Figure 7, framework layer 720 includes, without limitation, a job scheduler 732, a configuration manager 734, a resource manager 736 and a distributed file system 738. In at least one embodiment, framework layer 720 may include a framework to support software 752 of software layer 730 and/or one or more application(s) 742 of application layer 740.
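The scheduling of workloads onto grouped node resources (in the spirit of job scheduler 732 and resource manager 736 mapping work onto grouped computing resources 714) can be sketched as a toy greedy placement. The node names, core counts, and first-fit policy are illustrative assumptions, not the behavior of any particular orchestrator.

```python
# Free CPU cores per node C.R. (illustrative values).
nodes = {"node1": 8, "node2": 4, "node3": 16}

def schedule(workloads, capacity):
    """Greedy first-fit: place each workload on the first node with room."""
    placement = {}
    free = dict(capacity)  # do not mutate the caller's capacity map
    for name, cores in workloads:
        for node, avail in free.items():
            if avail >= cores:
                placement[name] = node
                free[node] -= cores
                break
        else:
            placement[name] = None  # no capacity: would queue or scale out
    return placement

jobs = [("etl", 6), ("training", 12), ("inference", 4)]
result = schedule(jobs, nodes)
print(result)  # {'etl': 'node1', 'training': 'node3', 'inference': 'node2'}
```

A real resource manager would also account for memory, accelerators, and rack locality; the sketch only shows the mapping step.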
In at least one embodiment, software 752 or application(s) 742 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure. In at least one embodiment, framework layer 720 may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter "Spark") that may utilize distributed file system 738 for large-scale data processing (e.g., "big data"). In at least one embodiment, job scheduler 732 may include a Spark driver to facilitate scheduling of workloads supported by various layers of datacenter 700. In at least one embodiment, configuration manager 734 may be capable of configuring different layers such as software layer 730 and framework layer 720, including Spark and distributed file system 738 for supporting large-scale data processing. In at least one embodiment, resource manager 736 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 738 and job scheduler 732. In at least one embodiment, clustered or grouped computing resources may include grouped computing resources 714 at datacenter infrastructure layer 710. In at least one embodiment, resource manager 736 may coordinate with resource orchestrator 712 to manage these mapped or allocated computing resources.

[0174] In at least one embodiment, software 752 included in software layer 730 may include software used by at least portions of node C.R.s 716(1)-716(N), grouped computing resources 714, and/or distributed file system 738 of framework layer 720.
One or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software.

[0175] In at least one embodiment, application(s) 742 included in application layer 740 may include one or more types of applications used by at least portions of node C.R.s 716(1)-716(N), grouped computing resources 714, and/or distributed file system 738 of framework layer 720. In at least one embodiment, one or more types of applications may include, without limitation, CUDA applications, 5G network applications, artificial intelligence applications, datacenter applications, and/or variations thereof.

[0176] In at least one embodiment, any of configuration manager 734, resource manager 736, and resource orchestrator 712 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a datacenter operator of datacenter 700 from making possibly bad configuration decisions and may help avoid underutilized and/or poorly performing portions of a datacenter.

[0177] Figure 8 illustrates a client-server network 804 formed by a plurality of network server computers 802 which are interlinked, in accordance with at least one embodiment. In at least one embodiment, each network server computer 802 stores data accessible to other network server computers 802 and to client computers 806 and networks 808 which link into a wide area network 804. In at least one embodiment, configuration of a client-server network 804 may change over time as client computers 806 and one or more networks 808 connect and disconnect from a network 804, and as one or more trunk line server computers 802 are added or removed from a network 804.
In at least one embodiment, when a client computer 806 and a network 808 are connected with network server computers 802, client-server network includes such client computer 806 and network 808. In at least one embodiment, the term computer includes any device or machine capable of accepting data, applying prescribed processes to data, and supplying results of processes.

[0178] In at least one embodiment, client-server network 804 stores information which is accessible to network server computers 802, remote networks 808 and client computers 806. In at least one embodiment, network server computers 802 are formed by mainframe computers, minicomputers, and/or microcomputers having one or more processors each. In at least one embodiment, server computers 802 are linked together by wired and/or wireless transfer media, such as conductive wire, fiber optic cable, and/or microwave transmission media, satellite transmission media or other conductive, optic or electromagnetic wave transmission media. In at least one embodiment, client computers 806 access a network server computer 802 by a similar wired or a wireless transfer medium. In at least one embodiment, a client computer 806 may link into a client-server network 804 using a modem and a standard telephone communication network. In at least one embodiment, alternative carrier systems such as cable and satellite communication systems also may be used to link into client-server network 804. In at least one embodiment, other private or time-shared carrier systems may be used. In at least one embodiment, network 804 is a global information network, such as the Internet. In at least one embodiment, network 804 is a private intranet using similar protocols as the Internet, but with added security measures and restricted access controls.
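The basic client/server exchange underlying a network like 804 can be sketched with a minimal TCP pair using Python's standard-library socket module; the command string and "ACK" reply are illustrative assumptions, and binding to port 0 simply lets the operating system pick a free port.

```python
import socket
import threading

# Server side: accept one connection, fulfill one client command.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    with conn:
        command = conn.recv(1024)          # client command arrives
        conn.sendall(b"ACK: " + command)   # server returns a result

t = threading.Thread(target=serve_once)
t.start()

# Client side: issue a command and read the server's response.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"GET status")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())  # ACK: GET status
```

Over a real transfer medium (wired, fiber, wireless) the same pattern applies; only the link layer differs.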
In at least one embodiment, network 804 is a private, or semi-private network using proprietary communication protocols.

[0179] In at least one embodiment, client computer 806 is any end user computer, and may also be a mainframe computer, mini-computer or microcomputer having one or more microprocessors. In at least one embodiment, server computer 802 may at times function as a client computer accessing another server computer 802. In at least one embodiment, remote network 808 may be a local area network, a network added into a wide area network through an independent service provider (ISP) for the Internet, or another group of computers interconnected by wired or wireless transfer media having a configuration which is either fixed or changing over time. In at least one embodiment, client computers 806 may link into and access a network 804 independently or through a remote network 808.

[0180] Figure 9 illustrates a computer network 908 connecting one or more computing machines, in accordance with at least one embodiment. In at least one embodiment, network 908 may be any type of electronically connected group of computers including, for instance, the following networks: Internet, Intranet, Local Area Networks (LAN), Wide Area Networks (WAN) or an interconnected combination of these network types. In at least one embodiment, connectivity within a network 908 may be a remote modem, Ethernet (IEEE 802.3), Token Ring (IEEE 802.5), Fiber Distributed Datalink Interface (FDDI), Asynchronous Transfer Mode (ATM), or any other communication protocol. In at least one embodiment, computing devices linked to a network may be desktop, server, portable, handheld, set-top box, personal digital assistant (PDA), a terminal, or any other desired type or configuration.
In at least one embodiment, depending on their functionality, network connected devices may vary widely in processing power, internal memory, and other performance aspects.

[0181] In at least one embodiment, communications within a network and to or from computing devices connected to a network may be either wired or wireless. In at least one embodiment, network 908 may include, at least in part, the world-wide public Internet which generally connects a plurality of users in accordance with a client-server model in accordance with a transmission control protocol/internet protocol (TCP/IP) specification. In at least one embodiment, client-server network is a dominant model for communicating between two computers. In at least one embodiment, a client computer ("client") issues one or more commands to a server computer ("server"). In at least one embodiment, server fulfills client commands by accessing available network resources and returning information to a client pursuant to client commands. In at least one embodiment, client computer systems and network resources resident on network servers are assigned a network address for identification during communications between elements of a network. In at least one embodiment, communications from other network connected systems to servers will include a network address of a relevant server/network resource as part of communication so that an appropriate destination of a data/request is identified as a recipient. In at least one embodiment, when a network 908 comprises the global Internet, a network address is an IP address in a TCP/IP format which may, at least in part, route data to an e-mail account, a website, or other Internet tool resident on a server. In at least one embodiment, information and services which are resident on network servers may be available to a web browser of a client computer through a domain name (e.g.
www.site.com) which maps to an IP address of a network server.

[0182] In at least one embodiment, a plurality of clients 902, 904, and 906 are connected to a network 908 via respective communication links. In at least one embodiment, each of these clients may access a network 908 via any desired form of communication, such as via a dial-up modem connection, cable link, a digital subscriber line (DSL), wireless or satellite link, or any other form of communication. In at least one embodiment, each client may communicate using any machine that is compatible with a network 908, such as a personal computer (PC), work station, dedicated terminal, personal data assistant (PDA), or other similar equipment. In at least one embodiment, clients 902, 904, and 906 may or may not be located in a same geographical area.

[0183] In at least one embodiment, a plurality of servers 910, 912, and 914 are connected to a network 908 to serve clients that are in communication with a network 908. In at least one embodiment, each server is typically a powerful computer or device that manages network resources and responds to client commands. In at least one embodiment, servers include computer readable data storage media such as hard disk drives and RAM memory that store program instructions and data. In at least one embodiment, servers 910, 912, 914 run application programs that respond to client commands. In at least one embodiment, server 910 may run a web server application for responding to client requests for HTML pages and may also run a mail server application for receiving and routing electronic mail. In at least one embodiment, other application programs, such as an FTP server or a media server for streaming audio/video data to clients may also be running on a server 910. In at least one embodiment, different servers may be dedicated to performing different tasks.
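A web server application responding to client requests for HTML pages, as described for server 910, can be sketched with Python's standard-library http.server; the page content is an illustrative assumption, and the sketch serves a single request on an OS-assigned port.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PageHandler(BaseHTTPRequestHandler):
    """Respond to a GET with a small HTML page (illustrative content)."""
    def do_GET(self):
        body = b"<html><body>Hello from server 910</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep request logging quiet
        pass

httpd = HTTPServer(("127.0.0.1", 0), PageHandler)
t = threading.Thread(target=httpd.handle_request)  # serve exactly one request
t.start()

# A browser-like client retrieves the page over HTTP.
url = f"http://127.0.0.1:{httpd.server_port}/"
with urllib.request.urlopen(url) as resp:
    page = resp.read()
t.join()
httpd.server_close()
print(page.decode())
```

A dedicated production web server would of course serve many concurrent requests; the request/response shape is the same.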
In at least one embodiment, server 910 may be a dedicated web server that manages resources relating to web sites for various users, whereas a server 912 may be dedicated to provide electronic mail (email) management. In at least one embodiment, other servers may be dedicated for media (audio, video, etc.), file transfer protocol (FTP), or a combination of any two or more services that are typically available or provided over a network. In at least one embodiment, each server may be in a location that is the same as or different from that of other servers. In at least one embodiment, there may be multiple servers that perform mirrored tasks for users, thereby relieving congestion or minimizing traffic directed to and from a single server. In at least one embodiment, servers 910, 912, 914 are under control of a web hosting provider in a business of maintaining and delivering third party content over a network 908.

[0184] In at least one embodiment, web hosting providers deliver services to two different types of clients. In at least one embodiment, one type, which may be referred to as a browser, requests content from servers 910, 912, 914 such as web pages, email messages, video clips, etc. In at least one embodiment, a second type, which may be referred to as a user, hires a web hosting provider to maintain a network resource such as a web site, and to make it available to browsers. In at least one embodiment, users contract with a web hosting provider to make memory space, processor capacity, and communication bandwidth available for their desired network resource in accordance with an amount of server resources a user desires to utilize.

[0185] In at least one embodiment, in order for a web hosting provider to provide services for both of these clients, application programs which manage network resources hosted by servers must be properly configured.
In at least one embodiment, program configuration process involves defining a set of parameters which control, at least in part, an application program's response to browser requests and which also define, at least in part, server resources available to a particular user.

[0186] In one embodiment, an intranet server 916 is in communication with a network 908 via a communication link. In at least one embodiment, intranet server 916 is in communication with a server manager 918. In at least one embodiment, server manager 918 comprises a database of application program configuration parameters which are being utilized in servers 910, 912, 914. In at least one embodiment, users modify a database 920 via an intranet 916, and a server manager 918 interacts with servers 910, 912, 914 to modify application program parameters so that they match a content of a database 920. In at least one embodiment, a user logs onto an intranet server 916 by connecting to an intranet 916 via computer 902 and entering authentication information, such as a username and password.

[0187] In at least one embodiment, when a user wishes to sign up for new service or modify an existing service, an intranet server 916 authenticates a user and provides a user with an interactive screen display/control panel that allows a user to access configuration parameters for a particular application program. In at least one embodiment, a user is presented with a number of modifiable text boxes that describe aspects of a configuration of a user's web site or other network resource. In at least one embodiment, if a user desires to increase memory space reserved on a server for its web site, a user is provided with a field in which a user specifies a desired memory space. In at least one embodiment, in response to receiving this information, an intranet server 916 updates a database 920.
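The configuration-update flow described in [0186]-[0187] (control panel change, database 920 update, server manager pushing parameters so servers match the database) can be sketched as a toy. The site names, parameter keys, and dict-based "database" are illustrative assumptions.

```python
# Toy stand-ins for database 920 and the parameters live on a hosting server.
database_920 = {"site-alice": {"memory_mb": 256}}
server_parameters = {"server-910": {"site-alice": {"memory_mb": 256}}}

def update_parameter(site, key, value, server):
    """Intranet server records the change; server manager forwards it."""
    database_920[site][key] = value                  # database 920 updated
    server_parameters[server][site][key] = value     # pushed to the server

# User requests more memory space for their web site via the control panel.
update_parameter("site-alice", "memory_mb", 512, "server-910")
print(server_parameters["server-910"]["site-alice"]["memory_mb"])  # 512
```

The essential invariant, as described, is that server-side parameters are kept matching the content of database 920.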
In at least one embodiment, server manager 918 forwards this information to an appropriate server, and a new parameter is used during application program operation. In at least one embodiment, an intranet server 916 is configured to provide users with access to configuration parameters of hosted network resources (e.g., web pages, email, FTP sites, media sites, etc.), for which a user has contracted with a web hosting service provider.

[0188] Figure 10A illustrates a networked computer system 1000A, in accordance with at least one embodiment. In at least one embodiment, networked computer system 1000A comprises a plurality of nodes or personal computers ("PCs") 1002, 1018, 1020. In at least one embodiment, personal computer or node 1002 comprises a processor 1014, memory 1016, video camera 1004, microphone 1006, mouse 1008, speakers 1010, and monitor 1012. In at least one embodiment, PCs 1002, 1018, 1020 may each run one or more desktop servers of an internal network within a given company, for instance, or may be servers of a general network not limited to a specific environment. In at least one embodiment, there is one server per PC node of a network, so that each PC node of a network represents a particular network server, having a particular network URL address. In at least one embodiment, each server defaults to a default web page for that server's user, which may itself contain embedded URLs pointing to further subpages of that user on that server, or to other servers or pages on other servers on a network.

[0189] In at least one embodiment, nodes 1002, 1018, 1020 and other nodes of a network are interconnected via medium 1022. In at least one embodiment, medium 1022 may be a communication channel such as an Integrated Services Digital Network ("ISDN").
In at least one embodiment, various nodes of a networked computer system may be connected through a variety of communication media, including local area networks ("LANs"), plain-old telephone lines ("POTS"), sometimes referred to as public switched telephone networks ("PSTN"), and/or variations thereof. In at least one embodiment, various nodes of a network may also constitute computer system users interconnected via a network such as the Internet. In at least one embodiment, each server on a network (running from a particular node of a network at a given instance) has a unique address or identification within a network, which may be specifiable in terms of a URL.

[0190] In at least one embodiment, a plurality of multi-point conferencing units ("MCUs") may thus be utilized to transmit data to and from various nodes or "endpoints" of a conferencing system. In at least one embodiment, nodes and/or MCUs may be interconnected via an ISDN link or through a local area network ("LAN"), in addition to various other communications media such as nodes connected through the Internet. In at least one embodiment, nodes of a conferencing system may, in general, be connected directly to a communications medium such as a LAN or through an MCU, and a conferencing system may comprise other nodes or elements such as routers, servers, and/or variations thereof.

[0191] In at least one embodiment, processor 1014 is a general-purpose programmable processor. In at least one embodiment, processors of nodes of networked computer system 1000A may also be special-purpose video processors. In at least one embodiment, various peripherals and components of a node such as those of node 1002 may vary from those of other nodes. In at least one embodiment, node 1018 and node 1020 may be configured identically to or differently than node 1002.
In at least one embodiment, a node may be implemented on any suitable computer system in addition to PC systems.

[0192] Figure 10B illustrates a networked computer system 1000B, in accordance with at least one embodiment. In at least one embodiment, system 1000B illustrates a network such as LAN 1024, which may be used to interconnect a variety of nodes that may communicate with each other. In at least one embodiment, attached to LAN 1024 are a plurality of nodes such as PC nodes 1026, 1028, 1030. In at least one embodiment, a node may also be connected to the LAN via a network server or other means. In at least one embodiment, system 1000B comprises other types of nodes or elements, in at least one embodiment including routers, servers, and nodes.

[0193] Figure 10C illustrates a networked computer system 1000C, in accordance with at least one embodiment. In at least one embodiment, system 1000C illustrates a WWW system having communications across a backbone communications network such as Internet 1032, which may be used to interconnect a variety of nodes of a network. In at least one embodiment, WWW is a set of protocols operating on top of the Internet, and allows a graphical interface system to operate thereon for accessing information through the Internet. In at least one embodiment, attached to Internet 1032 in WWW are a plurality of nodes such as PCs 1040, 1042, 1044. In at least one embodiment, a node is interfaced to other nodes of WWW through a WWW HTTP server such as servers 1034, 1036. In at least one embodiment, PC 1044 may be a PC forming a node of network 1032 and itself running its server 1036, although PC 1044 and server 1036 are illustrated separately in Figure 10C for illustrative purposes.

[0194] In at least one embodiment, WWW is a distributed type of application, characterized by WWW HTTP, WWW's protocol, which runs on top of the Internet's transmission control protocol/Internet protocol ("TCP/IP").
In at least one embodiment, WWW may thus be characterized by a set of protocols (i.e., HTTP) running on the Internet as its "backbone."

[0195] In at least one embodiment, a web browser is an application running on a node of a network that, in WWW-compatible type network systems, allows users of a particular server or node to view such information and thus allows a user to search graphical and text-based files that are linked together using hypertext links that are embedded in documents or files available from servers on a network that understand HTTP. In at least one embodiment, when a given web page of a first server associated with a first node is retrieved by a user using another server on a network such as the Internet, a document retrieved may have various hypertext links embedded therein and a local copy of a page is created local to a retrieving user. In at least one embodiment, when a user clicks on a hypertext link, locally-stored information related to a selected hypertext link is typically sufficient to allow a user's machine to open a connection across the Internet to a server indicated by a hypertext link.

[0196] In at least one embodiment, more than one user may be coupled to each HTTP server, through a LAN such as LAN 1038 as illustrated with respect to WWW HTTP server 1034. In at least one embodiment, system 1000C may also comprise other types of nodes or elements. In at least one embodiment, a WWW HTTP server is an application running on a machine, such as a PC. In at least one embodiment, each user may be considered to have a unique "server," as illustrated with respect to PC 1044. In at least one embodiment, a server may be considered to be a server such as WWW HTTP server 1034, which provides access to a network for a LAN or plurality of nodes or plurality of LANs.
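The way a browser discovers embedded hypertext links (URLs) in a retrieved HTML document, as described in [0195], can be sketched with Python's standard-library html.parser. The sample page and the server host names in it are illustrative assumptions.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets of <a> tags, i.e., the embedded hypertext links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A locally-retrieved copy of a page with embedded URLs (illustrative).
page = """<html><body>
<a href="http://server1034.example/page">subpage</a>
<a href="http://server1036.example/">other server</a>
</body></html>"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)
```

Clicking a link then amounts to opening a new connection to the server the extracted URL indicates.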
In at least one embodiment, there are a plurality of users, each having a desktop PC or node of a network, each desktop PC potentially establishing a server for a user thereof. In at least one embodiment, each server is associated with a particular network address or URL, which, when accessed, provides a default web page for that user. In at least one embodiment, a web page may contain further links (embedded URLs) pointing to further subpages of that user on that server, or to other servers on a network or to pages on other servers on a network.

Cloud Computing and Services

[0197] The following figures set forth, without limitation, exemplary cloud-based systems that can be used to implement at least one embodiment.

[0198] In at least one embodiment, cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. In at least one embodiment, users need not have knowledge of, expertise in, or control over technology infrastructure, which can be referred to as "in the cloud," that supports them. In at least one embodiment, cloud computing incorporates infrastructure as a service, platform as a service, software as a service, and other variations that have a common theme of reliance on the Internet for satisfying computing needs of users. In at least one embodiment, a typical cloud deployment, such as in a private cloud (e.g., enterprise network), or a datacenter (DC) in a public cloud (e.g., Internet) can consist of thousands of servers (or alternatively, VMs), hundreds of Ethernet, Fiber Channel or Fiber Channel over Ethernet (FCoE) ports, switching and storage infrastructure, etc. In at least one embodiment, cloud can also consist of network services infrastructure like IPsec VPN hubs, firewalls, load balancers, wide area network (WAN) optimizers etc.
In at least one embodiment, remote subscribers can access cloud applications and services securely by connecting via a VPN tunnel, such as an IPsec VPN tunnel.

[0199] In at least one embodiment, cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

[0200] In at least one embodiment, cloud computing is characterized by on-demand self-service, in which a consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service's provider. In at least one embodiment, cloud computing is characterized by broad network access, in which capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). In at least one embodiment, cloud computing is characterized by resource pooling, in which a provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. In at least one embodiment, there is a sense of location independence in that a customer generally has no control or knowledge over an exact location of provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).

[0201] In at least one embodiment, resources include storage, processing, memory, network bandwidth, and virtual machines.
In at least one embodiment, cloud computing is characterized by rapid elasticity, in which capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. In at least one embodiment, to a consumer, capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. In at least one embodiment, cloud computing is characterized by measured service, in which cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to a type of service (e.g., storage, processing, bandwidth, and active user accounts). In at least one embodiment, resource usage can be monitored, controlled, and reported, providing transparency for both a provider and consumer of a utilized service.

[0202] In at least one embodiment, cloud computing may be associated with various services. In at least one embodiment, cloud Software as a Service (SaaS) may refer to a service in which a capability provided to a consumer is to use a provider's applications running on a cloud infrastructure. In at least one embodiment, applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). In at least one embodiment, consumer does not manage or control underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with a possible exception of limited user-specific application configuration settings.

[0203] In at least one embodiment, cloud Platform as a Service (PaaS) may refer to a service in which a capability provided to a consumer is to deploy onto cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by a provider.
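The rapid elasticity and measured service characteristics described above can be sketched as a toy autoscaler that scales VM count out and in with demand while metering usage. The per-VM capacity, load figures, and one-second intervals are illustrative assumptions.

```python
VM_CAPACITY = 100  # requests/sec one VM can serve (assumed)

def autoscale(load):
    """Return the VM count sized to the current load (scale out or in)."""
    return max(1, -(-load // VM_CAPACITY))  # ceiling division, at least 1 VM

metered_vm_seconds = 0
vms = 1
for load in [50, 450, 120, 30]:   # demand over four 1-second intervals
    vms = autoscale(load)         # rapid elasticity: provision to match load
    metered_vm_seconds += vms     # measured service: meter what was used

print(vms, metered_vm_seconds)
```

The metered total is what a provider would report for billing, giving the transparency to both provider and consumer that the text describes.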
In at least one embodiment, consumer does not manage or control underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over deployed applications and possibly application hosting environment configurations.

[0204] In at least one embodiment, cloud Infrastructure as a Service (IaaS) may refer to a service in which a capability provided to a consumer is to provision processing, storage, networks, and other fundamental computing resources where a consumer is able to deploy and run arbitrary software, which can include operating systems and applications. In at least one embodiment, consumer does not manage or control underlying cloud infrastructure, but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

[0205] In at least one embodiment, cloud computing may be deployed in various ways. In at least one embodiment, a private cloud may refer to a cloud infrastructure that is operated solely for an organization. In at least one embodiment, a private cloud may be managed by an organization or a third party and may exist on-premises or off-premises. In at least one embodiment, a community cloud may refer to a cloud infrastructure that is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). In at least one embodiment, a community cloud may be managed by organizations or a third party and may exist on-premises or off-premises. In at least one embodiment, a public cloud may refer to a cloud infrastructure that is made available to a general public or a large industry group and is owned by an organization providing cloud services.
In at least one embodiment, a hybrid cloud may refer to a cloud infrastructure that is a composition of two or more clouds (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). In at least one embodiment, a cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.

[0206] Figure 11 illustrates one or more components of a system environment 1100 in which services may be offered as third party network services, in accordance with at least one embodiment. In at least one embodiment, a third party network may be referred to as a cloud, cloud network, cloud computing network, and/or variations thereof. In at least one embodiment, system environment 1100 includes one or more client computing devices 1104, 1106, and 1108 that may be used by users to interact with a third party network infrastructure system 1102 that provides third party network services, which may be referred to as cloud computing services. In at least one embodiment, third party network infrastructure system 1102 may comprise one or more computers and/or servers.

[0207] It should be appreciated that third party network infrastructure system 1102 depicted in Figure 11 may have other components than those depicted. Further, Figure 11 depicts an embodiment of a third party network infrastructure system.
In at least one embodiment, third party network infrastructure system 1102 may have more or fewer components than depicted in Figure 11, may combine two or more components, or may have a different configuration or arrangement of components.

[0208] In at least one embodiment, client computing devices 1104, 1106, and 1108 may be configured to operate a client application such as a web browser, a proprietary client application, or some other application, which may be used by a user of a client computing device to interact with third party network infrastructure system 1102 to use services provided by third party network infrastructure system 1102. Although exemplary system environment 1100 is shown with three client computing devices, any number of client computing devices may be supported. In at least one embodiment, other devices such as devices with sensors, etc. may interact with third party network infrastructure system 1102. In at least one embodiment, network(s) 1110 may facilitate communications and exchange of data between client computing devices 1104, 1106, and 1108 and third party network infrastructure system 1102.

[0209] In at least one embodiment, services provided by third party network infrastructure system 1102 may include a host of services that are made available to users of a third party network infrastructure system on demand. In at least one embodiment, various services may also be offered including without limitation online data storage and backup solutions, Web-based email services, hosted office suites and document collaboration services, database management and processing, managed technical support services, and/or variations thereof.
In at least one embodiment, services provided by a third party network infrastructure system can dynamically scale to meet needs of its users.

[0210] In at least one embodiment, a specific instantiation of a service provided by third party network infrastructure system 1102 may be referred to as a "service instance." In at least one embodiment, in general, any service made available to a user via a communication network, such as the Internet, from a third party network service provider's system is referred to as a "third party network service." In at least one embodiment, in a public third party network environment, servers and systems that make up a third party network service provider's system are different from a customer's own on-premises servers and systems. In at least one embodiment, a third party network service provider's system may host an application, and a user may, via a communication network such as the Internet, on demand, order and use an application.

[0211] In at least one embodiment, a service in a computer network third party network infrastructure may include protected computer network access to storage, a hosted database, a hosted web server, a software application, or other service provided by a third party network vendor to a user. In at least one embodiment, a service can include password-protected access to remote storage on a third party network through the Internet. In at least one embodiment, a service can include a web service-based hosted relational database and a script-language middleware engine for private use by a networked developer.
In at least one embodiment, a service can include access to an email software application hosted on a third party network vendor's web site.

[0212] In at least one embodiment, third party network infrastructure system 1102 may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. In at least one embodiment, third party network infrastructure system 1102 may also provide "big data" related computation and analysis services. In at least one embodiment, term "big data" is generally used to refer to extremely large data sets that can be stored and manipulated by analysts and researchers to visualize large amounts of data, detect trends, and/or otherwise interact with data. In at least one embodiment, big data and related applications can be hosted and/or manipulated by an infrastructure system on many levels and at different scales. In at least one embodiment, tens, hundreds, or thousands of processors linked in parallel can act upon such data in order to present it or simulate external forces on data or what it represents. In at least one embodiment, these data sets can involve structured data, such as that organized in a database or otherwise according to a structured model, and/or unstructured data (e.g., emails, images, data blobs (binary large objects), web pages, complex event processing).
In at least one embodiment, by leveraging an ability of an embodiment to relatively quickly focus more (or fewer) computing resources upon an objective, a third party network infrastructure system may be better available to carry out tasks on large data sets based on demand from a business, government agency, research organization, private individual, group of like-minded individuals or organizations, or other entity.

[0213] In at least one embodiment, third party network infrastructure system 1102 may be adapted to automatically provision, manage and track a customer's subscription to services offered by third party network infrastructure system 1102. In at least one embodiment, third party network infrastructure system 1102 may provide third party network services via different deployment models. In at least one embodiment, services may be provided under a public third party network model in which third party network infrastructure system 1102 is owned by an organization selling third party network services and services are made available to a general public or different industry enterprises. In at least one embodiment, services may be provided under a private third party network model in which third party network infrastructure system 1102 is operated solely for a single organization and may provide services for one or more entities within an organization. In at least one embodiment, third party network services may also be provided under a community third party network model in which third party network infrastructure system 1102 and services provided by third party network infrastructure system 1102 are shared by several organizations in a related community.
In at least one embodiment, third party network services may also be provided under a hybrid third party network model, which is a combination of two or more different models.

[0214] In at least one embodiment, services provided by third party network infrastructure system 1102 may include one or more services provided under Software as a Service (SaaS) category, Platform as a Service (PaaS) category, Infrastructure as a Service (IaaS) category, or other categories of services including hybrid services. In at least one embodiment, a customer, via a subscription order, may order one or more services provided by third party network infrastructure system 1102. In at least one embodiment, third party network infrastructure system 1102 then performs processing to provide services in a customer's subscription order.

[0215] In at least one embodiment, services provided by third party network infrastructure system 1102 may include, without limitation, application services, platform services and infrastructure services. In at least one embodiment, application services may be provided by a third party network infrastructure system via a SaaS platform. In at least one embodiment, SaaS platform may be configured to provide third party network services that fall under a SaaS category. In at least one embodiment, SaaS platform may provide capabilities to build and deliver a suite of on-demand applications on an integrated development and deployment platform. In at least one embodiment, SaaS platform may manage and control underlying software and infrastructure for providing SaaS services. In at least one embodiment, by utilizing services provided by a SaaS platform, customers can utilize applications executing on a third party network infrastructure system. In at least one embodiment, customers can acquire application services without a need for customers to purchase separate licenses and support. In at least one embodiment, various different SaaS services may be provided.
In at least one embodiment, this may include, without limitation, services that provide solutions for sales performance management, enterprise integration, and business flexibility for large organizations.

[0216] In at least one embodiment, platform services may be provided by third party network infrastructure system 1102 via a PaaS platform. In at least one embodiment, PaaS platform may be configured to provide third party network services that fall under a PaaS category. In at least one embodiment, platform services may include without limitation services that enable organizations to consolidate existing applications on a shared, common architecture, as well as an ability to build new applications that leverage shared services provided by a platform. In at least one embodiment, PaaS platform may manage and control underlying software and infrastructure for providing PaaS services. In at least one embodiment, customers can acquire PaaS services provided by third party network infrastructure system 1102 without a need for customers to purchase separate licenses and support.

[0217] In at least one embodiment, by utilizing services provided by a PaaS platform, customers can employ programming languages and tools supported by a third party network infrastructure system and also control deployed services. In at least one embodiment, platform services provided by a third party network infrastructure system may include database third party network services, middleware third party network services and third party network services. In at least one embodiment, database third party network services may support shared service deployment models that enable organizations to pool database resources and offer customers a Database as a Service in a form of a database third party network.
In at least one embodiment, middleware third party network services may provide a platform for customers to develop and deploy various business applications, and third party network services may provide a platform for customers to deploy applications, in a third party network infrastructure system.

[0218] In at least one embodiment, various different infrastructure services may be provided by an IaaS platform in a third party network infrastructure system. In at least one embodiment, infrastructure services facilitate management and control of underlying computing resources, such as storage, networks, and other fundamental computing resources for customers utilizing services provided by a SaaS platform and a PaaS platform.

[0219] In at least one embodiment, third party network infrastructure system 1102 may also include infrastructure resources 1130 for providing resources used to provide various services to customers of a third party network infrastructure system. In at least one embodiment, infrastructure resources 1130 may include pre-integrated and optimized combinations of hardware, such as servers, storage, and networking resources to execute services provided by a PaaS platform and a SaaS platform, and other resources.

[0220] In at least one embodiment, resources in third party network infrastructure system 1102 may be shared by multiple users and dynamically re-allocated per demand. In at least one embodiment, resources may be allocated to users in different time zones.
In at least one embodiment, third party network infrastructure system 1102 may enable a first set of users in a first time zone to utilize resources of a third party network infrastructure system for a specified number of hours and then enable a re-allocation of same resources to another set of users located in a different time zone, thereby maximizing utilization of resources.

[0221] In at least one embodiment, a number of internal shared services 1132 may be provided that are shared by different components or modules of third party network infrastructure system 1102 to enable provision of services by third party network infrastructure system 1102. In at least one embodiment, these internal shared services may include, without limitation, a security and identity service, an integration service, an enterprise repository service, an enterprise manager service, a virus scanning and white list service, a high availability, backup and recovery service, a service for enabling third party network support, an email service, a notification service, a file transfer service, and/or variations thereof.

[0222] In at least one embodiment, third party network infrastructure system 1102 may provide comprehensive management of third party network services (e.g., SaaS, PaaS, and IaaS services) in a third party network infrastructure system. In at least one embodiment, third party network management functionality may include capabilities for provisioning, managing and tracking a customer's subscription received by third party network infrastructure system 1102, and/or variations thereof.

[0223] In at least one embodiment, as depicted in Figure 11, third party network management functionality may be provided by one or more modules, such as an order management module 1120, an order orchestration module 1122, an order provisioning module 1124, an order management and monitoring module 1126, and an identity management module 1128.
In at least one embodiment, these modules may include or be provided using one or more computers and/or servers, which may be general purpose computers, specialized server computers, server farms, server clusters, or any other appropriate arrangement and/or combination.

[0224] In at least one embodiment, at step 1134, a customer using a client device, such as client computing devices 1104, 1106 or 1108, may interact with third party network infrastructure system 1102 by requesting one or more services provided by third party network infrastructure system 1102 and placing an order for a subscription for one or more services offered by third party network infrastructure system 1102. In at least one embodiment, a customer may access a third party network User Interface (UI) such as third party network UI 1112, third party network UI 1114 and/or third party network UI 1116 and place a subscription order via these UIs. In at least one embodiment, order information received by third party network infrastructure system 1102 in response to a customer placing an order may include information identifying a customer and one or more services offered by third party network infrastructure system 1102 that a customer intends to subscribe to.

[0225] In at least one embodiment, at step 1136, order information received from a customer may be stored in an order database 1118. In at least one embodiment, if this is a new order, a new record may be created for an order.
In at least one embodiment, order database 1118 can be one of several databases operated by third party network infrastructure system 1102 and operated in conjunction with other system elements.

[0226] In at least one embodiment, at step 1138, order information may be forwarded to an order management module 1120 that may be configured to perform billing and accounting functions related to an order, such as verifying an order, and upon verification, booking an order.

[0227] In at least one embodiment, at step 1140, information regarding an order may be communicated to an order orchestration module 1122 that is configured to orchestrate provisioning of services and resources for an order placed by a customer. In at least one embodiment, order orchestration module 1122 may use services of order provisioning module 1124 for provisioning. In at least one embodiment, order orchestration module 1122 enables management of business processes associated with each order and applies business logic to determine whether an order should proceed to provisioning.

[0228] In at least one embodiment, at step 1142, upon receiving an order for a new subscription, order orchestration module 1122 sends a request to order provisioning module 1124 to allocate resources and configure resources needed to fulfill a subscription order. In at least one embodiment, order provisioning module 1124 enables an allocation of resources for services ordered by a customer. In at least one embodiment, order provisioning module 1124 provides a level of abstraction between third party network services provided by third party network infrastructure system 1102 and a physical implementation layer that is used to provision resources for providing requested services.
In at least one embodiment, this enables order orchestration module 1122 to be isolated from implementation details, such as whether or not services and resources are actually provisioned in real-time or pre-provisioned and only allocated/assigned upon request.

[0229] In at least one embodiment, at step 1144, once services and resources are provisioned, a notification may be sent to subscribing customers indicating that a requested service is now ready for use. In at least one embodiment, information (e.g., a link) may be sent to a customer that enables a customer to start using requested services.

[0230] In at least one embodiment, at step 1146, a customer's subscription order may be managed and tracked by an order management and monitoring module 1126. In at least one embodiment, order management and monitoring module 1126 may be configured to collect usage statistics regarding a customer's use of subscribed services. In at least one embodiment, statistics may be collected for an amount of storage used, an amount of data transferred, a number of users, and an amount of system up time and system down time, and/or variations thereof.

[0231] In at least one embodiment, third party network infrastructure system 1102 may include an identity management module 1128 that is configured to provide identity services, such as access management and authorization services in third party network infrastructure system 1102. In at least one embodiment, identity management module 1128 may control information about customers who wish to utilize services provided by third party network infrastructure system 1102. In at least one embodiment, such information can include information that authenticates identities of such customers and information that describes which actions those customers are authorized to perform relative to various system resources (e.g., files, directories, applications, communication ports, memory segments, etc.).
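As a purely illustrative sketch (not part of any claimed embodiment), the order flow of steps 1134 through 1146 can be modeled as a small pipeline; the class and function names below (Order, process_order) are assumptions introduced for illustration only:

```python
# Hypothetical sketch of the subscription-order flow: placement (1134),
# storage (1136), verification/booking (1138), orchestration (1140),
# provisioning (1142), notification (1144), and monitoring (1146).
from dataclasses import dataclass, field

@dataclass
class Order:
    customer: str
    services: list
    status: str = "placed"              # step 1134: customer places order
    events: list = field(default_factory=list)

def process_order(order: Order) -> Order:
    order.events.append("stored in order database")      # step 1136
    order.events.append("verified and booked")           # step 1138
    order.events.append("orchestration approved")        # step 1140
    for svc in order.services:                           # step 1142
        order.events.append(f"provisioned resources for {svc}")
    order.status = "ready"
    order.events.append("customer notified")             # step 1144
    order.events.append("usage monitoring enabled")      # step 1146
    return order

done = process_order(Order("example-customer", ["database", "middleware"]))
print(done.status)   # ready
```

The sequential structure mirrors the isolation described above: the orchestration step approves the order without knowing whether provisioning happens in real-time or draws on pre-provisioned resources.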
In at least one embodiment, identity management module 1128 may also include management of descriptive information about each customer and about how and by whom that descriptive information can be accessed and modified.

[0232] Figure 12 illustrates a cloud computing environment 1202, in accordance with at least one embodiment. In at least one embodiment, cloud computing environment 1202 comprises one or more computer systems/servers 1204 with which computing devices such as personal digital assistant (PDA) or cellular telephone 1206A, desktop computer 1206B, laptop computer 1206C, and/or automobile computer system 1206N communicate. In at least one embodiment, this allows for infrastructure, platforms and/or software to be offered as services from cloud computing environment 1202, so as to not require each client to separately maintain such resources. It is understood that types of computing devices 1206A-N shown in Figure 12 are intended to be illustrative only and that cloud computing environment 1202 can communicate with any type of computerized device over any type of network and/or network/addressable connection (e.g., using a web browser).

[0233] In at least one embodiment, a computer system/server 1204, which can be denoted as a cloud computing node, is operational with numerous other general purpose or special purpose computing system environments or configurations.
In at least one embodiment, computing systems, environments, and/or configurations that may be suitable for use with computer system/server 1204 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PC's, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and/or variations thereof.

[0234] In at least one embodiment, computer system/server 1204 may be described in a general context of computer system-executable instructions, such as program modules, being executed by a computer system. In at least one embodiment, program modules include routines, programs, objects, components, logic, data structures, and so on, that perform particular tasks or implement particular abstract data types. In at least one embodiment, exemplary computer system/server 1204 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In at least one embodiment, in a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.

[0235] Figure 13 illustrates a set of functional abstraction layers provided by cloud computing environment 1202 (Figure 12), in accordance with at least one embodiment. It should be understood in advance that components, layers, and functions shown in Figure 13 are intended to be illustrative only, and components, layers, and functions may vary.

[0236] In at least one embodiment, hardware and software layer 1302 includes hardware and software components.
In at least one embodiment, hardware components include mainframes, various RISC (Reduced Instruction Set Computer) architecture based servers, various computing systems, supercomputing systems, storage devices, networks, networking components, and/or variations thereof. In at least one embodiment, software components include network application server software, various application server software, various database software, and/or variations thereof.

[0237] In at least one embodiment, virtualization layer 1304 provides an abstraction layer from which following exemplary virtual entities may be provided: virtual servers, virtual storage, virtual networks, including virtual private networks, virtual applications, virtual clients, and/or variations thereof.

[0238] In at least one embodiment, management layer 1306 provides various functions. In at least one embodiment, resource provisioning provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within a cloud computing environment. In at least one embodiment, metering provides usage tracking as resources are utilized within a cloud computing environment, and billing or invoicing for consumption of these resources. In at least one embodiment, resources may comprise application software licenses. In at least one embodiment, security provides identity verification for users and tasks, as well as protection for data and other resources. In at least one embodiment, user interface provides access to a cloud computing environment for both users and system administrators. In at least one embodiment, service level management provides cloud computing resource allocation and management such that required service levels are met.
In at least one embodiment, Service Level Agreement (SLA) management provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.

[0239] In at least one embodiment, workloads layer 1308 provides functionality for which a cloud computing environment is utilized. In at least one embodiment, workloads and functions which may be provided from this layer include: mapping and navigation, software development and management, educational services, data analytics and processing, transaction processing, and service delivery.

Supercomputing

[0240] The following figures set forth, without limitation, exemplary supercomputer-based systems that can be used to implement at least one embodiment.

[0241] In at least one embodiment, a supercomputer may refer to a hardware system exhibiting substantial parallelism and comprising at least one chip, where chips in a system are interconnected by a network and are placed in hierarchically organized enclosures. In at least one embodiment, a large hardware system filling a machine room, with several racks, each containing several boards/rack modules, each containing several chips, all interconnected by a scalable network, is at least one embodiment of a supercomputer. In at least one embodiment, a single rack of such a large hardware system is at least one other embodiment of a supercomputer. In at least one embodiment, a single chip exhibiting substantial parallelism and containing several hardware components can equally be considered to be a supercomputer, since, as feature sizes decrease, an amount of hardware that can be incorporated in a single chip may also increase.

[0242] Figure 14 illustrates a supercomputer at a chip level, in accordance with at least one embodiment. In at least one embodiment, inside an FPGA or ASIC chip, main computation is performed within finite state machines (1404) called thread units.
In at least one embodiment, task and synchronization networks (1402) connect finite state machines and are used to dispatch threads and execute operations in correct order. In at least one embodiment, a multi-level partitioned on-chip cache hierarchy (1408, 1412) is accessed using memory networks (1406, 1410). In at least one embodiment, off-chip memory is accessed using memory controllers (1416) and an off-chip memory network (1414). In at least one embodiment, I/O controller (1418) is used for cross-chip communication when a design does not fit in a single logic chip.

[0243] Figure 15 illustrates a supercomputer at a rack module level, in accordance with at least one embodiment. In at least one embodiment, within a rack module, there are multiple FPGA or ASIC chips (1502) that are connected to one or more DRAM units (1504) which constitute main accelerator memory. In at least one embodiment, each FPGA/ASIC chip is connected to its neighbor FPGA/ASIC chip using wide busses on a board, with differential high speed signaling (1506). In at least one embodiment, each FPGA/ASIC chip is also connected to at least one high-speed serial communication cable.

[0244] Figure 16 illustrates a supercomputer at a rack level, in accordance with at least one embodiment. Figure 17 illustrates a supercomputer at a whole system level, in accordance with at least one embodiment. In at least one embodiment, referring to Figure 16 and Figure 17, between rack modules in a rack and across racks throughout an entire system, high-speed serial optical or copper cables (1602, 1702) are used to realize a scalable, possibly incomplete hypercube network. In at least one embodiment, one of FPGA/ASIC chips of an accelerator is connected to a host system through a PCI-Express connection (1704).
In at least one embodiment, host system comprises a host microprocessor (1708) that a software part of an application runs on and a memory consisting of one or more host memory DRAM units (1706) that is kept coherent with memory on an accelerator. In at least one embodiment, host system can be a separate module on one of racks, or can be integrated with one of a supercomputer's modules. In at least one embodiment, cube-connected cycles topology provides communication links to create a hypercube network for a large supercomputer. In at least one embodiment, a small group of FPGA/ASIC chips on a rack module can act as a single hypercube node, such that a total number of external links of each group is increased, compared to a single chip. In at least one embodiment, a group contains chips A, B, C and D on a rack module with internal wide differential busses connecting A, B, C and D in a torus organization. In at least one embodiment, there are 12 serial communication cables connecting a rack module to an outside world. In at least one embodiment, chip A on a rack module connects to serial communication cables 0, 1, 2. In at least one embodiment, chip B connects to cables 3, 4, 5. In at least one embodiment, chip C connects to cables 6, 7, 8. In at least one embodiment, chip D connects to cables 9, 10, 11. In at least one embodiment, an entire group {A, B, C, D} constituting a rack module can form a hypercube node within a supercomputer system, with up to 2^12 = 4096 rack modules (16384 FPGA/ASIC chips). In at least one embodiment, for chip A to send a message out on link 4 of group {A, B, C, D}, a message has to be routed first to chip B with an on-board differential wide bus connection. In at least one embodiment, a message arriving into a group {A, B, C, D} on link 4 (i.e., arriving at B) destined to chip A also has to be routed first to a correct destination chip (A) internally within a group {A, B, C, D}.
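As an illustrative sketch (not part of any claimed embodiment), the link ownership and two-hop routing described above for a group {A, B, C, D} can be modeled as follows; the function names are assumptions introduced for illustration only:

```python
# Illustrative model of a rack-module group: each of 4 chips owns 3 of the
# 12 external serial links (A: 0-2, B: 3-5, C: 6-8, D: 9-11). A message
# leaving on a link owned by another chip is first forwarded over the
# on-board wide differential bus to that chip.
LINKS_PER_CHIP = 3
CHIPS = ["A", "B", "C", "D"]

def owner_of_link(link: int) -> str:
    """Return the chip that a given external link is wired to."""
    if not 0 <= link < LINKS_PER_CHIP * len(CHIPS):
        raise ValueError("link out of range")
    return CHIPS[link // LINKS_PER_CHIP]

def route_out(src_chip: str, link: int) -> list:
    """Hops taken for src_chip to send a message out on a given link."""
    owner = owner_of_link(link)
    # If another chip owns the link, add an on-board hop to that chip.
    return [src_chip] if src_chip == owner else [src_chip, owner]

# Chip A sending on link 4 (owned by chip B) requires an on-board hop:
print(route_out("A", 4))   # ['A', 'B']
# With 12 external links per node, a 12-dimensional hypercube supports
# up to 2**12 = 4096 rack-module nodes:
print(2 ** 12)             # 4096
```

The same two-hop pattern applies symmetrically to incoming traffic: a message arriving at B on link 4 but destined for A takes one internal hop before delivery.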
In at least one embodiment, parallel supercomputer systems of other sizes may also be implemented.

Artificial Intelligence

[0245] The following figures set forth, without limitation, exemplary artificial intelligence-based systems that can be used to implement at least one embodiment.

[0246] Figure 18A illustrates inference and/or training logic 1815 used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 1815 are provided below in conjunction with Figures 18A and/or 18B.

[0247] In at least one embodiment, inference and/or training logic 1815 may include, without limitation, code and/or data storage 1801 to store forward and/or output weight and/or input/output data, and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, training logic 1815 may include, or be coupled to, code and/or data storage 1801 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, code and/or data storage 1801 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments.
In at least one embodiment, any portion of code and/or data storage 1801 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.

[0248] In at least one embodiment, any portion of code and/or data storage 1801 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 1801 may be cache memory, dynamic randomly addressable memory ("DRAM"), static randomly addressable memory ("SRAM"), non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage 1801 is internal or external to a processor, or comprises DRAM, SRAM, flash or some other storage type, may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.

[0249] In at least one embodiment, inference and/or training logic 1815 may include, without limitation, a code and/or data storage 1805 to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, code and/or data storage 1805 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments.
In at least one embodiment, training logic 1815 may include, or be coupled to, code and/or data storage 1805 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)).

[0250] In at least one embodiment, code, such as graph code, causes loading of weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, any portion of code and/or data storage 1805 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of code and/or data storage 1805 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 1805 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage 1805 is internal or external to a processor, or comprises DRAM, SRAM, flash memory or some other storage type, may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.

[0251] In at least one embodiment, code and/or data storage 1801 and code and/or data storage 1805 may be separate storage structures. In at least one embodiment, code and/or data storage 1801 and code and/or data storage 1805 may be a combined storage structure. In at least one embodiment, code and/or data storage 1801 and code and/or data storage 1805 may be partially combined and partially separate.
In at least one embodiment, any portion of code and/or data storage 1801 and code and/or data storage 1805 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.

[0252] In at least one embodiment, inference and/or training logic 1815 may include, without limitation, one or more arithmetic logic unit(s) ("ALU(s)") 1810, including integer and/or floating point units, to perform logical and/or mathematical operations based, at least in part, on or indicated by training and/or inference code (e.g., graph code), a result of which may produce activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage 1820 that are functions of input/output and/or weight parameter data stored in code and/or data storage 1801 and/or code and/or data storage 1805. In at least one embodiment, activations stored in activation storage 1820 are generated according to linear algebraic and/or matrix-based mathematics performed by ALU(s) 1810 in response to performing instructions or other code, wherein weight values stored in code and/or data storage 1805 and/or data storage 1801 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in code and/or data storage 1805 or code and/or data storage 1801 or another storage on or off-chip.

[0253] In at least one embodiment, ALU(s) 1810 are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s) 1810 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a coprocessor).
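The relationship described in paragraph [0252], activations being linear-algebraic functions of stored weights, inputs, and bias values, can be illustrated with a minimal sketch. This is a generic forward-pass computation, not the patent's hardware; the ReLU nonlinearity and the specific values are assumptions for illustration.

```python
# Minimal sketch of the computation ALUs 1810 would perform: activations are
# functions of weight and input operands (plus bias values) held in code/data
# storage, with results written to activation storage.
import numpy as np

def layer_activations(weights, inputs, bias):
    """Compute a = ReLU(W @ x + b) using linear algebra on stored operands."""
    pre_activation = weights @ inputs + bias   # weights and bias as operands
    return np.maximum(pre_activation, 0.0)     # elementwise nonlinearity

W = np.array([[1.0, -1.0], [0.5, 2.0]])        # "code and/or data storage"
x = np.array([3.0, 1.0])                       # input/output data
b = np.array([0.0, -1.0])                      # bias values
a = layer_activations(W, x, b)                 # written to "activation storage"
```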
In at least one embodiment, ALUs 1810 may be included within a processor's execution units or otherwise within a bank of ALUs accessible by a processor's execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.). In at least one embodiment, code and/or data storage 1801, code and/or data storage 1805, and activation storage 1820 may share a processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits. In at least one embodiment, any portion of activation storage 1820 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. Furthermore, inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor's fetch, decode, scheduling, execution, retirement and/or other logical circuits.

[0254] In at least one embodiment, activation storage 1820 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, activation storage 1820 may be completely or partially within or external to one or more processors or other logical circuits.
In at least one embodiment, a choice of whether activation storage 1820 is internal or external to a processor, or comprises DRAM, SRAM, flash memory or some other storage type, may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.

[0255] In at least one embodiment, inference and/or training logic 1815 illustrated in Figure 18A may be used in conjunction with an application-specific integrated circuit ("ASIC"), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., "Lake Crest") processor from Intel Corp. In at least one embodiment, inference and/or training logic 1815 illustrated in Figure 18A may be used in conjunction with central processing unit ("CPU") hardware, graphics processing unit ("GPU") hardware or other hardware, such as field programmable gate arrays ("FPGAs").

[0256] Figure 18B illustrates inference and/or training logic 1815, according to at least one embodiment. In at least one embodiment, inference and/or training logic 1815 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 1815 illustrated in Figure 18B may be used in conjunction with an application-specific integrated circuit (ASIC), such as TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., "Lake Crest") processor from Intel Corp.
In at least one embodiment, inference and/or training logic 1815 illustrated in Figure 18B may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware or other hardware, such as field programmable gate arrays (FPGAs). In at least one embodiment, inference and/or training logic 1815 includes, without limitation, code and/or data storage 1801 and code and/or data storage 1805, which may be used to store code (e.g., graph code), weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in Figure 18B, each of code and/or data storage 1801 and code and/or data storage 1805 is associated with a dedicated computational resource, such as computational hardware 1802 and computational hardware 1806, respectively. In at least one embodiment, each of computational hardware 1802 and computational hardware 1806 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in code and/or data storage 1801 and code and/or data storage 1805, respectively, a result of which is stored in activation storage 1820.

[0257] In at least one embodiment, each of code and/or data storage 1801 and 1805 and corresponding computational hardware 1802 and 1806, respectively, correspond to different layers of a neural network, such that resulting activation from one storage/computational pair 1801/1802 of code and/or data storage 1801 and computational hardware 1802 is provided as an input to a next storage/computational pair 1805/1806 of code and/or data storage 1805 and computational hardware 1806, in order to mirror a conceptual organization of a neural network. In at least one embodiment, each of storage/computational pairs 1801/1802 and 1805/1806 may correspond to more than one neural network layer.
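The storage/computational pairing of paragraphs [0256]-[0257] can be sketched as a chain in which each layer's dedicated compute operates only on its own storage and its activation feeds the next pair. This is a conceptual software analogy, not the dedicated hardware itself; the class name, weights, and tanh nonlinearity are illustrative assumptions.

```python
# Sketch of storage/computational pairs: each pair couples its own parameter
# storage (e.g., 1801) with dedicated compute (e.g., 1802), and the resulting
# activation is consumed as input by the next pair, mirroring network layers.
import numpy as np

class StorageComputePair:
    def __init__(self, weights):
        self.storage = np.asarray(weights)          # code/data storage

    def compute(self, activation):                  # computational hardware
        # Compute operates only on this pair's own stored information.
        return np.tanh(self.storage @ activation)   # result -> activation storage

pair_1801_1802 = StorageComputePair([[0.5, 0.0], [0.0, 0.5]])
pair_1805_1806 = StorageComputePair([[1.0, 1.0]])

x = np.array([1.0, -1.0])
hidden = pair_1801_1802.compute(x)       # activation from first pair
output = pair_1805_1806.compute(hidden)  # fed as input to next pair
```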
In at least one embodiment, additional storage/computation pairs (not shown) subsequent to or in parallel with storage/computation pairs 1801/1802 and 1805/1806 may be included in inference and/or training logic 1815.

[0258] Figure 19 illustrates training and deployment of a deep neural network, according to at least one embodiment. In at least one embodiment, untrained neural network 1906 is trained using a training dataset 1902. In at least one embodiment, training framework 1904 is a PyTorch framework, whereas in other embodiments, training framework 1904 is a TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework. In at least one embodiment, training framework 1904 trains an untrained neural network 1906 and enables it to be trained using processing resources described herein to generate a trained neural network 1908. In at least one embodiment, weights may be chosen randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in either a supervised, partially supervised, or unsupervised manner.

[0259] In at least one embodiment, untrained neural network 1906 is trained using supervised learning, wherein training dataset 1902 includes an input paired with a desired output for an input, or where training dataset 1902 includes input having a known output and an output of neural network 1906 is manually graded. In at least one embodiment, untrained neural network 1906 is trained in a supervised manner and processes inputs from training dataset 1902 and compares resulting outputs against a set of expected or desired outputs. In at least one embodiment, errors are then propagated back through untrained neural network 1906. In at least one embodiment, training framework 1904 adjusts weights that control untrained neural network 1906.
In at least one embodiment, training framework 1904 includes tools to monitor how well untrained neural network 1906 is converging towards a model, such as trained neural network 1908, suitable for generating correct answers, such as in result 1914, based on input data such as a new dataset 1912. In at least one embodiment, training framework 1904 trains untrained neural network 1906 repeatedly while adjusting weights to refine an output of untrained neural network 1906 using a loss function and adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, training framework 1904 trains untrained neural network 1906 until untrained neural network 1906 achieves a desired accuracy. In at least one embodiment, trained neural network 1908 can then be deployed to implement any number of machine learning operations.

[0260] In at least one embodiment, untrained neural network 1906 is trained using unsupervised learning, wherein untrained neural network 1906 attempts to train itself using unlabeled data. In at least one embodiment, unsupervised learning training dataset 1902 will include input data without any associated output data or "ground truth" data. In at least one embodiment, untrained neural network 1906 can learn groupings within training dataset 1902 and can determine how individual inputs are related to training dataset 1902. In at least one embodiment, unsupervised training can be used to generate a self-organizing map in trained neural network 1908 capable of performing operations useful in reducing dimensionality of new dataset 1912. In at least one embodiment, unsupervised training can also be used to perform anomaly detection, which allows identification of data points in new dataset 1912 that deviate from normal patterns of new dataset 1912.

[0261] In at least one embodiment, semi-supervised learning may be used, which is a technique in which training dataset 1902 includes a mix of labeled and unlabeled data.
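The supervised loop of paragraphs [0259]-[0260] (forward pass, comparison against a desired output, backward error propagation, weight adjustment via stochastic gradient descent until a desired accuracy) can be sketched on a toy linear model. This is illustrative only; the dataset, learning rate, and squared-error loss are assumptions, not the framework's internals.

```python
# Toy supervised training loop: forward pass, compare output to desired output,
# propagate error back, and let SGD adjust weights until convergence.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
true_w = np.array([2.0, -3.0])
y = X @ true_w                           # labeled "training dataset"

w = np.zeros(2)                          # untrained network's weights
lr = 0.1                                 # learning rate (assumed)
for _ in range(200):                     # train repeatedly, adjusting weights
    i = rng.integers(len(X))             # stochastic: one sample per step
    pred = X[i] @ w                      # forward pass
    error = pred - y[i]                  # compare against desired output
    w -= lr * error * X[i]               # gradient step on squared-error loss
```

After training, `w` approximates `true_w`, the analogue of the framework monitoring convergence toward a model that generates correct answers.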
In at least one embodiment, training framework 1904 may be used to perform incremental learning, such as through transferred learning techniques. In at least one embodiment, incremental learning enables trained neural network 1908 to adapt to new dataset 1912 without forgetting knowledge instilled within trained neural network 1908 during initial training.

5G Networks

[0262] The following figures set forth, without limitation, exemplary 5G network-based systems that can be used to implement at least one embodiment.

[0263] Figure 20 illustrates an architecture of a system 2000 of a network, in accordance with at least one embodiment. In at least one embodiment, system 2000 is shown to include a user equipment (UE) 2002 and a UE 2004. In at least one embodiment, UEs 2002 and 2004 are illustrated as smartphones (e.g., handheld touchscreen mobile computing devices connectable to one or more cellular networks) but may also comprise any mobile or non-mobile computing device, such as Personal Data Assistants (PDAs), pagers, laptop computers, desktop computers, wireless handsets, or any computing device including a wireless communications interface.

[0264] In at least one embodiment, any of UEs 2002 and 2004 can comprise an Internet of Things (IoT) UE, which can comprise a network access layer designed for low-power IoT applications utilizing short-lived UE connections. In at least one embodiment, an IoT UE can utilize technologies such as machine-to-machine (M2M) or machine-type communications (MTC) for exchanging data with an MTC server or device via a public land mobile network (PLMN), Proximity-Based Service (ProSe) or device-to-device (D2D) communication, sensor networks, or IoT networks. In at least one embodiment, an M2M or MTC exchange of data may be a machine-initiated exchange of data.
In at least one embodiment, an IoT network describes interconnecting IoT UEs, which may include uniquely identifiable embedded computing devices (within Internet infrastructure), with short-lived connections. In at least one embodiment, IoT UEs may execute background applications (e.g., keep-alive messages, status updates, etc.) to facilitate connections of an IoT network.

[0265] In at least one embodiment, UEs 2002 and 2004 may be configured to connect, e.g., communicatively couple, with a radio access network (RAN) 2016. In at least one embodiment, RAN 2016 may be, in at least one embodiment, an Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN), a NextGen RAN (NG RAN), or some other type of RAN. In at least one embodiment, UEs 2002 and 2004 utilize connections 2012 and 2014, respectively, each of which comprises a physical communications interface or layer. In at least one embodiment, connections 2012 and 2014 are illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols, such as a Global System for Mobile Communications (GSM) protocol, a code-division multiple access (CDMA) network protocol, a Push-to-Talk (PTT) protocol, a PTT over Cellular (POC) protocol, a Universal Mobile Telecommunications System (UMTS) protocol, a 3GPP Long Term Evolution (LTE) protocol, a fifth generation (5G) protocol, a New Radio (NR) protocol, and variations thereof.

[0266] In at least one embodiment, UEs 2002 and 2004 may further directly exchange communication data via a ProSe interface 2006.
In at least one embodiment, ProSe interface 2006 may alternatively be referred to as a sidelink interface comprising one or more logical channels, including but not limited to a Physical Sidelink Control Channel (PSCCH), a Physical Sidelink Shared Channel (PSSCH), a Physical Sidelink Discovery Channel (PSDCH), and a Physical Sidelink Broadcast Channel (PSBCH).

[0267] In at least one embodiment, UE 2004 is shown to be configured to access an access point (AP) 2010 via connection 2008. In at least one embodiment, connection 2008 can comprise a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, wherein AP 2010 would comprise a wireless fidelity (WiFi®) router. In at least one embodiment, AP 2010 is shown to be connected to an Internet without connecting to a core network of a wireless system.

[0268] In at least one embodiment, RAN 2016 can include one or more access nodes that enable connections 2012 and 2014. In at least one embodiment, these access nodes (ANs) can be referred to as base stations (BSs), NodeBs, evolved NodeBs (eNBs), next Generation NodeBs (gNB), RAN nodes, and so forth, and can comprise ground stations (e.g., terrestrial access points) or satellite stations providing coverage within a geographic area (e.g., a cell). In at least one embodiment, RAN 2016 may include one or more RAN nodes for providing macrocells, e.g., macro RAN node 2018, and one or more RAN nodes for providing femtocells or picocells (e.g., cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells), e.g., low power (LP) RAN node 2020.

[0269] In at least one embodiment, any of RAN nodes 2018 and 2020 can terminate an air interface protocol and can be a first point of contact for UEs 2002 and 2004.
In at least one embodiment, any of RAN nodes 2018 and 2020 can fulfill various logical functions for RAN 2016 including, but not limited to, radio network controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management.

[0270] In at least one embodiment, UEs 2002 and 2004 can be configured to communicate using Orthogonal Frequency-Division Multiplexing (OFDM) communication signals with each other or with any of RAN nodes 2018 and 2020 over a multi-carrier communication channel in accordance with various communication techniques, such as, but not limited to, an Orthogonal Frequency Division Multiple Access (OFDMA) communication technique (e.g., for downlink communications) or a Single Carrier Frequency Division Multiple Access (SC-FDMA) communication technique (e.g., for uplink and ProSe or sidelink communications), and/or variations thereof. In at least one embodiment, OFDM signals can comprise a plurality of orthogonal sub-carriers.

[0271] In at least one embodiment, a downlink resource grid can be used for downlink transmissions from any of RAN nodes 2018 and 2020 to UEs 2002 and 2004, while uplink transmissions can utilize similar techniques. In at least one embodiment, a grid can be a time-frequency grid, called a resource grid or time-frequency resource grid, which is a physical resource in a downlink in each slot. In at least one embodiment, such a time-frequency plane representation is a common practice for OFDM systems, which makes it intuitive for radio resource allocation. In at least one embodiment, each column and each row of a resource grid corresponds to one OFDM symbol and one OFDM subcarrier, respectively. In at least one embodiment, a duration of a resource grid in a time domain corresponds to one slot in a radio frame. In at least one embodiment, a smallest time-frequency unit in a resource grid is denoted as a resource element.
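The resource-grid structure of paragraph [0271] (columns as OFDM symbols, rows as subcarriers, one cell per resource element) can be sketched as a 2D array. The dimensions below (6 resource blocks of 12 subcarriers, 7 symbols per slot as with LTE normal cyclic prefix) are assumed for illustration, not drawn from the text.

```python
# Sketch of a one-slot time-frequency resource grid: rows index OFDM
# subcarriers, columns index OFDM symbols, and each (subcarrier, symbol)
# cell is one resource element, the smallest allocatable time-frequency unit.
import numpy as np

N_SUBCARRIERS = 72    # assumed: 6 resource blocks x 12 subcarriers
N_SYMBOLS = 7         # assumed: OFDM symbols per slot (normal cyclic prefix)

grid = np.zeros((N_SUBCARRIERS, N_SYMBOLS), dtype=complex)

def allocate(grid, subcarrier, symbol, mod_symbol):
    """Map one modulation symbol onto a single resource element."""
    grid[subcarrier, symbol] = mod_symbol

# Place one QPSK symbol on the first resource element of the slot.
allocate(grid, subcarrier=0, symbol=0, mod_symbol=(1 + 1j) / np.sqrt(2))
n_resource_elements = grid.size  # 72 * 7 = 504 REs in this slot
```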
In at least one embodiment, each resource grid comprises a number of resource blocks, which describe a mapping of certain physical channels to resource elements. In at least one embodiment, each resource block comprises a collection of resource elements. In at least one embodiment, in a frequency domain, this may represent a smallest quantity of resources that currently can be allocated. In at least one embodiment, there are several different physical downlink channels that are conveyed using such resource blocks.

[0272] In at least one embodiment, a physical downlink shared channel (PDSCH) may carry user data and higher-layer signaling to UEs 2002 and 2004. In at least one embodiment, a physical downlink control channel (PDCCH) may carry information about a transport format and resource allocations related to a PDSCH channel, among other things. In at least one embodiment, it may also inform UEs 2002 and 2004 about a transport format, resource allocation, and HARQ (Hybrid Automatic Repeat Request) information related to an uplink shared channel. In at least one embodiment, typically, downlink scheduling (assigning control and shared channel resource blocks to UE 2002 within a cell) may be performed at any of RAN nodes 2018 and 2020 based on channel quality information fed back from any of UEs 2002 and 2004. In at least one embodiment, downlink resource assignment information may be sent on a PDCCH used for (e.g., assigned to) each of UEs 2002 and 2004.

[0273] In at least one embodiment, a PDCCH may use control channel elements (CCEs) to convey control information. In at least one embodiment, before being mapped to resource elements, PDCCH complex-valued symbols may first be organized into quadruplets, which may then be permuted using a sub-block interleaver for rate matching.
In at least one embodiment, each PDCCH may be transmitted using one or more of these CCEs, where each CCE may correspond to nine sets of four physical resource elements known as resource element groups (REGs). In at least one embodiment, four Quadrature Phase Shift Keying (QPSK) symbols may be mapped to each REG. In at least one embodiment, PDCCH can be transmitted using one or more CCEs, depending on a size of a downlink control information (DCI) and a channel condition. In at least one embodiment, there can be four or more different PDCCH formats defined in LTE with different numbers of CCEs (e.g., aggregation level, L=1, 2, 4, or 8).

[0274] In at least one embodiment, an enhanced physical downlink control channel (EPDCCH) that uses PDSCH resources may be utilized for control information transmission. In at least one embodiment, EPDCCH may be transmitted using one or more enhanced control channel elements (ECCEs). In at least one embodiment, each ECCE may correspond to nine sets of four physical resource elements known as enhanced resource element groups (EREGs). In at least one embodiment, an ECCE may have other numbers of EREGs in some situations.

[0275] In at least one embodiment, RAN 2016 is shown to be communicatively coupled to a core network (CN) 2038 via an S1 interface 2022. In at least one embodiment, CN 2038 may be an evolved packet core (EPC) network, a NextGen Packet Core (NPC) network, or some other type of CN. In at least one embodiment, S1 interface 2022 is split into two parts: S1-U interface 2026, which carries traffic data between RAN nodes 2018 and 2020 and serving gateway (S-GW) 2030, and an S1-mobility management entity (MME) interface 2024, which is a signaling interface between RAN nodes 2018 and 2020 and MMEs 2028.

[0276] In at least one embodiment, CN 2038 comprises MMEs 2028, S-GW 2030, Packet Data Network (PDN) Gateway (P-GW) 2034, and a home subscriber server (HSS) 2032.
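The PDCCH sizing figures in paragraph [0273] above reduce to simple arithmetic: one CCE is nine REGs of four resource elements, each REG carries four QPSK symbols, and QPSK conveys two bits per symbol, so raw (pre-rate-matching) capacity scales with the aggregation level L.

```python
# Worked arithmetic for PDCCH capacity per the CCE/REG structure described
# above: 1 CCE = 9 REGs = 36 resource elements, one QPSK symbol per RE.
REGS_PER_CCE = 9
RES_PER_REG = 4
QPSK_BITS_PER_SYMBOL = 2          # QPSK carries 2 bits per modulation symbol

def pdcch_capacity_bits(aggregation_level: int) -> int:
    """Raw bit capacity of a PDCCH at aggregation level L (1, 2, 4, or 8)."""
    symbols = aggregation_level * REGS_PER_CCE * RES_PER_REG
    return symbols * QPSK_BITS_PER_SYMBOL

capacities = {L: pdcch_capacity_bits(L) for L in (1, 2, 4, 8)}
# e.g., L=1 yields 36 QPSK symbols (72 bits); L=8 yields 288 symbols (576 bits)
```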
In at least one embodiment, MMEs 2028 may be similar in function to a control plane of legacy Serving General Packet Radio Service (GPRS) Support Nodes (SGSN). In at least one embodiment, MMEs 2028 may manage mobility aspects in access such as gateway selection and tracking area list management. In at least one embodiment, HSS 2032 may comprise a database for network users, including subscription-related information to support network entities' handling of communication sessions. In at least one embodiment, CN 2038 may comprise one or several HSSs 2032, depending on a number of mobile subscribers, on a capacity of an equipment, on an organization of a network, etc. In at least one embodiment, HSS 2032 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc.

[0277] In at least one embodiment, S-GW 2030 may terminate an S1 interface 2022 towards RAN 2016, and routes data packets between RAN 2016 and CN 2038. In at least one embodiment, S-GW 2030 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. In at least one embodiment, other responsibilities may include lawful intercept, charging, and some policy enforcement.

[0278] In at least one embodiment, P-GW 2034 may terminate an SGi interface toward a PDN. In at least one embodiment, P-GW 2034 may route data packets between an EPC network 2038 and external networks such as a network including application server 2040 (alternatively referred to as application function (AF)) via an Internet Protocol (IP) interface 2042. In at least one embodiment, application server 2040 may be an element offering applications that use IP bearer resources with a core network (e.g., UMTS Packet Services (PS) domain, LTE PS data services, etc.). In at least one embodiment, P-GW 2034 is shown to be communicatively coupled to an application server 2040 via an IP communications interface 2042.
In at least one embodiment, application server 2040 can also be configured to support one or more communication services (e.g., Voice-over-Internet Protocol (VoIP) sessions, PTT sessions, group communication sessions, social networking services, etc.) for UEs 2002 and 2004 via CN 2038.

[0279] In at least one embodiment, P-GW 2034 may further be a node for policy enforcement and charging data collection. In at least one embodiment, Policy and Charging Rules Function (PCRF) 2036 is a policy and charging control element of CN 2038. In at least one embodiment, in a non-roaming scenario, there may be a single PCRF in a Home Public Land Mobile Network (HPLMN) associated with a UE's Internet Protocol Connectivity Access Network (IP-CAN) session. In at least one embodiment, in a roaming scenario with local breakout of traffic, there may be two PCRFs associated with a UE's IP-CAN session: a Home PCRF (H-PCRF) within an HPLMN and a Visited PCRF (V-PCRF) within a Visited Public Land Mobile Network (VPLMN). In at least one embodiment, PCRF 2036 may be communicatively coupled to application server 2040 via P-GW 2034. In at least one embodiment, application server 2040 may signal PCRF 2036 to indicate a new service flow and select an appropriate Quality of Service (QoS) and charging parameters. In at least one embodiment, PCRF 2036 may provision this rule into a Policy and Charging Enforcement Function (PCEF) (not shown) with an appropriate traffic flow template (TFT) and QoS class identifier (QCI), which commences a QoS and charging as specified by application server 2040.

[0280] Figure 21 illustrates an architecture of a system 2100 of a network in accordance with some embodiments.
In at least one embodiment, system 2100 is shown to include a UE 2102, a 5G access node or RAN node (shown as (R)AN node 2108), a User Plane Function (shown as UPF 2104), a Data Network (DN 2106), which may be, in at least one embodiment, operator services, Internet access or 3rd party services, and a 5G Core Network (5GC) (shown as CN 2110).

[0281] In at least one embodiment, CN 2110 includes an Authentication Server Function (AUSF 2114); a Core Access and Mobility Management Function (AMF 2112); a Session Management Function (SMF 2118); a Network Exposure Function (NEF 2116); a Policy Control Function (PCF 2122); a Network Function (NF) Repository Function (NRF 2120); a Unified Data Management (UDM 2124); and an Application Function (AF 2126). In at least one embodiment, CN 2110 may also include other elements that are not shown, such as a Structured Data Storage network function (SDSF), an Unstructured Data Storage network function (UDSF), and variations thereof.

[0282] In at least one embodiment, UPF 2104 may act as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to DN 2106, and a branching point to support multi-homed PDU sessions. In at least one embodiment, UPF 2104 may also perform packet routing and forwarding, packet inspection, enforce user plane part of policy rules, lawfully intercept packets (UP collection), traffic usage reporting, perform QoS handling for user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform Uplink Traffic verification (e.g., SDF to QoS flow mapping), transport level packet marking in uplink and downlink, and downlink packet buffering and downlink data notification triggering. In at least one embodiment, UPF 2104 may include an uplink classifier to support routing traffic flows to a data network.
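Two of the UPF duties listed in paragraph [0282], packet filtering/gating against policy rules and uplink classification of flows toward a data network, can be sketched loosely as below. This is a toy software analogy, not a 3GPP implementation; the rule table, dictionary-based packet representation, and class names are all assumptions.

```python
# Loose sketch of two UPF user-plane roles: gate packets per policy rules,
# and classify uplink flows toward a destination data network.
POLICY_RULES = {                      # assumed toy policy: allowed dest ports
    "allowed_ports": {80, 443},
}

UPLINK_CLASSES = {                    # assumed classifier table: dest -> DN
    "10.0.0.5": "operator-services",
}

def filter_packet(packet: dict) -> bool:
    """Gating decision: pass only packets matching policy rules."""
    return packet["dst_port"] in POLICY_RULES["allowed_ports"]

def classify_uplink(packet: dict) -> str:
    """Uplink classifier: route a flow to a data network, defaulting to internet."""
    return UPLINK_CLASSES.get(packet["dst_ip"], "internet")

pkt = {"dst_ip": "8.8.8.8", "dst_port": 443}
```

A real UPF applies far richer match criteria (SDF templates, QoS flow mapping, rate enforcement); the sketch only shows the gate-then-route shape of the data path.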
In at least one embodiment, DN 2106 may represent various network operator services, Internet access, or third-party services.

[0283] In at least one embodiment, AUSF 2114 may store data for authentication of UE 2102 and handle authentication-related functionality. In at least one embodiment, AUSF 2114 may facilitate a common authentication framework for various access types.

[0284] In at least one embodiment, AMF 2112 may be responsible for registration management (e.g., for registering UE 2102, etc.), connection management, reachability management, mobility management, and lawful interception of AMF-related events, and access authentication and authorization. In at least one embodiment, AMF 2112 may provide transport for SM messages for SMF 2118, and act as a transparent proxy for routing SM messages. In at least one embodiment, AMF 2112 may also provide transport for short message service (SMS) messages between UE 2102 and an SMS function (SMSF) (not shown by Figure 21). In at least one embodiment, AMF 2112 may act as Security Anchor Function (SEA), which may include interaction with AUSF 2114 and UE 2102 and receipt of an intermediate key that was established as a result of UE 2102 authentication process. In at least one embodiment, where USIM-based authentication is used, AMF 2112 may retrieve security material from AUSF 2114. In at least one embodiment, AMF 2112 may also include a Security Context Management (SCM) function, which receives a key from SEA that it uses to derive access-network specific keys. In at least one embodiment, furthermore, AMF 2112 may be a termination point of RAN CP interface (N2 reference point), a termination point of NAS (N1) signaling, and perform NAS ciphering and integrity protection.

[0285] In at least one embodiment, AMF 2112 may also support NAS signaling with a UE 2102 over an N3 interworking-function (IWF) interface. In at least one embodiment, N3IWF may be used to provide access to untrusted entities.
In at least one embodiment, N3IWF may be a termination point for N2 and N3 interfaces for control plane and user plane, respectively, and as such, may handle N2 signaling from SMF and AMF for PDU sessions and QoS, encapsulate/de-encapsulate packets for IPSec and N3 tunneling, mark N3 user-plane packets in uplink, and enforce QoS corresponding to N3 packet marking, taking into account QoS requirements associated with such marking received over N2. In at least one embodiment, N3IWF may also relay uplink and downlink control-plane NAS (N1) signaling between UE 2102 and AMF 2112, and relay uplink and downlink user-plane packets between UE 2102 and UPF 2104. In at least one embodiment, N3IWF also provides mechanisms for IPsec tunnel establishment with UE 2102.[0286] In at least one embodiment, SMF 2118 may be responsible for session management (e.g., session establishment, modification and release, including tunnel maintenance between UPF and AN node); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuration of traffic steering at UPF to route traffic to a proper destination; termination of interfaces towards policy control functions; control part of policy enforcement and QoS; lawful intercept (for SM events and interface to LI System); termination of SM parts of NAS messages; downlink data notification; initiation of AN specific SM information, sent via AMF over N2 to AN; and determination of SSC mode of a session.
In at least one embodiment, SMF 2118 may include the following roaming functionality: handling local enforcement to apply QoS SLAs (VPLMN); charging data collection and charging interface (VPLMN); lawful intercept (in VPLMN for SM events and interface to LI System); and support for interaction with an external DN for transport of signaling for PDU session authorization/authentication by the external DN.[0287] In at least one embodiment, NEF 2116 may provide means for securely exposing services and capabilities provided by 3GPP network functions for third parties, internal exposure/re-exposure, Application Functions (e.g., AF 2126), edge computing or fog computing systems, etc. In at least one embodiment, NEF 2116 may authenticate, authorize, and/or throttle AFs. In at least one embodiment, NEF 2116 may also translate information exchanged with AF 2126 and information exchanged with internal network functions. In at least one embodiment, NEF 2116 may translate between an AF-Service-Identifier and internal 5GC information. In at least one embodiment, NEF 2116 may also receive information from other network functions (NFs) based on exposed capabilities of other network functions. In at least one embodiment, this information may be stored at NEF 2116 as structured data, or at a data storage NF using standardized interfaces. In at least one embodiment, stored information can then be re-exposed by NEF 2116 to other NFs and AFs, and/or used for other purposes such as analytics.[0288] In at least one embodiment, NRF 2120 may support service discovery functions, receive NF Discovery Requests from NF instances, and provide information of discovered NF instances to NF instances.
In at least one embodiment, NRF 2120 also maintains information of available NF instances and their supported services.[0289] In at least one embodiment, PCF 2122 may provide policy rules to control plane function(s) to enforce them, and may also support a unified policy framework to govern network behavior. In at least one embodiment, PCF 2122 may also implement a front end (FE) to access subscription information relevant for policy decisions in a UDR of UDM 2124.[0290] In at least one embodiment, UDM 2124 may handle subscription-related information to support network entities' handling of communication sessions, and may store subscription data of UE 2102. In at least one embodiment, UDM 2124 may include two parts, an application FE and a User Data Repository (UDR). In at least one embodiment, UDM may include a UDM FE, which is in charge of processing of credentials, location management, subscription management and so on. In at least one embodiment, several different front ends may serve a same user in different transactions. In at least one embodiment, UDM-FE accesses subscription information stored in a UDR and performs authentication credential processing; user identification handling; access authorization; registration/mobility management; and subscription management. In at least one embodiment, UDR may interact with PCF 2122. In at least one embodiment, UDM 2124 may also support SMS management, wherein an SMS-FE implements similar application logic as discussed previously.[0291] In at least one embodiment, AF 2126 may provide application influence on traffic routing, access to a Network Capability Exposure (NCE), and interact with a policy framework for policy control. In at least one embodiment, NCE may be a mechanism that allows a 5GC and AF 2126 to provide information to each other via NEF 2116, which may be used for edge computing implementations.
In at least one embodiment, network operator and third party services may be hosted close to UE 2102's access point of attachment to achieve an efficient service delivery through reduced end-to-end latency and load on a transport network. In at least one embodiment, for edge computing implementations, 5GC may select a UPF 2104 close to UE 2102 and execute traffic steering from UPF 2104 to DN 2106 via an N6 interface. In at least one embodiment, this may be based on UE subscription data, UE location, and information provided by AF 2126. In at least one embodiment, AF 2126 may influence UPF (re)selection and traffic routing. In at least one embodiment, based on operator deployment, when AF 2126 is considered to be a trusted entity, a network operator may permit AF 2126 to interact directly with relevant NFs.[0292] In at least one embodiment, CN 2110 may include an SMSF, which may be responsible for SMS subscription checking and verification, and relaying SM messages to/from UE 2102 to/from other entities, such as an SMS-GMSC/IWMSC/SMS-router.
In at least one embodiment, SMSF may also interact with AMF 2112 and UDM 2124 for a notification procedure that UE 2102 is available for SMS transfer (e.g., set a UE not reachable flag, and notify UDM 2124 when UE 2102 is available for SMS).[0293] In at least one embodiment, system 2100 may include the following service-based interfaces: Namf: Service-based interface exhibited by AMF; Nsmf: Service-based interface exhibited by SMF; Nnef: Service-based interface exhibited by NEF; Npcf: Service-based interface exhibited by PCF; Nudm: Service-based interface exhibited by UDM; Naf: Service-based interface exhibited by AF; Nnrf: Service-based interface exhibited by NRF; and Nausf: Service-based interface exhibited by AUSF.[0294] In at least one embodiment, system 2100 may include the following reference points: N1: Reference point between UE and AMF; N2: Reference point between (R)AN and AMF; N3: Reference point between (R)AN and UPF; N4: Reference point between SMF and UPF; and N6: Reference point between UPF and a Data Network. In at least one embodiment, there may be many more reference points and/or service-based interfaces between NF services in NFs; however, these interfaces and reference points have been omitted for clarity. In at least one embodiment, an N5 reference point may be between a PCF and AF; an N7 reference point may be between PCF and SMF; an N11 reference point between AMF and SMF; etc.
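The reference points enumerated above pair each interface name with two network functions. As a minimal illustrative sketch (the table and function names here are our own, not part of any 3GPP library), this pairing can be captured in a small lookup:

```python
# Illustrative mapping of the 5G reference points listed above to their
# endpoint pairs. Names follow the text; this is not an actual 3GPP API.
REFERENCE_POINTS = {
    "N1": ("UE", "AMF"),
    "N2": ("(R)AN", "AMF"),
    "N3": ("(R)AN", "UPF"),
    "N4": ("SMF", "UPF"),
    "N5": ("PCF", "AF"),
    "N6": ("UPF", "DN"),
    "N7": ("PCF", "SMF"),
    "N11": ("AMF", "SMF"),
}

def endpoints(ref_point: str) -> tuple:
    """Return the pair of network functions joined by a reference point."""
    return REFERENCE_POINTS[ref_point]

print(endpoints("N4"))  # ('SMF', 'UPF')
```

A lookup of this shape is how, for instance, an N4 association is understood to connect the control-plane SMF to the user-plane UPF.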
In at least one embodiment, CN 2110 may include an Nx interface, which is an inter-CN interface between MME and AMF 2112 in order to enable interworking between CN 2110 and CN 7221.[0295] In at least one embodiment, system 2100 may include multiple RAN nodes (such as (R)AN node 2108) wherein an Xn interface is defined between two or more (R)AN nodes 2108 (e.g., gNBs) connecting to CN 2110, between a (R)AN node 2108 (e.g., gNB) connecting to CN 2110 and an eNB (e.g., a macro RAN node), and/or between two eNBs connecting to CN 2110.[0296] In at least one embodiment, an Xn interface may include an Xn user plane (Xn-U) interface and an Xn control plane (Xn-C) interface. In at least one embodiment, Xn-U may provide non-guaranteed delivery of user plane PDUs and support/provide data forwarding and flow control functionality. In at least one embodiment, Xn-C may provide management and error handling functionality, functionality to manage an Xn-C interface, and mobility support for UE 2102 in a connected mode (e.g., CM-CONNECTED), including functionality to manage UE mobility for connected mode between one or more (R)AN nodes 2108. In at least one embodiment, mobility support may include context transfer from an old (source) serving (R)AN node 2108 to a new (target) serving (R)AN node 2108, and control of user plane tunnels between an old (source) serving (R)AN node 2108 and a new (target) serving (R)AN node 2108.[0297] In at least one embodiment, a protocol stack of an Xn-U may include a transport network layer built on an Internet Protocol (IP) transport layer, and a GTP-U layer on top of a UDP and/or IP layer(s) to carry user plane PDUs. In at least one embodiment, an Xn-C protocol stack may include an application layer signaling protocol (referred to as Xn Application Protocol (Xn-AP)) and a transport network layer that is built on an SCTP layer. In at least one embodiment, SCTP layer may be on top of an IP layer.
In at least one embodiment, SCTP layer provides guaranteed delivery of application layer messages. In at least one embodiment, in a transport IP layer, point-to-point transmission is used to deliver signaling PDUs. In at least one embodiment, an Xn-U protocol stack and/or an Xn-C protocol stack may be same or similar to a user plane and/or control plane protocol stack(s) shown and described herein.[0298] Figure 22 is an illustration of a control plane protocol stack in accordance with some embodiments. In at least one embodiment, a control plane 2200 is shown as a communications protocol stack between UE 2002 (or alternatively, UE 2004), RAN 2016, and MME(s) 2028.[0299] In at least one embodiment, PHY layer 2202 may transmit or receive information used by MAC layer 2204 over one or more air interfaces. In at least one embodiment, PHY layer 2202 may further perform link adaptation or adaptive modulation and coding (AMC), power control, cell search (e.g., for initial synchronization and handover purposes), and other measurements used by higher layers, such as an RRC layer 2210.
In at least one embodiment, PHY layer 2202 may still further perform error detection on transport channels, forward error correction (FEC) coding/de-coding of transport channels, modulation/demodulation of physical channels, interleaving, rate matching, mapping onto physical channels, and Multiple Input Multiple Output (MIMO) antenna processing.[0300] In at least one embodiment, MAC layer 2204 may perform mapping between logical channels and transport channels, multiplexing of MAC service data units (SDUs) from one or more logical channels onto transport blocks (TBs) to be delivered to PHY via transport channels, de-multiplexing MAC SDUs to one or more logical channels from transport blocks (TBs) delivered from PHY via transport channels, multiplexing MAC SDUs onto TBs, scheduling information reporting, error correction through hybrid automatic repeat request (HARQ), and logical channel prioritization.[0301] In at least one embodiment, RLC layer 2206 may operate in a plurality of modes of operation, including: Transparent Mode (TM), Unacknowledged Mode (UM), and Acknowledged Mode (AM). In at least one embodiment, RLC layer 2206 may execute transfer of upper layer protocol data units (PDUs), error correction through automatic repeat request (ARQ) for AM data transfers, and concatenation, segmentation and reassembly of RLC SDUs for UM and AM data transfers.
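The MAC multiplexing and de-multiplexing described above can be sketched in a few lines. This is a simplified illustration only: the one-byte (channel id, length) subheader used here is a made-up format for clarity, not the actual 3GPP MAC subheader layout.

```python
# Simplified sketch of MAC multiplexing: several MAC SDUs from logical
# channels are packed into one transport block, then recovered on receipt.
def multiplex(sdus):
    """Pack (logical_channel_id, payload) SDUs into a single transport block."""
    tb = bytearray()
    for lcid, payload in sdus:
        tb.append(lcid)
        tb.append(len(payload))      # assumes payloads shorter than 256 bytes
        tb.extend(payload)
    return bytes(tb)

def demultiplex(tb):
    """Recover (logical_channel_id, payload) SDUs from a transport block."""
    sdus, i = [], 0
    while i < len(tb):
        lcid, length = tb[i], tb[i + 1]
        sdus.append((lcid, tb[i + 2:i + 2 + length]))
        i += 2 + length
    return sdus

sdus = [(1, b"rrc-signaling"), (3, b"user-data")]
tb = multiplex(sdus)
assert demultiplex(tb) == sdus   # round-trips losslessly
```

The round-trip assertion reflects the layer's contract: de-multiplexing a transport block yields exactly the SDUs, per logical channel, that were multiplexed into it.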
In at least one embodiment, RLC layer 2206 may also execute re-segmentation of RLC data PDUs for AM data transfers, reorder RLC data PDUs for UM and AM data transfers, detect duplicate data for UM and AM data transfers, discard RLC SDUs for UM and AM data transfers, detect protocol errors for AM data transfers, and perform RLC re-establishment.[0302] In at least one embodiment, PDCP layer 2208 may execute header compression and decompression of IP data, maintain PDCP Sequence Numbers (SNs), perform in-sequence delivery of upper layer PDUs at re-establishment of lower layers, eliminate duplicates of lower layer SDUs at re-establishment of lower layers for radio bearers mapped on RLC AM, cipher and decipher control plane data, perform integrity protection and integrity verification of control plane data, control timer-based discard of data, and perform security operations (e.g., ciphering, deciphering, integrity protection, integrity verification, etc.).[0303] In at least one embodiment, main services and functions of an RRC layer 2210 may include broadcast of system information (e.g., included in Master Information Blocks (MIBs) or System Information Blocks (SIBs) related to a non-access stratum (NAS)), broadcast of system information related to an access stratum (AS), paging, establishment, maintenance and release of an RRC connection between a UE and E-UTRAN (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), establishment, configuration, maintenance and release of point-to-point radio bearers, security functions including key management, inter radio access technology (RAT) mobility, and measurement configuration for UE measurement reporting.
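Two of the PDCP behaviors named above, duplicate elimination and in-sequence delivery driven by sequence numbers, can be illustrated with a minimal receiver sketch. This is our own simplification: a real PDCP entity also handles SN wrap-around, reordering timers, and security, none of which are modeled here.

```python
# Minimal sketch of PDCP-style receive processing: maintain sequence
# numbers, drop duplicates, and deliver upper-layer PDUs in sequence.
class PdcpReceiver:
    def __init__(self):
        self.next_sn = 0
        self.buffer = {}      # out-of-order PDUs, keyed by sequence number
        self.delivered = []   # PDUs handed up to the upper layer, in order

    def receive(self, sn, pdu):
        if sn < self.next_sn or sn in self.buffer:
            return            # duplicate: eliminate it
        self.buffer[sn] = pdu
        while self.next_sn in self.buffer:   # in-sequence delivery
            self.delivered.append(self.buffer.pop(self.next_sn))
            self.next_sn += 1

rx = PdcpReceiver()
for sn, pdu in [(1, "b"), (0, "a"), (1, "b"), (2, "c")]:  # reordered + duplicate
    rx.receive(sn, pdu)
assert rx.delivered == ["a", "b", "c"]
```

Note how PDU 1 arrives first but is buffered until PDU 0 fills the gap, and the repeated PDU 1 is silently discarded.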
In at least one embodiment, said MIBs and SIBs may comprise one or more information elements (IEs), which may each comprise individual data fields or data structures.[0304] In at least one embodiment, UE 2002 and RAN 2016 may utilize a Uu interface (e.g., an LTE-Uu interface) to exchange control plane data via a protocol stack comprising PHY layer 2202, MAC layer 2204, RLC layer 2206, PDCP layer 2208, and RRC layer 2210.[0305] In at least one embodiment, non-access stratum (NAS) protocols (NAS protocols 2212) form a highest stratum of a control plane between UE 2002 and MME(s) 2028. In at least one embodiment, NAS protocols 2212 support mobility of UE 2002 and session management procedures to establish and maintain IP connectivity between UE 2002 and P-GW 2034.[0306] In at least one embodiment, S1 Application Protocol (S1-AP) layer (S1-AP layer 2222) may support functions of an S1 interface and comprise Elementary Procedures (EPs). In at least one embodiment, an EP is a unit of interaction between RAN 2016 and CN 2028. In at least one embodiment, S1-AP layer services may comprise two groups: UE-associated services and non UE-associated services. In at least one embodiment, these services perform functions including, but not limited to: E-UTRAN Radio Access Bearer (E-RAB) management, UE capability indication, mobility, NAS signaling transport, RAN Information Management (RIM), and configuration transfer.[0307] In at least one embodiment, Stream Control Transmission Protocol (SCTP) layer (alternatively referred to as a stream control transmission protocol/internet protocol (SCTP/IP) layer) (SCTP layer 2220) may ensure reliable delivery of signaling messages between RAN 2016 and MME(s) 2028 based, in part, on an IP protocol, supported by an IP layer 2218.
In at least one embodiment, L2 layer 2216 and an L1 layer 2214 may refer to communication links (e.g., wired or wireless) used by a RAN node and MME to exchange information.[0308] In at least one embodiment, RAN 2016 and MME(s) 2028 may utilize an S1-MME interface to exchange control plane data via a protocol stack comprising an L1 layer 2214, L2 layer 2216, IP layer 2218, SCTP layer 2220, and S1-AP layer 2222.[0309] Figure 23 is an illustration of a user plane protocol stack in accordance with at least one embodiment. In at least one embodiment, a user plane 2300 is shown as a communications protocol stack between a UE 2002, RAN 2016, S-GW 2030, and P-GW 2034. In at least one embodiment, user plane 2300 may utilize the same protocol layers as control plane 2200. In at least one embodiment, UE 2002 and RAN 2016 may utilize a Uu interface (e.g., an LTE-Uu interface) to exchange user plane data via a protocol stack comprising PHY layer 2202, MAC layer 2204, RLC layer 2206, and PDCP layer 2208.[0310] In at least one embodiment, General Packet Radio Service (GPRS) Tunneling Protocol for a user plane (GTP-U) layer (GTP-U layer 2304) may be used for carrying user data within a GPRS core network and between a radio access network and a core network. In at least one embodiment, user data transported can be packets in any of IPv4, IPv6, or PPP formats. In at least one embodiment, UDP and IP security (UDP/IP) layer (UDP/IP layer 2302) may provide checksums for data integrity, port numbers for addressing different functions at a source and destination, and encryption and authentication on selected data flows. In at least one embodiment, RAN 2016 and S-GW 2030 may utilize an S1-U interface to exchange user plane data via a protocol stack comprising L1 layer 2214, L2 layer 2216, UDP/IP layer 2302, and GTP-U layer 2304.
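The GTP-U carriage of user data described above amounts to wrapping each user PDU in a tunnel header before it rides over UDP/IP. The sketch below builds only the basic 8-byte GTPv1-U header (flags, G-PDU message type, payload length, tunnel endpoint identifier); extension headers, sequence numbers, and the UDP/IP layers themselves are deliberately omitted, so treat it as an illustration rather than a wire-complete implementation.

```python
import struct

# Sketch of GTP-U encapsulation on the user plane: a user PDU is carried
# inside a basic GTP-U header, which in a real stack then rides over UDP/IP.
GTPU_GPDU = 0xFF          # message type for a G-PDU (user data)

def gtpu_encapsulate(teid, user_pdu):
    """Prepend a basic 8-byte GTP-U header (flags 0x30 = GTPv1, PT=1)."""
    header = struct.pack("!BBHI", 0x30, GTPU_GPDU, len(user_pdu), teid)
    return header + user_pdu

def gtpu_decapsulate(packet):
    """Strip the header, returning (tunnel endpoint id, user PDU)."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", packet[:8])
    assert msg_type == GTPU_GPDU
    return teid, packet[8:8 + length]

pkt = gtpu_encapsulate(0x1234, b"ip-payload")
assert gtpu_decapsulate(pkt) == (0x1234, b"ip-payload")
```

The tunnel endpoint identifier (TEID) is what lets a single S1-U association between RAN and S-GW carry many users' flows: each direction of each bearer is addressed by its own TEID.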
In at least one embodiment, S-GW 2030 and P-GW 2034 may utilize an S5/S8a interface to exchange user plane data via a protocol stack comprising L1 layer 2214, L2 layer 2216, UDP/IP layer 2302, and GTP-U layer 2304. In at least one embodiment, as discussed above with respect to Figure 22, NAS protocols support mobility of UE 2002 and session management procedures to establish and maintain IP connectivity between UE 2002 and P-GW 2034.[0311] Figure 24 illustrates components 2400 of a core network in accordance with at least one embodiment. In at least one embodiment, components of CN 2038 may be implemented in one physical node or separate physical nodes including components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium). In at least one embodiment, Network Functions Virtualization (NFV) is utilized to virtualize any or all of the above described network node functions via executable instructions stored in one or more computer readable storage mediums (described in further detail below). In at least one embodiment, a logical instantiation of CN 2038 may be referred to as a network slice 2402 (e.g., network slice 2402 is shown to include HSS 2032, MME(s) 2028, and S-GW 2030). In at least one embodiment, a logical instantiation of a portion of CN 2038 may be referred to as a network sub-slice 2404 (e.g., network sub-slice 2404 is shown to include P-GW 2034 and PCRF 2036).[0312] In at least one embodiment, NFV architectures and infrastructures may be used to virtualize one or more network functions, alternatively performed by proprietary hardware, onto physical resources comprising a combination of industry-standard server hardware, storage hardware, or switches.
In at least one embodiment, NFV systems can be used to execute virtual or reconfigurable implementations of one or more EPC components/functions.[0313] Figure 25 is a block diagram illustrating components, according to at least one embodiment, of a system 2500 to support network function virtualization (NFV). In at least one embodiment, system 2500 is illustrated as including a virtualized infrastructure manager (shown as VIM 2502), a network function virtualization infrastructure (shown as NFVI 2504), a VNF manager (shown as VNFM 2506), virtualized network functions (shown as VNF 2508), an element manager (shown as EM 2510), an NFV Orchestrator (shown as NFVO 2512), and a network manager (shown as NM 2514).[0314] In at least one embodiment, VIM 2502 manages resources of NFVI 2504. In at least one embodiment, NFVI 2504 can include physical or virtual resources and applications (including hypervisors) used to execute system 2500. In at least one embodiment, VIM 2502 may manage a life cycle of virtual resources with NFVI 2504 (e.g., creation, maintenance, and tear down of virtual machines (VMs) associated with one or more physical resources), track VM instances, track performance, fault and security of VM instances and associated physical resources, and expose VM instances and associated physical resources to other management systems.[0315] In at least one embodiment, VNFM 2506 may manage VNF 2508. In at least one embodiment, VNF 2508 may be used to execute EPC components/functions. In at least one embodiment, VNFM 2506 may manage a life cycle of VNF 2508 and track performance, fault and security of virtual aspects of VNF 2508. In at least one embodiment, EM 2510 may track performance, fault and security of functional aspects of VNF 2508. In at least one embodiment, tracking data from VNFM 2506 and EM 2510 may comprise performance measurement (PM) data used by VIM 2502 or NFVI 2504.
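The VIM role described above, creating and tearing down VM instances on NFVI resources and exposing their state to other management systems, can be sketched as a small state-tracking class. All class and method names here are hypothetical, chosen only to mirror the prose; no ETSI NFV or vendor API is implied.

```python
# Illustrative sketch of a VIM-like life-cycle manager: create and tear
# down VM instances, and expose the tracked instances to other systems.
class Vim:
    def __init__(self):
        self.vms = {}                   # vm_id -> state

    def create_vm(self, vm_id):
        self.vms[vm_id] = "running"     # creation on NFVI resources

    def tear_down_vm(self, vm_id):
        self.vms.pop(vm_id, None)       # tear down; ignore unknown ids

    def instances(self):
        return dict(self.vms)           # snapshot exposed to management systems

vim = Vim()
vim.create_vm("vnf-1")
vim.create_vm("vnf-2")
vim.tear_down_vm("vnf-1")
assert vim.instances() == {"vnf-2": "running"}
```

In a full design, the performance, fault, and security tracking mentioned in the text would hang off this same per-instance record.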
In at least one embodiment, both VNFM 2506 and EM 2510 can scale up/down a quantity of VNFs of system 2500.[0316] In at least one embodiment, NFVO 2512 may coordinate, authorize, release and engage resources of NFVI 2504 in order to provide a requested service (e.g., to execute an EPC function, component, or slice). In at least one embodiment, NM 2514 may provide a package of end-user functions with responsibility for management of a network, which may include network elements with VNFs, non-virtualized network functions, or both (management of VNFs may occur via an EM 2510).

Computer-Based Systems

[0317] The following figures set forth, without limitation, exemplary computer-based systems that can be used to implement at least one embodiment.[0318] Figure 26 illustrates a processing system 2600, in accordance with at least one embodiment. In at least one embodiment, processing system 2600 includes one or more processors 2602 and one or more graphics processors 2608, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 2602 or processor cores 2607. In at least one embodiment, processing system 2600 is a processing platform incorporated within a system-on-a-chip ("SoC") integrated circuit for use in mobile, handheld, or embedded devices.[0319] In at least one embodiment, processing system 2600 can include, or be incorporated within a server-based gaming platform, a game console, a media console, a mobile gaming console, a handheld game console, or an online game console. In at least one embodiment, processing system 2600 is a mobile phone, smart phone, tablet computing device or mobile Internet device. In at least one embodiment, processing system 2600 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device.
In at least one embodiment, processing system 2600 is a television or set top box device having one or more processors 2602 and a graphical interface generated by one or more graphics processors 2608.[0320] In at least one embodiment, one or more processors 2602 each include one or more processor cores 2607 to process instructions which, when executed, perform operations for system and user software. In at least one embodiment, each of one or more processor cores 2607 is configured to process a specific instruction set 2609. In at least one embodiment, instruction set 2609 may facilitate Complex Instruction Set Computing ("CISC"), Reduced Instruction Set Computing ("RISC"), or computing via a Very Long Instruction Word ("VLIW"). In at least one embodiment, processor cores 2607 may each process a different instruction set 2609, which may include instructions to facilitate emulation of other instruction sets. In at least one embodiment, processor core 2607 may also include other processing devices, such as a digital signal processor ("DSP").[0321] In at least one embodiment, processor 2602 includes cache memory ("cache") 2604. In at least one embodiment, processor 2602 can have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory is shared among various components of processor 2602. In at least one embodiment, processor 2602 also uses an external cache (e.g., a Level 3 ("L3") cache or Last Level Cache ("LLC")) (not shown), which may be shared among processor cores 2607 using known cache coherency techniques. In at least one embodiment, register file 2606 is additionally included in processor 2602, which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register).
In at least one embodiment, register file 2606 may include general-purpose registers or other registers.[0322] In at least one embodiment, one or more processor(s) 2602 are coupled with one or more interface bus(es) 2610 to transmit communication signals such as address, data, or control signals between processor 2602 and other components in processing system 2600. In at least one embodiment, interface bus 2610 can be a processor bus, such as a version of a Direct Media Interface ("DMI") bus. In at least one embodiment, interface bus 2610 is not limited to a DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., "PCI," PCI Express ("PCIe")), memory buses, or other types of interface buses. In at least one embodiment, processor(s) 2602 include an integrated memory controller 2616 and a platform controller hub 2630. In at least one embodiment, memory controller 2616 facilitates communication between a memory device and other components of processing system 2600, while platform controller hub ("PCH") 2630 provides connections to Input/Output ("I/O") devices via a local I/O bus.[0323] In at least one embodiment, memory device 2620 can be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as processor memory. In at least one embodiment, memory device 2620 can operate as system memory for processing system 2600, to store data 2622 and instructions 2621 for use when one or more processors 2602 execute an application or process. In at least one embodiment, memory controller 2616 also couples with an optional external graphics processor 2612, which may communicate with one or more graphics processors 2608 in processors 2602 to perform graphics and media operations. In at least one embodiment, a display device 2611 can connect to processor(s) 2602.
In at least one embodiment, display device 2611 can include one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In at least one embodiment, display device 2611 can include a head mounted display ("HMD") such as a stereoscopic display device for use in virtual reality ("VR") applications or augmented reality ("AR") applications.[0324] In at least one embodiment, platform controller hub 2630 enables peripherals to connect to memory device 2620 and processor 2602 via a high-speed I/O bus. In at least one embodiment, I/O peripherals include, but are not limited to, an audio controller 2646, a network controller 2634, a firmware interface 2628, a wireless transceiver 2626, touch sensors 2625, and a data storage device 2624 (e.g., hard disk drive, flash memory, etc.). In at least one embodiment, data storage device 2624 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as PCI or PCIe. In at least one embodiment, touch sensors 2625 can include touch screen sensors, pressure sensors, or fingerprint sensors. In at least one embodiment, wireless transceiver 2626 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long Term Evolution ("LTE") transceiver. In at least one embodiment, firmware interface 2628 enables communication with system firmware, and can be, in at least one embodiment, a unified extensible firmware interface ("UEFI"). In at least one embodiment, network controller 2634 can enable a network connection to a wired network. In at least one embodiment, a high-performance network controller (not shown) couples with interface bus 2610. In at least one embodiment, audio controller 2646 is a multi-channel high definition audio controller.
In at least one embodiment, processing system 2600 includes an optional legacy I/O controller 2640 for coupling legacy (e.g., Personal System 2 ("PS/2")) devices to processing system 2600. In at least one embodiment, platform controller hub 2630 can also connect to one or more Universal Serial Bus ("USB") controllers 2642 that connect input devices, such as keyboard and mouse 2643 combinations, a camera 2644, or other USB input devices.[0325] In at least one embodiment, an instance of memory controller 2616 and platform controller hub 2630 may be integrated into a discrete external graphics processor, such as external graphics processor 2612. In at least one embodiment, platform controller hub 2630 and/or memory controller 2616 may be external to one or more processor(s) 2602. In at least one embodiment, processing system 2600 can include an external memory controller 2616 and platform controller hub 2630, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with processor(s) 2602.[0326] Figure 27 illustrates a computer system 2700, in accordance with at least one embodiment. In at least one embodiment, computer system 2700 may be a system with interconnected devices and components, an SoC, or some combination. In at least one embodiment, computer system 2700 is formed with a processor 2702 that may include execution units to execute an instruction. In at least one embodiment, computer system 2700 may include, without limitation, a component, such as processor 2702, to employ execution units including logic to perform algorithms for processing data.
In at least one embodiment, computer system 2700 may include processors, such as PENTIUM® Processor family, Xeon™, Itanium®, XScale™ and/or StrongARM™, Intel® Core™, or Intel® Nervana™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In at least one embodiment, computer system 2700 may execute a version of the WINDOWS operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (UNIX and Linux in at least one embodiment), embedded software, and/or graphical user interfaces, may also be used.[0327] In at least one embodiment, computer system 2700 may be used in other devices such as handheld devices and embedded applications. In at least one embodiment, handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants ("PDAs"), and handheld PCs. In at least one embodiment, embedded applications may include a microcontroller, a digital signal processor (DSP), an SoC, network computers ("NetPCs"), set-top boxes, network hubs, wide area network ("WAN") switches, or any other system that may perform one or more instructions.[0328] In at least one embodiment, computer system 2700 may include, without limitation, processor 2702 that may include, without limitation, one or more execution units 2708 that may be configured to execute a Compute Unified Device Architecture ("CUDA") (CUDA® is developed by NVIDIA Corporation of Santa Clara, CA) program. In at least one embodiment, a CUDA program is at least a portion of a software application written in a CUDA programming language. In at least one embodiment, computer system 2700 is a single processor desktop or server system. In at least one embodiment, computer system 2700 may be a multiprocessor system.
In at least one embodiment, processor 2702 may include, without limitation, a CISC microprocessor, a RISC microprocessor, a VLIW microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor. In at least one embodiment, processor 2702 may be coupled to a processor bus 2710 that may transmit data signals between processor 2702 and other components in computer system 2700.[0329] In at least one embodiment, processor 2702 may include, without limitation, a Level 1 ("L1") internal cache memory ("cache") 2704. In at least one embodiment, processor 2702 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory may reside external to processor 2702. In at least one embodiment, processor 2702 may also include a combination of both internal and external caches. In at least one embodiment, a register file 2706 may store different types of data in various registers including, without limitation, integer registers, floating point registers, status registers, and an instruction pointer register.[0330] In at least one embodiment, execution unit 2708, including, without limitation, logic to perform integer and floating point operations, also resides in processor 2702. Processor 2702 may also include a microcode ("ucode") read only memory ("ROM") that stores microcode for certain macro instructions. In at least one embodiment, execution unit 2708 may include logic to handle a packed instruction set 2709. In at least one embodiment, by including packed instruction set 2709 in an instruction set of a general-purpose processor 2702, along with associated circuitry to execute instructions, operations used by many multimedia applications may be performed using packed data in a general-purpose processor 2702.
In at least one embodiment, many multimedia applications may be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data, which may eliminate a need to transfer smaller units of data across a processor's data bus to perform one or more operations one data element at a time.[0331] In at least one embodiment, execution unit 2708 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, computer system 2700 may include, without limitation, a memory 2720. In at least one embodiment, memory 2720 may be implemented as a DRAM device, an SRAM device, a flash memory device, or other memory device. Memory 2720 may store instruction(s) 2719 and/or data 2721 represented by data signals that may be executed by processor 2702.[0332] In at least one embodiment, a system logic chip may be coupled to processor bus 2710 and memory 2720. In at least one embodiment, a system logic chip may include, without limitation, a memory controller hub ("MCH") 2716, and processor 2702 may communicate with MCH 2716 via processor bus 2710. In at least one embodiment, MCH 2716 may provide a high bandwidth memory path 2718 to memory 2720 for instruction and data storage and for storage of graphics commands, data, and textures. In at least one embodiment, MCH 2716 may direct data signals between processor 2702, memory 2720, and other components in computer system 2700 and bridge data signals between processor bus 2710, memory 2720, and a system I/O 2722. In at least one embodiment, a system logic chip may provide a graphics port for coupling to a graphics controller.
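The packed-data execution described in [0330]-[0331] — one wide register holding several narrow elements, operated on by a single instruction — can be sketched with a toy model. The lane count, lane width, and wraparound semantics below are illustrative only, not a definition of packed instruction set 2709:

```python
# Toy model of a packed (SIMD) operation: one 64-bit value holds four
# 16-bit lanes, and a single "instruction" adds all lanes at once instead
# of handling one data element at a time. Widths are illustrative only.

LANES, WIDTH = 4, 16
MASK = (1 << WIDTH) - 1

def pack(lanes):
    """Pack four 16-bit values into one 64-bit integer."""
    out = 0
    for i, v in enumerate(lanes):
        out |= (v & MASK) << (i * WIDTH)
    return out

def unpack(reg):
    """Recover the individual 16-bit lanes from a packed register."""
    return [(reg >> (i * WIDTH)) & MASK for i in range(LANES)]

def packed_add(a, b):
    """Lane-wise add with per-lane wraparound (no carry between lanes)."""
    return pack([(x + y) & MASK for x, y in zip(unpack(a), unpack(b))])

r = packed_add(pack([1, 2, 3, 0xFFFF]), pack([10, 20, 30, 1]))
print(unpack(r))  # [11, 22, 33, 0]: four results from one packed operation
```

The last lane shows why lanes must be isolated: 0xFFFF + 1 wraps to 0 within its own 16 bits rather than carrying into a neighboring lane.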
In at least one embodiment, MCH 2716 may be coupled to memory 2720 through high bandwidth memory path 2718, and graphics/video card 2712 may be coupled to MCH 2716 through an Accelerated Graphics Port ("AGP") interconnect 2714.[0333] In at least one embodiment, computer system 2700 may use system I/O 2722, a proprietary hub interface bus, to couple MCH 2716 to an I/O controller hub ("ICH") 2730. In at least one embodiment, ICH 2730 may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, a local I/O bus may include, without limitation, a high-speed I/O bus for connecting peripherals to memory 2720, a chipset, and processor 2702. Examples may include, without limitation, an audio controller 2729, a firmware hub ("flash BIOS") 2728, a wireless transceiver 2726, a data storage 2724, a legacy I/O controller 2723 containing a user input interface 2725 and a keyboard interface, a serial expansion port 2777, such as a USB port, and a network controller 2734. Data storage 2724 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.[0334] In at least one embodiment, Figure 27 illustrates a system, which includes interconnected hardware devices or "chips." In at least one embodiment, Figure 27 may illustrate an exemplary SoC. In at least one embodiment, devices illustrated in Figure 27 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of system 2700 are interconnected using compute express link ("CXL") interconnects.[0335] Figure 28 illustrates a system 2800, in accordance with at least one embodiment. In at least one embodiment, system 2800 is an electronic device that utilizes a processor 2810.
In at least one embodiment, system 2800 may be, without limitation, a notebook, a tower server, a rack server, a blade server, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device.[0336] In at least one embodiment, system 2800 may include, without limitation, processor 2810 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. In at least one embodiment, processor 2810 is coupled using a bus or interface, such as an I2C bus, a System Management Bus ("SMBus"), a Low Pin Count ("LPC") bus, a Serial Peripheral Interface ("SPI"), a High Definition Audio ("HDA") bus, a Serial Advance Technology Attachment ("SATA") bus, a USB (versions 1, 2, 3), or a Universal Asynchronous Receiver/Transmitter ("UART") bus. In at least one embodiment, Figure 28 illustrates a system which includes interconnected hardware devices or "chips." In at least one embodiment, Figure 28 may illustrate an exemplary SoC. In at least one embodiment, devices illustrated in Figure 28 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof.
In at least one embodiment, one or more components of Figure 28 are interconnected using CXL interconnects.[0337] In at least one embodiment, Figure 28 may include a display 2824, a touch screen 2825, a touch pad 2830, a Near Field Communications unit ("NFC") 2845, a sensor hub 2840, a thermal sensor 2846, an Express Chipset ("EC") 2835, a Trusted Platform Module ("TPM") 2838, BIOS/firmware/flash memory ("BIOS, FW Flash") 2822, a DSP 2860, a Solid State Disk ("SSD") or Hard Disk Drive ("HDD") 2820, a wireless local area network unit ("WLAN") 2850, a Bluetooth unit 2852, a Wireless Wide Area Network unit ("WWAN") 2856, a Global Positioning System ("GPS") 2855, a camera ("USB 3.0 camera") 2854, such as a USB 3.0 camera, or a Low Power Double Data Rate ("LPDDR") memory unit ("LPDDR3") 2815 implementing, in at least one embodiment, the LPDDR3 standard. These components may each be implemented in any suitable manner.[0338] In at least one embodiment, other components may be communicatively coupled to processor 2810 through the components discussed above. In at least one embodiment, an accelerometer 2841, an Ambient Light Sensor ("ALS") 2842, a compass 2843, and a gyroscope 2844 may be communicatively coupled to sensor hub 2840. In at least one embodiment, a thermal sensor 2839, a fan 2837, a keyboard 2846, and a touch pad 2830 may be communicatively coupled to EC 2835. In at least one embodiment, a speaker 2863, headphones 2864, and a microphone ("mic") 2865 may be communicatively coupled to an audio unit ("audio codec and class D amp") 2864, which may in turn be communicatively coupled to DSP 2860. In at least one embodiment, audio unit 2864 may include, without limitation, an audio coder/decoder ("codec") and a class D amplifier. In at least one embodiment, a SIM card ("SIM") 2857 may be communicatively coupled to WWAN unit 2856.
In at least one embodiment, components such as WLAN unit 2850 and Bluetooth unit 2852, as well as WWAN unit 2856, may be implemented in a Next Generation Form Factor ("NGFF").[0339] Figure 29 illustrates an exemplary integrated circuit 2900, in accordance with at least one embodiment. In at least one embodiment, exemplary integrated circuit 2900 is an SoC that may be fabricated using one or more IP cores. In at least one embodiment, integrated circuit 2900 includes one or more application processor(s) 2905 (e.g., CPUs), at least one graphics processor 2910, and may additionally include an image processor 2915 and/or a video processor 2920, any of which may be a modular IP core. In at least one embodiment, integrated circuit 2900 includes peripheral or bus logic including a USB controller 2925, a UART controller 2930, an SPI/SDIO controller 2935, and an I2S/I2C controller 2940. In at least one embodiment, integrated circuit 2900 can include a display device 2945 coupled to one or more of a high-definition multimedia interface ("HDMI") controller 2950 and a mobile industry processor interface ("MIPI") display interface 2955. In at least one embodiment, storage may be provided by a flash memory subsystem 2960 including flash memory and a flash memory controller. In at least one embodiment, a memory interface may be provided via a memory controller 2965 for access to SDRAM or SRAM memory devices. In at least one embodiment, some integrated circuits additionally include an embedded security engine 2970.[0340] Figure 30 illustrates a computing system 3000, according to at least one embodiment. In at least one embodiment, computing system 3000 includes a processing subsystem 3001 having one or more processor(s) 3002 and a system memory 3004 communicating via an interconnection path that may include a memory hub 3005. In at least one embodiment, memory hub 3005 may be a separate component within a chipset component or may be integrated within one or more processor(s) 3002.
In at least one embodiment, memory hub 3005 couples with an I/O subsystem 3011 via a communication link 3006. In at least one embodiment, I/O subsystem 3011 includes an I/O hub 3007 that can enable computing system 3000 to receive input from one or more input device(s) 3008. In at least one embodiment, I/O hub 3007 can enable a display controller, which may be included in one or more processor(s) 3002, to provide outputs to one or more display device(s) 3010A. In at least one embodiment, one or more display device(s) 3010A coupled with I/O hub 3007 can include a local, internal, or embedded display device.[0341] In at least one embodiment, processing subsystem 3001 includes one or more parallel processor(s) 3012 coupled to memory hub 3005 via a bus or other communication link 3013. In at least one embodiment, communication link 3013 may be one of any number of standards-based communication link technologies or protocols, such as, but not limited to, PCIe, or may be a vendor-specific communications interface or communications fabric. In at least one embodiment, one or more parallel processor(s) 3012 form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core processor. In at least one embodiment, one or more parallel processor(s) 3012 form a graphics processing subsystem that can output pixels to one of one or more display device(s) 3010A coupled via I/O hub 3007. In at least one embodiment, one or more parallel processor(s) 3012 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 3010B.[0342] In at least one embodiment, a system storage unit 3014 can connect to I/O hub 3007 to provide a storage mechanism for computing system 3000.
In at least one embodiment, an I/O switch 3016 can be used to provide an interface mechanism to enable connections between I/O hub 3007 and other components, such as a network adapter 3018 and/or a wireless network adapter 3019 that may be integrated into a platform, and various other devices that can be added via one or more add-in device(s) 3020. In at least one embodiment, network adapter 3018 can be an Ethernet adapter or another wired network adapter. In at least one embodiment, wireless network adapter 3019 can include one or more of a Wi-Fi, Bluetooth, NFC, or other network device that includes one or more wireless radios.[0343] In at least one embodiment, computing system 3000 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and/or variations thereof, that may also be connected to I/O hub 3007. In at least one embodiment, communication paths interconnecting various components in Figure 30 may be implemented using any suitable protocols, such as PCI based protocols (e.g., PCIe), or other bus or point-to-point communication interfaces and/or protocol(s), such as NVLink high-speed interconnect, or interconnect protocols.[0344] In at least one embodiment, one or more parallel processor(s) 3012 incorporate circuitry optimized for graphics and video processing, including, in at least one embodiment, video output circuitry, and constitute a graphics processing unit ("GPU"). In at least one embodiment, one or more parallel processor(s) 3012 incorporate circuitry optimized for general purpose processing. In at least one embodiment, components of computing system 3000 may be integrated with one or more other system elements on a single integrated circuit. In at least one embodiment, one or more parallel processor(s) 3012, memory hub 3005, processor(s) 3002, and I/O hub 3007 can be integrated into an SoC integrated circuit.
In at least one embodiment, components of computing system 3000 can be integrated into a single package to form a system in package ("SIP") configuration. In at least one embodiment, at least a portion of components of computing system 3000 can be integrated into a multi-chip module ("MCM"), which can be interconnected with other multi-chip modules into a modular computing system. In at least one embodiment, I/O subsystem 3011 and display devices 3010B are omitted from computing system 3000.Processing Systems [0345] The following figures set forth, without limitation, exemplary processing systems that can be used to implement at least one embodiment.[0346] Figure 31 illustrates an accelerated processing unit ("APU") 3100, in accordance with at least one embodiment. In at least one embodiment, APU 3100 is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, APU 3100 can be configured to execute an application program, such as a CUDA program. In at least one embodiment, APU 3100 includes, without limitation, a core complex 3110, a graphics complex 3140, fabric 3160, I/O interfaces 3170, memory controllers 3180, a display controller 3192, and a multimedia engine 3194. In at least one embodiment, APU 3100 may include, without limitation, any number of core complexes 3110, any number of graphics complexes 3140, any number of display controllers 3192, and any number of multimedia engines 3194 in any combination. For explanatory purposes, multiple instances of like objects are denoted herein with reference numbers identifying an object and parenthetical numbers identifying an instance where needed.[0347] In at least one embodiment, core complex 3110 is a CPU, graphics complex 3140 is a GPU, and APU 3100 is a processing unit that integrates, without limitation, core complex 3110 and graphics complex 3140 onto a single chip. In at least one embodiment, some tasks may be assigned to core complex 3110 and other tasks may be assigned to graphics complex 3140.
In at least one embodiment, core complex 3110 is configured to execute main control software associated with APU 3100, such as an operating system. In at least one embodiment, core complex 3110 is a master processor of APU 3100, controlling and coordinating operations of other processors. In at least one embodiment, core complex 3110 issues commands that control an operation of graphics complex 3140. In at least one embodiment, core complex 3110 can be configured to execute host executable code derived from CUDA source code, and graphics complex 3140 can be configured to execute device executable code derived from CUDA source code.[0348] In at least one embodiment, core complex 3110 includes, without limitation, cores 3120(1)-3120(4) and an L3 cache 3130. In at least one embodiment, core complex 3110 may include, without limitation, any number of cores 3120 and any number and type of caches in any combination. In at least one embodiment, cores 3120 are configured to execute instructions of a particular instruction set architecture ("ISA"). In at least one embodiment, each core 3120 is a CPU core.[0349] In at least one embodiment, each core 3120 includes, without limitation, a fetch/decode unit 3122, an integer execution engine 3124, a floating point execution engine 3126, and an L2 cache 3128. In at least one embodiment, fetch/decode unit 3122 fetches instructions, decodes such instructions, generates micro-operations, and dispatches separate micro-instructions to integer execution engine 3124 and floating point execution engine 3126. In at least one embodiment, fetch/decode unit 3122 can concurrently dispatch one micro-instruction to integer execution engine 3124 and another micro-instruction to floating point execution engine 3126. In at least one embodiment, integer execution engine 3124 executes, without limitation, integer and memory operations.
In at least one embodiment, floating point engine 3126 executes, without limitation, floating point and vector operations. In at least one embodiment, fetch/decode unit 3122 dispatches micro-instructions to a single execution engine that replaces both integer execution engine 3124 and floating point execution engine 3126.[0350] In at least one embodiment, each core 3120(i), where i is an integer representing a particular instance of core 3120, may access L2 cache 3128(i) included in core 3120(i). In at least one embodiment, each core 3120 included in core complex 3110(j), where j is an integer representing a particular instance of core complex 3110, is connected to other cores 3120 included in core complex 3110(j) via L3 cache 3130(j) included in core complex 3110(j). In at least one embodiment, cores 3120 included in core complex 3110(j), where j is an integer representing a particular instance of core complex 3110, can access all of L3 cache 3130(j) included in core complex 3110(j). In at least one embodiment, L3 cache 3130 may include, without limitation, any number of slices.[0351] In at least one embodiment, graphics complex 3140 can be configured to perform compute operations in a highly-parallel fashion. In at least one embodiment, graphics complex 3140 is configured to execute graphics pipeline operations such as draw commands, pixel operations, geometric computations, and other operations associated with rendering an image to a display. In at least one embodiment, graphics complex 3140 is configured to execute operations unrelated to graphics. In at least one embodiment, graphics complex 3140 is configured to execute both operations related to graphics and operations unrelated to graphics.[0352] In at least one embodiment, graphics complex 3140 includes, without limitation, any number of compute units 3150 and an L2 cache 3142. In at least one embodiment, compute units 3150 share L2 cache 3142. In at least one embodiment, L2 cache 3142 is partitioned.
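The dual-engine dispatch described for fetch/decode unit 3122 in [0349] can be sketched as a toy router. The instruction encoding and the "f"-prefix classification rule below are purely hypothetical, used only to show micro-instructions being split between an integer engine and a floating point engine:

```python
# Toy model of a fetch/decode unit routing decoded micro-instructions to
# either an integer execution engine or a floating point execution engine.
# The tuple encoding and opcode naming convention are illustrative only.

def dispatch(micro_instructions):
    """Route each micro-instruction to the engine that would execute it."""
    queues = {"integer": [], "floating_point": []}
    for uop in micro_instructions:
        opcode = uop[0]
        # Hypothetical rule: opcodes beginning with "f" are floating point.
        engine = "floating_point" if opcode.startswith("f") else "integer"
        queues[engine].append(uop)
    return queues

stream = [("add", "r1", "r2"), ("fmul", "f0", "f1"),
          ("load", "r3"), ("fadd", "f2", "f3")]
queues = dispatch(stream)
```

In this sketch the two queues can drain concurrently, mirroring the description of one micro-instruction going to each engine in the same cycle.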
In at least one embodiment, graphics complex 3140 includes, without limitation, any number of compute units 3150 and any number (including zero) and type of caches. In at least one embodiment, graphics complex 3140 includes, without limitation, any amount of dedicated graphics hardware.[0353] In at least one embodiment, each compute unit 3150 includes, without limitation, any number of SIMD units 3152 and a shared memory 3154. In at least one embodiment, each SIMD unit 3152 implements a SIMD architecture and is configured to perform operations in parallel. In at least one embodiment, each compute unit 3150 may execute any number of thread blocks, but each thread block executes on a single compute unit 3150. In at least one embodiment, a thread block includes, without limitation, any number of threads of execution. In at least one embodiment, a workgroup is a thread block. In at least one embodiment, each SIMD unit 3152 executes a different warp. In at least one embodiment, a warp is a group of threads (e.g., 16 threads), where each thread in a warp belongs to a single thread block and is configured to process a different set of data based on a single set of instructions. In at least one embodiment, predication can be used to disable one or more threads in a warp. In at least one embodiment, a lane is a thread. In at least one embodiment, a work item is a thread. In at least one embodiment, a wavefront is a warp. In at least one embodiment, different wavefronts in a thread block may synchronize together and communicate via shared memory 3154.[0354] In at least one embodiment, fabric 3160 is a system interconnect that facilitates data and control transmissions across core complex 3110, graphics complex 3140, I/O interfaces 3170, memory controllers 3180, display controller 3192, and multimedia engine 3194.
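The warp model in [0353] — one instruction stream applied across many lanes, with predication disabling individual threads — can be sketched as follows. The warp size matches the 16-thread example in the text; the operation and predicate pattern are illustrative only:

```python
# Toy model of a warp: a group of threads executing a single instruction
# in lockstep on different data, with a predicate mask disabling lanes.
# A predicated-off lane keeps its old value; it is disabled, not skipped
# structurally, which is how SIMD hardware handles divergence.

WARP_SIZE = 16  # matches the "e.g., 16 threads" example in the text

def warp_execute(op, data, predicate):
    """Apply one operation across all lanes, honoring the predicate mask."""
    return [op(x) if active else x
            for x, active in zip(data, predicate)]

data = list(range(WARP_SIZE))           # each lane holds different data
predicate = [i % 2 == 0 for i in range(WARP_SIZE)]  # only even lanes active
result = warp_execute(lambda x: x * 10, data, predicate)
```

Every lane sees the same instruction (`x * 10` here); only the data and the per-lane active bit differ, which is the essence of the single-instruction, multiple-data execution the paragraph describes.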
In at least one embodiment, APU 3100 may include, without limitation, any amount and type of system interconnect in addition to or instead of fabric 3160 that facilitates data and control transmissions across any number and type of directly or indirectly linked components that may be internal or external to APU 3100. In at least one embodiment, I/O interfaces 3170 are representative of any number and type of interfaces (e.g., PCI, PCI-Extended ("PCI-X"), PCIe, gigabit Ethernet ("GBE"), USB, etc.). In at least one embodiment, various types of peripheral devices are coupled to I/O interfaces 3170. In at least one embodiment, peripheral devices that are coupled to I/O interfaces 3170 may include, without limitation, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth.[0355] In at least one embodiment, display controller 3192 displays images on one or more display device(s), such as a liquid crystal display ("LCD") device. In at least one embodiment, multimedia engine 3194 includes, without limitation, any amount and type of circuitry that is related to multimedia, such as a video decoder, a video encoder, an image signal processor, etc. In at least one embodiment, memory controllers 3180 facilitate data transfers between APU 3100 and a unified system memory 3190. In at least one embodiment, core complex 3110 and graphics complex 3140 share unified system memory 3190.[0356] In at least one embodiment, APU 3100 implements a memory subsystem that includes, without limitation, any amount and type of memory controllers 3180 and memory devices (e.g., shared memory 3154) that may be dedicated to one component or shared among multiple components.
In at least one embodiment, APU 3100 implements a cache subsystem that includes, without limitation, one or more cache memories (e.g., L2 caches 3128, L3 cache 3130, and L2 cache 3142) that may each be private to or shared between any number of components (e.g., cores 3120, core complex 3110, SIMD units 3152, compute units 3150, and graphics complex 3140).[0357] Figure 32 illustrates a CPU 3200, in accordance with at least one embodiment. In at least one embodiment, CPU 3200 is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, CPU 3200 can be configured to execute an application program. In at least one embodiment, CPU 3200 is configured to execute main control software, such as an operating system. In at least one embodiment, CPU 3200 issues commands that control an operation of an external GPU (not shown). In at least one embodiment, CPU 3200 can be configured to execute host executable code derived from CUDA source code, and an external GPU can be configured to execute device executable code derived from such CUDA source code. In at least one embodiment, CPU 3200 includes, without limitation, any number of core complexes 3210, fabric 3260, I/O interfaces 3270, and memory controllers 3280.[0358] In at least one embodiment, core complex 3210 includes, without limitation, cores 3220(1)-3220(4) and an L3 cache 3230. In at least one embodiment, core complex 3210 may include, without limitation, any number of cores 3220 and any number and type of caches in any combination. In at least one embodiment, cores 3220 are configured to execute instructions of a particular ISA. In at least one embodiment, each core 3220 is a CPU core.[0359] In at least one embodiment, each core 3220 includes, without limitation, a fetch/decode unit 3222, an integer execution engine 3224, a floating point execution engine 3226, and an L2 cache 3228.
In at least one embodiment, fetch/decode unit 3222 fetches instructions, decodes such instructions, generates micro-operations, and dispatches separate micro-instructions to integer execution engine 3224 and floating point execution engine 3226. In at least one embodiment, fetch/decode unit 3222 can concurrently dispatch one micro-instruction to integer execution engine 3224 and another micro-instruction to floating point execution engine 3226. In at least one embodiment, integer execution engine 3224 executes, without limitation, integer and memory operations. In at least one embodiment, floating point engine 3226 executes, without limitation, floating point and vector operations. In at least one embodiment, fetch/decode unit 3222 dispatches micro-instructions to a single execution engine that replaces both integer execution engine 3224 and floating point execution engine 3226.[0360] In at least one embodiment, each core 3220(i), where i is an integer representing a particular instance of core 3220, may access L2 cache 3228(i) included in core 3220(i). In at least one embodiment, each core 3220 included in core complex 3210(j), where j is an integer representing a particular instance of core complex 3210, is connected to other cores 3220 in core complex 3210(j) via L3 cache 3230(j) included in core complex 3210(j). In at least one embodiment, cores 3220 included in core complex 3210(j), where j is an integer representing a particular instance of core complex 3210, can access all of L3 cache 3230(j) included in core complex 3210(j). In at least one embodiment, L3 cache 3230 may include, without limitation, any number of slices.[0361] In at least one embodiment, fabric 3260 is a system interconnect that facilitates data and control transmissions across core complexes 3210(1)-3210(N) (where N is an integer greater than zero), I/O interfaces 3270, and memory controllers 3280.
In at least one embodiment, CPU 3200 may include, without limitation, any amount and type of system interconnect in addition to or instead of fabric 3260 that facilitates data and control transmissions across any number and type of directly or indirectly linked components that may be internal or external to CPU 3200. In at least one embodiment, I/O interfaces 3270 are representative of any number and type of I/O interfaces (e.g., PCI, PCI-X, PCIe, GBE, USB, etc.). In at least one embodiment, various types of peripheral devices are coupled to I/O interfaces 3270. In at least one embodiment, peripheral devices that are coupled to I/O interfaces 3270 may include, without limitation, displays, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth.[0362] In at least one embodiment, memory controllers 3280 facilitate data transfers between CPU 3200 and a system memory 3290. In at least one embodiment, core complex 3210 and graphics complex 3240 share system memory 3290. In at least one embodiment, CPU 3200 implements a memory subsystem that includes, without limitation, any amount and type of memory controllers 3280 and memory devices that may be dedicated to one component or shared among multiple components. In at least one embodiment, CPU 3200 implements a cache subsystem that includes, without limitation, one or more cache memories (e.g., L2 caches 3228 and L3 caches 3230) that may each be private to or shared between any number of components (e.g., cores 3220 and core complexes 3210).[0363] Figure 33 illustrates an exemplary accelerator integration slice 3390, in accordance with at least one embodiment. As used herein, a "slice" comprises a specified portion of processing resources of an accelerator integration circuit.
In at least one embodiment, an accelerator integration circuit provides cache management, memory access, context management, and interrupt management services on behalf of multiple graphics processing engines included in a graphics acceleration module. Graphics processing engines may each comprise a separate GPU. Alternatively, graphics processing engines may comprise different types of graphics processing engines within a GPU, such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In at least one embodiment, a graphics acceleration module may be a GPU with multiple graphics processing engines. In at least one embodiment, graphics processing engines may be individual GPUs integrated on a common package, line card, or chip.[0364] An application effective address space 3382 within system memory 3314 stores process elements 3383. In one embodiment, process elements 3383 are stored in response to GPU invocations 3381 from applications 3380 executed on processor 3307. A process element 3383 contains process state for a corresponding application 3380. A work descriptor ("WD") 3384 contained in process element 3383 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 3384 is a pointer to a job request queue in application effective address space 3382.[0365] Graphics acceleration module 3346 and/or individual graphics processing engines can be shared by all or a subset of processes in a system. In at least one embodiment, an infrastructure for setting up process state and sending WD 3384 to graphics acceleration module 3346 to start a job in a virtualized environment may be included.[0366] In at least one embodiment, a dedicated-process programming model is implementation-specific. In this model, a single process owns graphics acceleration module 3346 or an individual graphics processing engine.
Because graphics acceleration module 3346 is owned by a single process, a hypervisor initializes an accelerator integration circuit for an owning partition and an operating system initializes the accelerator integration circuit for an owning process when graphics acceleration module 3346 is assigned.[0367] In operation, a WD fetch unit 3391 in accelerator integration slice 3390 fetches a next WD 3384, which includes an indication of work to be done by one or more graphics processing engines of graphics acceleration module 3346. Data from WD 3384 may be stored in registers 3345 and used by a memory management unit ("MMU") 3339, interrupt management circuit 3347, and/or context management circuit 3348 as illustrated. In at least one embodiment, MMU 3339 includes segment/page walk circuitry for accessing segment/page tables 3386 within OS virtual address space 3385. Interrupt management circuit 3347 may process interrupt events ("INT") 3392 received from graphics acceleration module 3346. When performing graphics operations, an effective address 3393 generated by a graphics processing engine is translated to a real address by MMU 3339.[0368] In one embodiment, a same set of registers 3345 is duplicated for each graphics processing engine and/or graphics acceleration module 3346 and may be initialized by a hypervisor or operating system. Each of these duplicated registers may be included in accelerator integration slice 3390.
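The submission flow in [0364]-[0367] — an application's process element holds a WD that can point at a queue of jobs, and WD fetch unit 3391 pulls the next descriptor to hand to a graphics processing engine — can be sketched as a simple queue. The class and field names below are hypothetical, not taken from the patent:

```python
from collections import deque

# Toy model of work-descriptor (WD) submission and fetch: an application
# appends job descriptors to a queue in its effective address space, and
# the accelerator's WD fetch unit pulls the next one when work is needed.
# All names and descriptor fields are hypothetical.

class ProcessElement:
    def __init__(self):
        self.wd_queue = deque()   # WD 3384 as a pointer to a job queue

    def submit(self, job):
        """Application side: a GPU invocation enqueues a job descriptor."""
        self.wd_queue.append(job)

    def fetch_next_wd(self):
        """WD fetch unit 3391 side: pull the next descriptor, if any."""
        return self.wd_queue.popleft() if self.wd_queue else None

pe = ProcessElement()
pe.submit({"kind": "draw", "count": 128})
pe.submit({"kind": "blit"})
wd = pe.fetch_next_wd()   # first submitted job comes out first
```

The FIFO ordering reflects the text's description of WD 3384 as either one job or a pointer to a queue of jobs awaiting fetch.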
Exemplary registers that may be initialized by a hypervisor are shown in Table 1.

Table 1 - Hypervisor Initialized Registers
1  Slice Control Register
2  Real Address (RA) Scheduled Processes Area Pointer
3  Authority Mask Override Register
4  Interrupt Vector Table Entry Offset
5  Interrupt Vector Table Entry Limit
6  State Register
7  Logical Partition ID
8  Real Address (RA) Hypervisor Accelerator Utilization Record Pointer
9  Storage Description Register

[0369] Exemplary registers that may be initialized by an operating system are shown in Table 2.

Table 2 - Operating System Initialized Registers
1  Process and Thread Identification
2  Effective Address (EA) Context Save/Restore Pointer
3  Virtual Address (VA) Accelerator Utilization Record Pointer
4  Virtual Address (VA) Storage Segment Table Pointer
5  Authority Mask
6  Work Descriptor

[0370] In one embodiment, each WD 3384 is specific to a particular graphics acceleration module 3346 and/or a particular graphics processing engine. It contains all information required by a graphics processing engine to do work, or it can be a pointer to a memory location where an application has set up a command queue of work to be completed.[0371] Figures 34A-34B illustrate exemplary graphics processors, in accordance with at least one embodiment. In at least one embodiment, any of the exemplary graphics processors may be fabricated using one or more IP cores. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. In at least one embodiment, the exemplary graphics processors are for use within an SoC.[0372] Figure 34A illustrates an exemplary graphics processor 3410 of an SoC integrated circuit that may be fabricated using one or more IP cores, in accordance with at least one embodiment.
Figure 34B illustrates an additional exemplary graphics processor 3440 of an SoC integrated circuit that may be fabricated using one or more IP cores, in accordance with at least one embodiment. In at least one embodiment, graphics processor 3410 of Figure 34A is a low power graphics processor core. In at least one embodiment, graphics processor 3440 of Figure 34B is a higher performance graphics processor core. In at least one embodiment, each of graphics processors 3410, 3440 can be variants of graphics processor 510 of Figure 5.[0373] In at least one embodiment, graphics processor 3410 includes a vertex processor 3405 and one or more fragment processor(s) 3415A-3415N (e.g., 3415A, 3415B, 3415C, 3415D, through 3415N-1, and 3415N). In at least one embodiment, graphics processor 3410 can execute different shader programs via separate logic, such that vertex processor 3405 is optimized to execute operations for vertex shader programs, while one or more fragment processor(s) 3415A-3415N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. In at least one embodiment, vertex processor 3405 performs a vertex processing stage of a 3D graphics pipeline and generates primitives and vertex data. In at least one embodiment, fragment processor(s) 3415A-3415N use primitive and vertex data generated by vertex processor 3405 to produce a framebuffer that is displayed on a display device.
In at least one embodiment, fragment processor(s) 3415A-3415N are optimized to execute fragment shader programs as provided for in an OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in a Direct 3D API.[0374] In at least one embodiment, graphics processor 3410 additionally includes one or more MMU(s) 3420A-3420B, cache(s) 3425A-3425B, and circuit interconnect(s) 3430A-3430B. In at least one embodiment, one or more MMU(s) 3420A-3420B provide for virtual to physical address mapping for graphics processor 3410, including for vertex processor 3405 and/or fragment processor(s) 3415A-3415N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in one or more cache(s) 3425A-3425B. In at least one embodiment, one or more MMU(s) 3420A-3420B may be synchronized with other MMUs within a system, including one or more MMUs associated with one or more application processor(s) 505, image processors 515, and/or video processors 520 of Figure 5, such that each processor 505-520 can participate in a shared or unified virtual memory system. In at least one embodiment, one or more circuit interconnect(s) 3430A-3430B enable graphics processor 3410 to interface with other IP cores within an SoC, either via an internal bus of an SoC or via a direct connection.[0375] In at least one embodiment, graphics processor 3440 includes one or more MMU(s) 3420A-3420B, caches 3425A-3425B, and circuit interconnects 3430A-3430B of graphics processor 3410 of Figure 34A.
In at least one embodiment, graphics processor 3440 includes one or more shader core(s) 3455A-3455N (e.g., 3455A, 3455B, 3455C, 3455D, 3455E, 3455F, through 3455N-1, and 3455N), which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. In at least one embodiment, a number of shader cores can vary. In at least one embodiment, graphics processor 3440 includes an inter-core task manager 3445, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 3455A-3455N and a tiling unit 3458 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, in at least one embodiment to exploit local spatial coherence within a scene or to optimize use of internal caches.[0376] Figure 35A illustrates a graphics core 3500, in accordance with at least one embodiment. In at least one embodiment, graphics core 3500 may be included within graphics processor 2410 of Figure 24. In at least one embodiment, graphics core 3500 may be a unified shader core 3455A-3455N as in Figure 34B. In at least one embodiment, graphics core 3500 includes a shared instruction cache 3502, a texture unit 3518, and a cache/shared memory 3520 that are common to execution resources within graphics core 3500. In at least one embodiment, graphics core 3500 can include multiple slices 3501A-3501N or partitions for each core, and a graphics processor can include multiple instances of graphics core 3500. Slices 3501A-3501N can include support logic including a local instruction cache 3504A-3504N, a thread scheduler 3506A-3506N, a thread dispatcher 3508A-3508N, and a set of registers 3510A-3510N.
In at least one embodiment, slices 3501A-3501N can include a set of additional function units ("AFUs") 3512A-3512N, floating-point units ("FPUs") 3514A-3514N, integer arithmetic logic units ("ALUs") 3516A-3516N, address computational units ("ACUs") 3513A-3513N, double-precision floating-point units ("DPFPUs") 3515A-3515N, and matrix processing units ("MPUs") 3517A-3517N.[0377] In at least one embodiment, FPUs 3514A-3514N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while DPFPUs 3515A-3515N perform double precision (64-bit) floating point operations. In at least one embodiment, ALUs 3516A-3516N can perform variable precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed precision operations. In at least one embodiment, MPUs 3517A-3517N can also be configured for mixed precision matrix operations, including half-precision floating point and 8-bit integer operations. In at least one embodiment, MPUs 3517A-3517N can perform a variety of matrix operations to accelerate CUDA programs, including enabling support for accelerated general matrix to matrix multiplication ("GEMM"). In at least one embodiment, AFUs 3512A-3512N can perform additional logic operations not supported by floating-point or integer units, including trigonometric operations (e.g., Sine, Cosine, etc.).[0378] Figure 35B illustrates a general-purpose graphics processing unit ("GPGPU") 3530, in accordance with at least one embodiment. In at least one embodiment, GPGPU 3530 is highly-parallel and suitable for deployment on a multi-chip module. In at least one embodiment, GPGPU 3530 can be configured to enable highly-parallel compute operations to be performed by an array of GPUs. In at least one embodiment, GPGPU 3530 can be linked directly to other instances of GPGPU 3530 to create a multi-GPU cluster to improve execution time for CUDA programs.
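The precision tiers described in paragraph [0377] — half (16-bit), single (32-bit), and double (64-bit) IEEE floating point — differ in how many fraction bits they keep. A minimal Python sketch using the standard struct module's IEEE formats ('e', 'f', 'd') shows a value that survives single- and double-precision rounding but is lost at half precision; the helper name is illustrative.

```python
# Round a Python float through a given IEEE 754 binary format and back.
import struct

def round_to(value, fmt):
    """fmt: 'e' = half (10 fraction bits), 'f' = single (23), 'd' = double (52)."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

x = 1.0 + 2.0 ** -12        # needs 12 fraction bits to represent exactly
half = round_to(x, 'e')     # half precision cannot keep the 2**-12 term
single = round_to(x, 'f')   # single precision keeps it
double = round_to(x, 'd')   # double precision keeps it
```

This is the practical difference behind offering 16-bit, 32-bit, and 64-bit units side by side: lower precision trades accuracy for throughput and storage.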
In at least one embodiment, GPGPU 3530 includes a host interface 3532 to enable a connection with a host processor. In at least one embodiment, host interface 3532 is a PCIe interface. In at least one embodiment, host interface 3532 can be a vendor specific communications interface or communications fabric. In at least one embodiment, GPGPU 3530 receives commands from a host processor and uses a global scheduler 3534 to distribute execution threads associated with those commands to a set of compute clusters 3536A-3536H. In at least one embodiment, compute clusters 3536A-3536H share a cache memory 3538. In at least one embodiment, cache memory 3538 can serve as a higher-level cache for cache memories within compute clusters 3536A-3536H.[0379] In at least one embodiment, GPGPU 3530 includes memory 3544A-3544B coupled with compute clusters 3536A-3536H via a set of memory controllers 3542A-3542B. In at least one embodiment, memory 3544A-3544B can include various types of memory devices including DRAM or graphics random access memory, such as synchronous graphics random access memory ("SGRAM"), including graphics double data rate ("GDDR") memory.[0380] In at least one embodiment, compute clusters 3536A-3536H each include a set of graphics cores, such as graphics core 3500 of Figure 35A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for computations associated with CUDA programs. In at least one embodiment, at least a subset of floating point units in each of compute clusters 3536A-3536H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of floating point units can be configured to perform 64-bit floating point operations.[0381] In at least one embodiment, multiple instances of GPGPU 3530 can be configured to operate as a compute cluster.
In at least one embodiment, compute clusters 3536A-3536H may implement any technically feasible communication techniques for synchronization and data exchange. In at least one embodiment, multiple instances of GPGPU 3530 communicate over host interface 3532. In at least one embodiment, GPGPU 3530 includes an I/O hub 3539 that couples GPGPU 3530 with a GPU link 3540 that enables a direct connection to other instances of GPGPU 3530. In at least one embodiment, GPU link 3540 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of GPGPU 3530. In at least one embodiment, GPU link 3540 couples with a high speed interconnect to transmit and receive data to other GPGPUs 3530 or parallel processors. In at least one embodiment, multiple instances of GPGPU 3530 are located in separate data processing systems and communicate via a network device that is accessible via host interface 3532. In at least one embodiment, GPU link 3540 can be configured to enable a connection to a host processor in addition to or as an alternative to host interface 3532. In at least one embodiment, GPGPU 3530 can be configured to execute a CUDA program.[0382] Figure 36A illustrates a parallel processor 3600, in accordance with at least one embodiment. In at least one embodiment, various components of parallel processor 3600 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits ("ASICs"), or FPGAs.[0383] In at least one embodiment, parallel processor 3600 includes a parallel processing unit 3602. In at least one embodiment, parallel processing unit 3602 includes an I/O unit 3604 that enables communication with other devices, including other instances of parallel processing unit 3602. In at least one embodiment, I/O unit 3604 may be directly connected to other devices.
In at least one embodiment, I/O unit 3604 connects with other devices via use of a hub or switch interface, such as memory hub 605. In at least one embodiment, connections between memory hub 605 and I/O unit 3604 form a communication link. In at least one embodiment, I/O unit 3604 connects with a host interface 3606 and a memory crossbar 3616, where host interface 3606 receives commands directed to performing processing operations and memory crossbar 3616 receives commands directed to performing memory operations.[0384] In at least one embodiment, when host interface 3606 receives a command buffer via I/O unit 3604, host interface 3606 can direct work operations to perform those commands to a front end 3608. In at least one embodiment, front end 3608 couples with a scheduler 3610, which is configured to distribute commands or other work items to a processing array 3612. In at least one embodiment, scheduler 3610 ensures that processing array 3612 is properly configured and in a valid state before tasks are distributed to processing array 3612. In at least one embodiment, scheduler 3610 is implemented via firmware logic executing on a microcontroller. In at least one embodiment, microcontroller implemented scheduler 3610 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on processing array 3612. In at least one embodiment, host software can submit workloads for scheduling on processing array 3612 via one of multiple graphics processing doorbells. In at least one embodiment, workloads can then be automatically distributed across processing array 3612 by scheduler 3610 logic within a microcontroller including scheduler 3610.[0385] In at least one embodiment, processing array 3612 can include up to "N" clusters (e.g., cluster 3614A, cluster 3614B, through cluster 3614N).
In at least one embodiment, each cluster 3614A-3614N of processing array 3612 can execute a large number of concurrent threads. In at least one embodiment, scheduler 3610 can allocate work to clusters 3614A-3614N of processing array 3612 using various scheduling and/or work distribution algorithms, which may vary depending on a workload arising for each type of program or computation. In at least one embodiment, scheduling can be handled dynamically by scheduler 3610, or can be assisted in part by compiler logic during compilation of program logic configured for execution by processing array 3612. In at least one embodiment, different clusters 3614A-3614N of processing array 3612 can be allocated for processing different types of programs or for performing different types of computations.[0386] In at least one embodiment, processing array 3612 can be configured to perform various types of parallel processing operations. In at least one embodiment, processing array 3612 is configured to perform general-purpose parallel compute operations. In at least one embodiment, processing array 3612 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.[0387] In at least one embodiment, processing array 3612 is configured to perform parallel graphics processing operations. In at least one embodiment, processing array 3612 can include additional logic to support execution of such graphics processing operations, including, but not limited to texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. In at least one embodiment, processing array 3612 can be configured to execute graphics processing related shader programs such as, but not limited to vertex shaders, tessellation shaders, geometry shaders, and pixel shaders.
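One simple work distribution algorithm of the kind a scheduler such as scheduler 3610 might use is greedy least-loaded assignment: each incoming task goes to whichever cluster currently has the smallest total load. The sketch below is a hypothetical illustration in Python, not a description of the actual scheduler logic.

```python
# Greedy sketch: send each (name, cost) task to the least-loaded cluster.
import heapq

def schedule(tasks, num_clusters):
    heap = [(0, c) for c in range(num_clusters)]      # (accumulated load, cluster id)
    assignment = {c: [] for c in range(num_clusters)}
    for name, cost in tasks:
        load, c = heapq.heappop(heap)                 # cluster with least load so far
        assignment[c].append(name)
        heapq.heappush(heap, (load + cost, c))
    return assignment
```

With tasks of unequal cost, this balances total load rather than task count, which is why dividing a workload into "approximately equal sized tasks" (as discussed later for graphics processing) makes distribution easier.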
In at least one embodiment, parallel processing unit 3602 can transfer data from system memory via I/O unit 3604 for processing. In at least one embodiment, during processing, transferred data can be stored to on-chip memory (e.g., a parallel processor memory 3622) during processing, then written back to system memory.[0388] In at least one embodiment, when parallel processing unit 3602 is used to perform graphics processing, scheduler 3610 can be configured to divide a processing workload into approximately equal sized tasks, to better enable distribution of graphics processing operations to multiple clusters 3614A-3614N of processing array 3612. In at least one embodiment, portions of processing array 3612 can be configured to perform different types of processing. In at least one embodiment, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. In at least one embodiment, intermediate data produced by one or more of clusters 3614A-3614N may be stored in buffers to allow intermediate data to be transmitted between clusters 3614A-3614N for further processing.[0389] In at least one embodiment, processing array 3612 can receive processing tasks to be executed via scheduler 3610, which receives commands defining processing tasks from front end 3608. In at least one embodiment, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how data is to be processed (e.g., what program is to be executed). In at least one embodiment, scheduler 3610 may be configured to fetch indices corresponding to tasks or may receive indices from front end 3608.
In at least one embodiment, front end 3608 can be configured to ensure processing array 3612 is configured to a valid state before a workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated.[0390] In at least one embodiment, each of one or more instances of parallel processing unit 3602 can couple with parallel processor memory 3622. In at least one embodiment, parallel processor memory 3622 can be accessed via memory crossbar 3616, which can receive memory requests from processing array 3612 as well as I/O unit 3604. In at least one embodiment, memory crossbar 3616 can access parallel processor memory 3622 via a memory interface 3618. In at least one embodiment, memory interface 3618 can include multiple partition units (e.g., a partition unit 3620A, partition unit 3620B, through partition unit 3620N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 3622. In at least one embodiment, a number of partition units 3620A-3620N is configured to be equal to a number of memory units, such that a first partition unit 3620A has a corresponding first memory unit 3624A, a second partition unit 3620B has a corresponding memory unit 3624B, and an Nth partition unit 3620N has a corresponding Nth memory unit 3624N. In at least one embodiment, a number of partition units 3620A-3620N may not be equal to a number of memory devices.[0391] In at least one embodiment, memory units 3624A-3624N can include various types of memory devices, including DRAM or graphics random access memory, such as SGRAM, including GDDR memory. In at least one embodiment, memory units 3624A-3624N may also include 3D stacked memory, including but not limited to high bandwidth memory ("HBM"). 
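The one-to-one coupling of partition units 3620A-3620N to memory units 3624A-3624N described in paragraph [0390] is commonly paired with address interleaving, so that consecutive regions of memory map to different partition units. The sketch below illustrates that idea in Python; the interleave granularity of 256 bytes is an assumed value for illustration only, not stated in the text.

```python
# Hypothetical interleave: map an address to one of N partition units,
# rotating every `granularity` bytes so traffic spreads across memory units.
def partition_for(address, num_partitions, granularity=256):
    return (address // granularity) % num_partitions
```

Under this mapping, a large contiguous write touches every partition unit in turn, which is the property that lets multiple partition units service one stream in parallel.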
In at least one embodiment, render targets, such as frame buffers or texture maps may be stored across memory units 3624A-3624N, allowing partition units 3620A-3620N to write portions of each render target in parallel to efficiently use available bandwidth of parallel processor memory 3622. In at least one embodiment, a local instance of parallel processor memory 3622 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.[0392] In at least one embodiment, any one of clusters 3614A-3614N of processing array 3612 can process data that will be written to any of memory units 3624A-3624N within parallel processor memory 3622. In at least one embodiment, memory crossbar 3616 can be configured to transfer an output of each cluster 3614A-3614N to any partition unit 3620A-3620N or to another cluster 3614A-3614N, which can perform additional processing operations on an output. In at least one embodiment, each cluster 3614A-3614N can communicate with memory interface 3618 through memory crossbar 3616 to read from or write to various external memory devices. In at least one embodiment, memory crossbar 3616 has a connection to memory interface 3618 to communicate with I/O unit 3604, as well as a connection to a local instance of parallel processor memory 3622, enabling processing units within different clusters 3614A-3614N to communicate with system memory or other memory that is not local to parallel processing unit 3602. In at least one embodiment, memory crossbar 3616 can use virtual channels to separate traffic streams between clusters 3614A-3614N and partition units 3620A-3620N.[0393] In at least one embodiment, multiple instances of parallel processing unit 3602 can be provided on a single add-in card, or multiple add-in cards can be interconnected. 
In at least one embodiment, different instances of parallel processing unit 3602 can be configured to inter-operate even if different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. In at least one embodiment, some instances of parallel processing unit 3602 can include higher precision floating point units relative to other instances. In at least one embodiment, systems incorporating one or more instances of parallel processing unit 3602 or parallel processor 3600 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.[0394] Figure 36B illustrates a processing cluster 3694, in accordance with at least one embodiment. In at least one embodiment, processing cluster 3694 is included within a parallel processing unit. In at least one embodiment, processing cluster 3694 is one of processing clusters 3614A-3614N of Figure 36. In at least one embodiment, processing cluster 3694 can be configured to execute many threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. In at least one embodiment, single instruction, multiple data ("SIMD") instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. 
In at least one embodiment, single instruction, multiple thread ("SIMT") techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each processing cluster 3694.[0395] In at least one embodiment, operation of processing cluster 3694 can be controlled via a pipeline manager 3632 that distributes processing tasks to SIMT parallel processors. In at least one embodiment, pipeline manager 3632 receives instructions from scheduler 3610 of Figure 36 and manages execution of those instructions via a graphics multiprocessor 3634 and/or a texture unit 3636. In at least one embodiment, graphics multiprocessor 3634 is an exemplary instance of a SIMT parallel processor. However, in at least one embodiment, various types of SIMT parallel processors of differing architectures may be included within processing cluster 3694. In at least one embodiment, one or more instances of graphics multiprocessor 3634 can be included within processing cluster 3694. In at least one embodiment, graphics multiprocessor 3634 can process data and a data crossbar 3640 can be used to distribute processed data to one of multiple possible destinations, including other shader units. In at least one embodiment, pipeline manager 3632 can facilitate distribution of processed data by specifying destinations for processed data to be distributed via data crossbar 3640.[0396] In at least one embodiment, each graphics multiprocessor 3634 within processing cluster 3694 can include an identical set of functional execution logic (e.g., arithmetic logic units, load/store units ("LSUs"), etc.). In at least one embodiment, functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete.
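The SIMT model described above — one common instruction unit issuing to many generally synchronized threads — can be sketched with a per-lane active mask: every lane receives the same instruction, and lanes that are masked off (for example, on the untaken side of a branch) simply keep their state. This Python model is illustrative only; the function name is hypothetical.

```python
# SIMT issue sketch: one instruction, many lanes, a per-lane active mask.
def simt_issue(instruction, lanes, active_mask):
    """Apply `instruction` to every active lane; inactive lanes are unchanged."""
    return [instruction(value) if active else value
            for value, active in zip(lanes, active_mask)]
```

For example, issuing an increment with lane 1 masked off updates lanes 0, 2, and 3 while lane 1 keeps its old value, which is how divergent threads stay "generally synchronized" under a single instruction stream.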
In at least one embodiment, functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. In at least one embodiment, same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present.[0397] In at least one embodiment, instructions transmitted to processing cluster 3694 constitute a thread. In at least one embodiment, a set of threads executing across a set of parallel processing engines is a thread group. In at least one embodiment, a thread group executes a program on different input data. In at least one embodiment, each thread within a thread group can be assigned to a different processing engine within graphics multiprocessor 3634. In at least one embodiment, a thread group may include fewer threads than a number of processing engines within graphics multiprocessor 3634. In at least one embodiment, when a thread group includes fewer threads than a number of processing engines, one or more of processing engines may be idle during cycles in which that thread group is being processed. In at least one embodiment, a thread group may also include more threads than a number of processing engines within graphics multiprocessor 3634. In at least one embodiment, when a thread group includes more threads than a number of processing engines within graphics multiprocessor 3634, processing can be performed over consecutive clock cycles. In at least one embodiment, multiple thread groups can be executed concurrently on graphics multiprocessor 3634.[0398] In at least one embodiment, graphics multiprocessor 3634 includes an internal cache memory to perform load and store operations. In at least one embodiment, graphics multiprocessor 3634 can forego an internal cache and use a cache memory (e.g., L1 cache 3648) within processing cluster 3694.
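The thread-group arithmetic in paragraph [0397] reduces to two small formulas: a group larger than the engine count runs over ceil(group/engines) consecutive cycles, and a group that does not fill the engines leaves some engines idle on its last cycle. A minimal Python sketch (hypothetical helper names):

```python
def cycles_for_group(group_size, num_engines):
    """Consecutive clock cycles needed to run one thread group."""
    return -(-group_size // num_engines)   # ceiling division

def idle_engines(group_size, num_engines):
    """Processing engines idle on the final cycle of a group."""
    remainder = group_size % num_engines
    return num_engines - remainder if remainder else 0
```

For instance, a 32-thread group on 8 engines takes 4 cycles with no idle engines, while a 5-thread group finishes in 1 cycle but idles 3 engines.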
In at least one embodiment, each graphics multiprocessor 3634 also has access to Level 2 ("L2") caches within partition units (e.g., partition units 3620A-3620N of Figure 36A) that are shared among all processing clusters 3694 and may be used to transfer data between threads. In at least one embodiment, graphics multiprocessor 3634 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. In at least one embodiment, any memory external to parallel processing unit 3602 may be used as global memory. In at least one embodiment, processing cluster 3694 includes multiple instances of graphics multiprocessor 3634 that can share common instructions and data, which may be stored in L1 cache 3648.[0399] In at least one embodiment, each processing cluster 3694 may include an MMU 3645 that is configured to map virtual addresses into physical addresses. In at least one embodiment, one or more instances of MMU 3645 may reside within memory interface 3618 of Figure 36. In at least one embodiment, MMU 3645 includes a set of page table entries ("PTEs") used to map a virtual address to a physical address of a tile and optionally a cache line index. In at least one embodiment, MMU 3645 may include address translation lookaside buffers ("TLBs") or caches that may reside within graphics multiprocessor 3634 or L1 cache 3648 or processing cluster 3694. In at least one embodiment, a physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units.
In at least one embodiment, a cache line index may be used to determine whether a request for a cache line is a hit or miss.[0400] In at least one embodiment, processing cluster 3694 may be configured such that each graphics multiprocessor 3634 is coupled to a texture unit 3636 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering texture data. In at least one embodiment, texture data is read from an internal texture L1 cache (not shown) or from an L1 cache within graphics multiprocessor 3634 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. In at least one embodiment, each graphics multiprocessor 3634 outputs a processed task to data crossbar 3640 to provide a processed task to another processing cluster 3694 for further processing or to store a processed task in an L2 cache, a local parallel processor memory, or a system memory via memory crossbar 3616. In at least one embodiment, a pre-raster operations unit ("preROP") 3642 is configured to receive data from graphics multiprocessor 3634, direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 3620A-3620N of Figure 36). In at least one embodiment, PreROP 3642 can perform optimizations for color blending, organize pixel color data, and perform address translations.[0401] Figure 36C illustrates a graphics multiprocessor 3696, in accordance with at least one embodiment. In at least one embodiment, graphics multiprocessor 3696 is graphics multiprocessor 3634 of Figure 36B. In at least one embodiment, graphics multiprocessor 3696 couples with pipeline manager 3632 of processing cluster 3694.
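The PTE-based translation in paragraph [0399] — virtual page number looked up in page table entries, offset preserved, with an optional cache line index derived from the physical address — can be sketched in a few lines of Python. The 4 KiB page size and 64-byte line size below are illustrative assumptions, not values given in the text.

```python
PAGE_SIZE = 4096   # assumed page size for illustration
LINE_SIZE = 64     # assumed cache line size for illustration

def translate(virtual_addr, page_table):
    """MMU sketch: PTE lookup maps a virtual page to a physical page;
    the page offset carries over, and a cache line index is derived."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    physical_page = page_table[vpn]                  # KeyError models a page fault
    physical_addr = physical_page * PAGE_SIZE + offset
    line_index = physical_addr // LINE_SIZE          # used for hit/miss checks
    return physical_addr, line_index
```

A TLB, as mentioned in the text, would simply cache recent (vpn, physical_page) pairs so most translations skip the page table walk.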
In at least one embodiment, graphics multiprocessor 3696 has an execution pipeline including but not limited to an instruction cache 3652, an instruction unit 3654, an address mapping unit 3656, a register file 3658, one or more GPGPU cores 3662, and one or more LSUs 3666. GPGPU cores 3662 and LSUs 3666 are coupled with cache memory 3672 and shared memory 3670 via a memory and cache interconnect 3668.[0402] In at least one embodiment, instruction cache 3652 receives a stream of instructions to execute from pipeline manager 3632. In at least one embodiment, instructions are cached in instruction cache 3652 and dispatched for execution by instruction unit 3654. In at least one embodiment, instruction unit 3654 can dispatch instructions as thread groups (e.g., warps), with each thread of a thread group assigned to a different execution unit within GPGPU core 3662. In at least one embodiment, an instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. In at least one embodiment, address mapping unit 3656 can be used to translate addresses in a unified address space into a distinct memory address that can be accessed by LSUs 3666.[0403] In at least one embodiment, register file 3658 provides a set of registers for functional units of graphics multiprocessor 3696. In at least one embodiment, register file 3658 provides temporary storage for operands connected to data paths of functional units (e.g., GPGPU cores 3662, LSUs 3666) of graphics multiprocessor 3696. In at least one embodiment, register file 3658 is divided between each of functional units such that each functional unit is allocated a dedicated portion of register file 3658. 
In at least one embodiment, register file 3658 is divided between different thread groups being executed by graphics multiprocessor 3696.

[0404] In at least one embodiment, GPGPU cores 3662 can each include FPUs and/or integer ALUs that are used to execute instructions of graphics multiprocessor 3696. GPGPU cores 3662 can be similar in architecture or can differ in architecture. In at least one embodiment, a first portion of GPGPU cores 3662 include a single precision FPU and an integer ALU while a second portion of GPGPU cores 3662 include a double precision FPU. In at least one embodiment, FPUs can implement IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. In at least one embodiment, graphics multiprocessor 3696 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In at least one embodiment, one or more of GPGPU cores 3662 can also include fixed or special function logic.

[0405] In at least one embodiment, GPGPU cores 3662 include SIMD logic capable of performing a single instruction on multiple sets of data. In at least one embodiment, GPGPU cores 3662 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. In at least one embodiment, SIMD instructions for GPGPU cores 3662 can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data ("SPMD") or SIMT architectures. In at least one embodiment, multiple threads of a program configured for an SIMT execution model can be executed via a single SIMD instruction.
In at least one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit.

[0406] In at least one embodiment, memory and cache interconnect 3668 is an interconnect network that connects each functional unit of graphics multiprocessor 3696 to register file 3658 and to shared memory 3670. In at least one embodiment, memory and cache interconnect 3668 is a crossbar interconnect that allows LSU 3666 to implement load and store operations between shared memory 3670 and register file 3658. In at least one embodiment, register file 3658 can operate at a same frequency as GPGPU cores 3662, thus data transfer between GPGPU cores 3662 and register file 3658 is very low latency. In at least one embodiment, shared memory 3670 can be used to enable communication between threads that execute on functional units within graphics multiprocessor 3696. In at least one embodiment, cache memory 3672 can be used as a data cache, e.g., to cache texture data communicated between functional units and texture unit 3636. In at least one embodiment, shared memory 3670 can also be used as a program-managed cache. In at least one embodiment, threads executing on GPGPU cores 3662 can programmatically store data within shared memory in addition to automatically cached data that is stored within cache memory 3672.

[0407] In at least one embodiment, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. In at least one embodiment, a GPU may be communicatively coupled to host processor/cores over a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink).
In at least one embodiment, a GPU may be integrated on a same package or chip as cores and communicatively coupled to cores over a processor bus/interconnect that is internal to a package or a chip. In at least one embodiment, regardless of a manner in which a GPU is connected, processor cores may allocate work to a GPU in a form of sequences of commands/instructions contained in a WD. In at least one embodiment, a GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.

General Computing

[0408] The following figures set forth, without limitation, exemplary software constructs within general computing that can be used to implement at least one embodiment.

[0409] Figure 37 illustrates a software stack of a programming platform, in accordance with at least one embodiment. In at least one embodiment, a programming platform is a platform for leveraging hardware on a computing system to accelerate computational tasks. A programming platform may be accessible to software developers through libraries, compiler directives, and/or extensions to programming languages, in at least one embodiment. In at least one embodiment, a programming platform may be, but is not limited to, CUDA, Radeon Open Compute Platform ("ROCm"), OpenCL (OpenCL™ is developed by Khronos group), SYCL, or Intel oneAPI.

[0410] In at least one embodiment, a software stack 3700 of a programming platform provides an execution environment for an application 3701. In at least one embodiment, application 3701 may include any computer software capable of being launched on software stack 3700. In at least one embodiment, application 3701 may include, but is not limited to, an artificial intelligence ("AI")/machine learning ("ML") application, a high performance computing ("HPC") application, a virtual desktop infrastructure ("VDI"), or a datacenter workload.

[0411] In at least one embodiment, application 3701 and software stack 3700 run on hardware 3707.
Hardware 3707 may include one or more GPUs, CPUs, FPGAs, AI engines, and/or other types of compute devices that support a programming platform, in at least one embodiment. In at least one embodiment, such as with CUDA, software stack 3700 may be vendor specific and compatible with only devices from particular vendor(s). In at least one embodiment, such as with OpenCL, software stack 3700 may be used with devices from different vendors. In at least one embodiment, hardware 3707 includes a host connected to one or more devices that can be accessed to perform computational tasks via application programming interface ("API") calls. A device within hardware 3707 may include, but is not limited to, a GPU, FPGA, AI engine, or other compute device (but may also include a CPU) and its memory, as opposed to a host within hardware 3707 that may include, but is not limited to, a CPU (but may also include a compute device) and its memory, in at least one embodiment.

[0412] In at least one embodiment, software stack 3700 of a programming platform includes, without limitation, a number of libraries 3703, a runtime 3705, and a device kernel driver 3706. Each of libraries 3703 may include data and programming code that can be used by computer programs and leveraged during software development, in at least one embodiment. In at least one embodiment, libraries 3703 may include, but are not limited to, pre-written code and subroutines, classes, values, type specifications, configuration data, documentation, help data, and/or message templates. In at least one embodiment, libraries 3703 include functions that are optimized for execution on one or more types of devices. In at least one embodiment, libraries 3703 may include, but are not limited to, functions for performing mathematical, deep learning, and/or other types of operations on devices.
In at least one embodiment, libraries 3803 are associated with corresponding APIs 3802, which may include one or more APIs, that expose functions implemented in libraries 3803.

[0413] In at least one embodiment, application 3701 is written as source code that is compiled into executable code, as discussed in greater detail below in conjunction with Figure 42. Executable code of application 3701 may run, at least in part, on an execution environment provided by software stack 3700, in at least one embodiment. In at least one embodiment, during execution of application 3701, code may be reached that needs to run on a device, as opposed to a host. In such a case, runtime 3705 may be called to load and launch requisite code on a device, in at least one embodiment. In at least one embodiment, runtime 3705 may include any technically feasible runtime system that is able to support execution of application 3701.

[0414] In at least one embodiment, runtime 3705 is implemented as one or more runtime libraries associated with corresponding APIs, which are shown as API(s) 3704. One or more of such runtime libraries may include, without limitation, functions for memory management, execution control, device management, error handling, and/or synchronization, among other things, in at least one embodiment. In at least one embodiment, memory management functions may include, but are not limited to, functions to allocate, deallocate, and copy device memory, as well as transfer data between host memory and device memory.
In at least one embodiment, execution control functions may include, but are not limited to, functions to launch a function (sometimes referred to as a "kernel" when a function is a global function callable from a host) on a device and set attribute values in a buffer maintained by a runtime library for a given function to be executed on a device.

[0415] Runtime libraries and corresponding API(s) 3704 may be implemented in any technically feasible manner, in at least one embodiment. In at least one embodiment, one (or any number of) API may expose a low-level set of functions for fine-grained control of a device, while another (or any number of) API may expose a higher-level set of such functions. In at least one embodiment, a high-level runtime API may be built on top of a low-level API. In at least one embodiment, one or more of runtime APIs may be language-specific APIs that are layered on top of a language-independent runtime API.

[0416] In at least one embodiment, device kernel driver 3706 is configured to facilitate communication with an underlying device. In at least one embodiment, device kernel driver 3706 may provide low-level functionalities upon which APIs, such as API(s) 3704, and/or other software relies. In at least one embodiment, device kernel driver 3706 may be configured to compile intermediate representation ("IR") code into binary code at runtime. For CUDA, device kernel driver 3706 may compile Parallel Thread Execution ("PTX") IR code that is not hardware specific into binary code for a specific target device at runtime (with caching of compiled binary code), which is also sometimes referred to as "finalizing" code, in at least one embodiment. Doing so may permit finalized code to run on a target device, which may not have existed when source code was originally compiled into PTX code, in at least one embodiment.
Alternatively, in at least one embodiment, device source code may be compiled into binary code offline, without requiring device kernel driver 3706 to compile IR code at runtime.

[0417] Figure 38 illustrates a CUDA implementation of software stack 3700 of Figure 37, in accordance with at least one embodiment. In at least one embodiment, a CUDA software stack 3800, on which an application 3801 may be launched, includes CUDA libraries 3803, a CUDA runtime 3805, a CUDA driver 3807, and a device kernel driver 3808. In at least one embodiment, CUDA software stack 3800 executes on hardware 3809, which may include a GPU that supports CUDA and is developed by NVIDIA Corporation of Santa Clara, CA.

[0418] In at least one embodiment, application 3801, CUDA runtime 3805, and device kernel driver 3808 may perform similar functionalities as application 3701, runtime 3705, and device kernel driver 3706, respectively, which are described above in conjunction with Figure 37. In at least one embodiment, CUDA driver 3807 includes a library (libcuda.so) that implements a CUDA driver API 3806. Similar to a CUDA runtime API 3804 implemented by a CUDA runtime library (cudart), CUDA driver API 3806 may, without limitation, expose functions for memory management, execution control, device management, error handling, synchronization, and/or graphics interoperability, among other things, in at least one embodiment. In at least one embodiment, CUDA driver API 3806 differs from CUDA runtime API 3804 in that CUDA runtime API 3804 simplifies device code management by providing implicit initialization, context (analogous to a process) management, and module (analogous to dynamically loaded libraries) management. In contrast to high-level CUDA runtime API 3804, CUDA driver API 3806 is a low-level API providing more fine-grained control of a device, particularly with respect to contexts and module loading, in at least one embodiment.
In at least one embodiment, CUDA driver API 3806 may expose functions for context management that are not exposed by CUDA runtime API 3804. In at least one embodiment, CUDA driver API 3806 is also language-independent and supports, e.g., OpenCL in addition to CUDA runtime API 3804. Further, in at least one embodiment, development libraries, including CUDA runtime 3805, may be considered as separate from driver components, including user-mode CUDA driver 3807 and kernel-mode device driver 3808 (also sometimes referred to as a "display" driver).

[0419] In at least one embodiment, CUDA libraries 3803 may include, but are not limited to, mathematical libraries, deep learning libraries, parallel algorithm libraries, and/or signal/image/video processing libraries, which parallel computing applications such as application 3801 may utilize. In at least one embodiment, CUDA libraries 3803 may include mathematical libraries such as a cuBLAS library that is an implementation of Basic Linear Algebra Subprograms ("BLAS") for performing linear algebra operations, a cuFFT library for computing fast Fourier transforms ("FFTs"), and a cuRAND library for generating random numbers, among others. In at least one embodiment, CUDA libraries 3803 may include deep learning libraries such as a cuDNN library of primitives for deep neural networks and a TensorRT platform for high-performance deep learning inference, among others.

[0420] Figure 39 illustrates a ROCm implementation of software stack 3700 of Figure 37, in accordance with at least one embodiment. In at least one embodiment, a ROCm software stack 3900, on which an application 3901 may be launched, includes a language runtime 3903, a system runtime 3905, a thunk 3907, a ROCm kernel driver 3908, and a device kernel driver 3909.
In at least one embodiment, ROCm software stack 3900 executes on hardware 3910, which may include a GPU that supports ROCm and is developed by AMD Corporation of Santa Clara, CA.

[0421] In at least one embodiment, application 3901 may perform similar functionalities as application 3701 discussed above in conjunction with Figure 37. In addition, language runtime 3903 and system runtime 3905 may perform similar functionalities as runtime 3705 discussed above in conjunction with Figure 37, in at least one embodiment. In at least one embodiment, language runtime 3903 and system runtime 3905 differ in that system runtime 3905 is a language-independent runtime that implements a ROCr system runtime API 3904 and makes use of a Heterogeneous System Architecture ("HSA") runtime API. HSA runtime API is a thin, user-mode API that exposes interfaces to access and interact with an AMD GPU, including functions for memory management, execution control via architected dispatch of kernels, error handling, system and agent information, and runtime initialization and shutdown, among other things, in at least one embodiment. In contrast to system runtime 3905, language runtime 3903 is an implementation of a language-specific runtime API 3902 layered on top of ROCr system runtime API 3904, in at least one embodiment. In at least one embodiment, language runtime API may include, but is not limited to, a Heterogeneous compute Interface for Portability ("HIP") language runtime API, a Heterogeneous Compute Compiler ("HCC") language runtime API, or an OpenCL API, among others.
HIP language in particular is an extension of C++ programming language with functionally similar versions of CUDA mechanisms, and, in at least one embodiment, a HIP language runtime API includes functions that are similar to those of CUDA runtime API 3804 discussed above in conjunction with Figure 38, such as functions for memory management, execution control, device management, error handling, and synchronization, among other things.

[0422] In at least one embodiment, thunk (ROCt) 3907 is an interface that can be used to interact with underlying ROCm driver 3908. In at least one embodiment, ROCm driver 3908 is a ROCk driver, which is a combination of an AMDGPU driver and an HSA kernel driver (amdkfd). In at least one embodiment, AMDGPU driver is a device kernel driver for GPUs developed by AMD that performs similar functionalities as device kernel driver 3706 discussed above in conjunction with Figure 37. In at least one embodiment, HSA kernel driver is a driver permitting different types of processors to share system resources more effectively via hardware features.

[0423] In at least one embodiment, various libraries (not shown) may be included in ROCm software stack 3900 above language runtime 3903 and provide functionality similar to CUDA libraries 3803, discussed above in conjunction with Figure 38. In at least one embodiment, various libraries may include, but are not limited to, mathematical, deep learning, and/or other libraries, such as a hipBLAS library that implements functions similar to those of CUDA cuBLAS and a rocFFT library for computing FFTs that is similar to CUDA cuFFT, among others.

[0424] Figure 40 illustrates an OpenCL implementation of software stack 3700 of Figure 37, in accordance with at least one embodiment. In at least one embodiment, an OpenCL software stack 4000, on which an application 4001 may be launched, includes an OpenCL framework 4005, an OpenCL runtime 4006, and a driver 4007.
In at least one embodiment, OpenCL software stack 4000 executes on hardware 4008 that is not vendor-specific. As OpenCL is supported by devices developed by different vendors, specific OpenCL drivers may be required to interoperate with hardware from such vendors, in at least one embodiment.

[0425] In at least one embodiment, application 4001, OpenCL runtime 4006, device kernel driver 4007, and hardware 4008 may perform similar functionalities as application 3701, runtime 3705, device kernel driver 3706, and hardware 3707, respectively, that are discussed above in conjunction with Figure 37. In at least one embodiment, application 4001 further includes an OpenCL kernel 4002 with code that is to be executed on a device.

[0426] In at least one embodiment, OpenCL defines a "platform" that allows a host to control devices connected to a host. In at least one embodiment, an OpenCL framework provides a platform layer API and a runtime API, shown as platform API 4003 and runtime API 4005. In at least one embodiment, runtime API 4005 uses contexts to manage execution of kernels on devices. In at least one embodiment, each identified device may be associated with a respective context, which runtime API 4005 may use to manage command queues, program objects, and kernel objects, and share memory objects, among other things, for that device. In at least one embodiment, platform API 4003 exposes functions that permit device contexts to be used to select and initialize devices, submit work to devices via command queues, and enable data transfer to and from devices, among other things. In addition, OpenCL framework provides various built-in functions (not shown), including math functions, relational functions, and image processing functions, among others, in at least one embodiment.

[0427] In at least one embodiment, a compiler 4004 is also included in OpenCL framework 4005.
Source code may be compiled offline prior to executing an application or online during execution of an application, in at least one embodiment. In contrast to CUDA and ROCm, OpenCL applications in at least one embodiment may be compiled online by compiler 4004, which is included to be representative of any number of compilers that may be used to compile source code and/or IR code, such as Standard Portable Intermediate Representation ("SPIR-V") code, into binary code. Alternatively, in at least one embodiment, OpenCL applications may be compiled offline, prior to execution of such applications.

[0428] Figure 41 illustrates software that is supported by a programming platform, in accordance with at least one embodiment. In at least one embodiment, a programming platform 4104 is configured to support various programming models 4103, middlewares and/or libraries 4102, and frameworks 4101 that an application 4100 may rely upon. In at least one embodiment, application 4100 may be an AI/ML application implemented using, in at least one embodiment, a deep learning framework such as MXNet, PyTorch, or TensorFlow, which may rely on libraries such as cuDNN, NVIDIA Collective Communications Library ("NCCL"), and/or NVIDIA Developer Data Loading Library ("DALI") CUDA libraries to provide accelerated computing on underlying hardware.

[0429] In at least one embodiment, programming platform 4104 may be one of a CUDA, ROCm, or OpenCL platform described above in conjunction with Figure 38, Figure 39, and Figure 40, respectively. In at least one embodiment, programming platform 4104 supports multiple programming models 4103, which are abstractions of an underlying computing system permitting expressions of algorithms and data structures. Programming models 4103 may expose features of underlying hardware in order to improve performance, in at least one embodiment.
In at least one embodiment, programming models 4103 may include, but are not limited to, CUDA, HIP, OpenCL, C++ Accelerated Massive Parallelism ("C++AMP"), Open Multi-Processing ("OpenMP"), Open Accelerators ("OpenACC"), and/or Vulkan Compute.

[0430] In at least one embodiment, libraries and/or middlewares 4102 provide implementations of abstractions of programming models 4103. In at least one embodiment, such libraries include data and programming code that may be used by computer programs and leveraged during software development. In at least one embodiment, such middlewares include software that provides services to applications beyond those available from programming platform 4104. In at least one embodiment, libraries and/or middlewares 4102 may include, but are not limited to, cuBLAS, cuFFT, cuRAND, and other CUDA libraries, or rocBLAS, rocFFT, rocRAND, and other ROCm libraries. In addition, in at least one embodiment, libraries and/or middlewares 4102 may include NCCL and ROCm Communication Collectives Library ("RCCL") libraries providing communication routines for GPUs, a MIOpen library for deep learning acceleration, and/or an Eigen library for linear algebra, matrix and vector operations, geometrical transformations, numerical solvers, and related algorithms.

[0431] In at least one embodiment, application frameworks 4101 depend on libraries and/or middlewares 4102. In at least one embodiment, each of application frameworks 4101 is a software framework used to implement a standard structure of application software. An AI/ML application may be implemented using a framework such as Caffe, Caffe2, TensorFlow, Keras, PyTorch, or MXNet deep learning frameworks, in at least one embodiment.

[0432] Figure 42 illustrates compiling code to execute on one of programming platforms of Figures 37-40, in accordance with at least one embodiment. In at least one embodiment, a compiler 4201 receives source code 4200 that includes both host code as well as device code.
In at least one embodiment, compiler 4201 is configured to convert source code 4200 into host executable code 4202 for execution on a host and device executable code 4203 for execution on a device. In at least one embodiment, source code 4200 may either be compiled offline prior to execution of an application, or online during execution of an application.

[0433] In at least one embodiment, source code 4200 may include code in any programming language supported by compiler 4201, such as C++, C, Fortran, etc. In at least one embodiment, source code 4200 may be included in a single-source file having a mixture of host code and device code, with locations of device code being indicated therein. In at least one embodiment, a single-source file may be a .cu file that includes CUDA code or a .hip.cpp file that includes HIP code. Alternatively, in at least one embodiment, source code 4200 may include multiple source code files, rather than a single-source file, into which host code and device code are separated.

[0434] In at least one embodiment, compiler 4201 is configured to compile source code 4200 into host executable code 4202 for execution on a host and device executable code 4203 for execution on a device. In at least one embodiment, compiler 4201 performs operations including parsing source code 4200 into an abstract syntax tree (AST), performing optimizations, and generating executable code.
In at least one embodiment in which source code 4200 includes a single-source file, compiler 4201 may separate device code from host code in such a single-source file, compile device code and host code into device executable code 4203 and host executable code 4202, respectively, and link device executable code 4203 and host executable code 4202 together in a single file, as discussed in greater detail below with respect to Figure 26.

[0435] In at least one embodiment, host executable code 4202 and device executable code 4203 may be in any suitable format, such as binary code and/or IR code. In a case of CUDA, host executable code 4202 may include native object code and device executable code 4203 may include code in PTX intermediate representation, in at least one embodiment. In a case of ROCm, both host executable code 4202 and device executable code 4203 may include target binary code, in at least one embodiment.

[0436] Other variations are within spirit of present disclosure. Thus, while disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in drawings and have been described above in detail. It should be understood, however, that there is no intention to limit disclosure to specific form or forms disclosed, but on contrary, intention is to cover all modifications, alternative constructions, and equivalents falling within spirit and scope of disclosure, as defined in appended claims.

[0437] Use of terms "a" and "an" and "the" and similar referents in context of describing disclosed embodiments (especially in context of following claims) are to be construed to cover both singular and plural, unless otherwise indicated herein or clearly contradicted by context, and not as a definition of a term. Terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (meaning "including, but not limited to,") unless otherwise noted.
Term "connected," when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within range, unless otherwise indicated herein, and each separate value is incorporated into specification as if it were individually recited herein. In at least one embodiment, use of term "set" (e.g., "a set of items") or "subset," unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, term "subset" of a corresponding set does not necessarily denote a proper subset of corresponding set, but subset and corresponding set may be equal.

[0438] Conjunctive language, such as phrases of form "at least one of A, B, and C," or "at least one of A, B and C," unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of set of A and B and C. In at least one embodiment of a set having three members, conjunctive phrases "at least one of A, B, and C" and "at least one of A, B and C" refer to any of following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B, and at least one of C each to be present. In addition, unless otherwise noted or contradicted by context, term "plurality" indicates a state of being plural (e.g., "a plurality of items" indicates multiple items).
In at least one embodiment, a number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context. Further, unless stated otherwise or otherwise clear from context, phrase "based on" means "based at least in part on" and not "based solely on."

[0439] Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In at least one embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. In at least one embodiment, code is stored on a computer-readable storage medium, for example, in form of a computer program comprising a plurality of instructions executable by one or more processors. In at least one embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions (or other memory to store executable instructions) that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause computer system to perform operations described herein.
A set of non-transitory computer-readable storage media, in at least one embodiment, comprises multiple non-transitory computer-readable storage media, and one or more of individual non-transitory storage media of multiple non-transitory computer-readable storage media lack all of code while multiple non-transitory computer-readable storage media collectively store all of code. In at least one embodiment, executable instructions are executed such that different instructions are executed by different processors. In at least one embodiment, a non-transitory computer-readable storage medium stores instructions and a main central processing unit ("CPU") executes some of instructions while a graphics processing unit ("GPU") executes other instructions. In at least one embodiment, different components of a computer system have separate processors and different processors execute different subsets of instructions.

[0440] Accordingly, in at least one embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein and such computer systems are configured with applicable hardware and/or software that enable performance of operations. Further, a computer system that implements at least one embodiment of present disclosure is a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that distributed computer system performs operations described herein and such that a single device does not perform all operations.

[0441] Use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments of disclosure and does not pose a limitation on scope of disclosure unless otherwise claimed.
No language in the specification should be construed as indicating any non-claimed element as essential to practice of the disclosure.
[0442] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
[0443] In the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms may not be intended as synonyms for each other. Rather, in particular instances, "connected" or "coupled" may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other. "Coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
[0444] Unless specifically stated otherwise, it may be appreciated that throughout the specification terms such as "processing," "computing," "calculating," "determining," or the like, refer to action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
[0445] In a similar manner, the term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory and transforms that electronic data into other electronic data that may be stored in registers and/or memory. As non-limiting examples, a "processor" may be a CPU or a GPU. A "computing platform" may comprise one or more processors.
As used herein, "software" processes may include, in at least one embodiment, software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents. Also, each process may refer to multiple processes, for carrying out instructions in sequence or in parallel, continuously or intermittently. The terms "system" and "method" are used herein interchangeably insofar as a system may embody one or more methods and methods may be considered a system.
[0446] In the present document, references may be made to obtaining, acquiring, receiving, or inputting analog or digital data into a subsystem, computer system, or computer-implemented machine. In at least one embodiment, the process of obtaining, acquiring, receiving, or inputting analog and digital data can be accomplished in a variety of ways, such as by receiving data as a parameter of a function call or a call to an application programming interface. In some implementations, the process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a serial or parallel interface. In another implementation, the process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a computer network from a providing entity to an acquiring entity. References may also be made to providing, outputting, transmitting, sending, or presenting analog or digital data.
In various implementations, the process of providing, outputting, transmitting, sending, or presenting analog or digital data can be accomplished by transferring data as an input or output parameter of a function call, a parameter of an application programming interface, or an interprocess communication mechanism.
[0447] Although the discussion above sets forth example implementations of described techniques, other architectures may be used to implement described functionality, and are intended to be within the scope of this disclosure. Furthermore, although specific distributions of responsibilities are defined above for purposes of discussion, various functions and responsibilities might be distributed and divided in different ways, depending on circumstances.
[0448] Furthermore, although subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that subject matter claimed in the appended claims is not necessarily limited to specific features or acts described. Rather, specific features and acts are disclosed as exemplary forms of implementing the claims.
[0449] It will be understood that the present invention has been described above purely by way of example, and modifications of detail can be made within the scope of the invention.
[0450] Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
Methods, systems and devices are provided for blocking spoiler content from being presented by a content presenting device to a user of a mobile computing device. The content presenting device and the mobile computing device may communicate using a networking framework. One or more spoiler alert events received by the content presenting device via the communication networking framework from the mobile computing device include information associated with content that has not been viewed by a user of the mobile computing device. The information associated with the content that has not been viewed is compared with the content to be presented. It may be determined whether the content to be presented by the content presenting device includes the spoiler content, and, if so, the presentation of the spoiler content by the content presenting device is restricted.
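The flow summarized above — receiving spoiler alert events from a mobile computing device, comparing their associated information against content about to be presented, and restricting matching content — can be sketched in a few lines. This is an illustrative sketch only, not part of the disclosure; the class name `SpoilerAlertEvent`, its fields, and the substring-matching comparison are all hypothetical simplifications of the claimed comparison step.

```python
from dataclasses import dataclass, field

@dataclass
class SpoilerAlertEvent:
    # Hypothetical metadata describing content the mobile device's
    # user has not yet viewed (cf. name/keywords in claim 4).
    content_name: str
    keywords: set = field(default_factory=set)

def includes_spoiler(event, presented_text):
    # Compare the event's information against the content to be presented
    # by scanning for the content name or any associated keyword.
    text = presented_text.lower()
    if event.content_name.lower() in text:
        return True
    return any(kw.lower() in text for kw in event.keywords)

def filter_presentation(events, presented_text, redaction="[spoiler restricted]"):
    # Restrict the presentation (here, by redacting it) when any
    # received spoiler alert event matches the content to be presented.
    if any(includes_spoiler(e, presented_text) for e in events):
        return redaction
    return presented_text
```

For example, with `events = [SpoilerAlertEvent("World Cup Final", {"penalty shootout"})]`, a ticker line mentioning the penalty shootout would be redacted, while an unrelated weather report would pass through unchanged.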
CLAIMS
What is claimed is:
1. In a communication networking framework having a content presenting device and a mobile computing device configured to communicate with each other, a method for blocking spoiler content from being presented by the content presenting device comprising:
receiving one or more spoiler alert events from the mobile computing device, wherein each of the one or more spoiler alert events comprises information associated with content that has not been viewed by a user of the mobile computing device;
comparing the information with content to be presented by the content presenting device;
determining whether the content to be presented by the content presenting device includes the spoiler content based on the comparison; and
restricting a presentation of the spoiler content in the content to be presented if the content to be presented includes the spoiler content.
2. The method of claim 1, further comprising:
determining whether the content associated with the one or more spoiler alert events has been viewed by the user of the mobile computing device; and
clearing the one or more spoiler alert events when the content has been viewed by the user of the mobile computing device based on the determination.
3. The method of claim 1, wherein receiving one or more spoiler alert events from the mobile computing device comprises:
discovering a presence of the mobile computing device using the communication networking framework;
receiving from the mobile computing device the one or more spoiler alert events and the information about the content that has not been viewed; and
storing the one or more spoiler alert events and the information about the content that has not been viewed in a storage location accessible to the content presenting device.
4.
The method of claim 1, wherein:
the information associated with the content that has not been viewed comprises one or more of a name of the content, a broadcast time of the content, a blocking release time of the content, and one or more keywords associated with the content; and
comparing the information with content to be presented by the content presenting device comprises scanning the content to be presented for one or more of the name of the content, the broadcast time of the content, the blocking release time of the content, and the one or more keywords associated with the content.
5. The method of claim 1, wherein restricting the presentation of the spoiler content in the content to be presented if the content to be presented includes the spoiler content comprises restricting the presentation of the spoiler content based on one or more rules.
6. The method of claim 5, wherein the one or more rules comprise a rule restricting presentation of the spoiler content based on a location zone of the mobile computing device, wherein:
when the location zone comprises a first location zone closest to the content presenting device, a video portion and an audio portion of the spoiler content are fully restricted;
when the location zone comprises a second location zone farther from the content presenting device than the first location zone, the video portion and the audio portion of the spoiler content are partially restricted; and
when the location zone comprises a third location zone farther from the content presenting device than the first location zone and the second location zone, the video portion of the spoiler content is not restricted and the audio portion of the spoiler content is restricted.
7.
The method of claim 5, wherein the one or more rules comprise one or more of:
a rule restricting a presentation of the spoiler content based on a majority count of mobile computing devices providing spoiler alert events for the same content compared to a total count of a plurality of mobile computing devices in proximity to the content presenting device;
a rule restricting a presentation of the spoiler content based on a relative weight of a first one of the one or more spoiler alert events for a first one of the plurality of mobile computing devices in proximity to the content presenting device compared to a relative weight of a second one of the one or more spoiler alert events for a second one of the plurality of mobile computing devices in proximity to the content presenting device;
a rule restricting a presentation of the spoiler content based on an age of the one or more spoiler alert events, wherein the presentation of the spoiler content is not restricted when the age of the one or more spoiler alert events is older than a threshold age;
a rule restricting a presentation of the spoiler content based on a content type of the content to be presented by the content presenting device;
a rule restricting a presentation of the spoiler content based on a maximum blocking time, wherein the presentation of the spoiler content is not restricted after the maximum blocking time expires;
a rule restricting a presentation of the spoiler content based on a paid subscription service, wherein the presentation of the spoiler content associated with one of the one or more spoiler alert events for a given mobile computing device is restricted when a subscription fee associated with the given mobile computing device has been paid;
a rule enabling a presentation of the spoiler content when the presentation of the spoiler content is otherwise restricted by one or more other rules, and providing a spoiler alert indication that indicates to the given mobile computing device that the presentation of the spoiler content will not be restricted; and
a rule that presents the content that has not been viewed based on determining whether a sufficient number of the plurality of mobile computing devices that have registered one of the one or more spoiler alert events for the same content are in proximity to the presenting device and when a sufficient number of the plurality of mobile computing devices are present, presenting an offer to be displayed on each of the plurality of mobile computing devices to present the content that has not been viewed, and presenting the content that has not been viewed when a sufficient number of the plurality of mobile computing devices accept the offer.
8. A content presenting device, comprising:
a transceiver; and
a processor coupled to the transceiver, wherein the processor and the transceiver are configured to communicate with other devices using a communication networking framework, the processor configured with processor executable instructions to perform operations comprising:
receiving one or more spoiler alert events from a mobile computing device, wherein each of the one or more spoiler alert events comprises information associated with spoiler content, wherein the spoiler content comprises content that has not yet been viewed by a user of the mobile computing device;
comparing the information with content to be presented by the content presenting device;
determining whether the content to be presented by the content presenting device includes the spoiler content based on the comparison; and
restricting a presentation of the spoiler content in the content to be presented if the content to be presented includes the spoiler content.
9.
The content presenting device of claim 8, wherein the processor is configured with processor executable instructions to perform operations further comprising:
determining whether the content associated with the one or more spoiler alert events has been viewed by the user of the mobile computing device; and
clearing the one or more spoiler alert events when the content has been viewed by the user of the mobile computing device based on the determination.
10. The content presenting device of claim 8, wherein the processor is configured with processor executable instructions to perform operations such that receiving one or more spoiler alert events from the mobile computing device comprises:
discovering a presence of the mobile computing device using the communication networking framework;
receiving from the mobile computing device the one or more spoiler alert events and the information about the content that has not been viewed; and
storing the one or more spoiler alert events and the information about the content that has not been viewed in a storage location accessible to the content presenting device.
11. The content presenting device of claim 8, wherein:
the information associated with the content that has not been viewed comprises one or more of a name of the content, a broadcast time of the content, a blocking release time of the content, and one or more keywords associated with the content; and
the processor is configured with processor executable instructions to perform operations such that comparing the information with content to be presented by the content presenting device comprises scanning the content to be presented for one or more of the name of the content, the broadcast time of the content, the blocking release time of the content, and the one or more keywords associated with the content.
12.
The content presenting device of claim 8, wherein the processor is configured with processor executable instructions to perform operations such that restricting the presentation of the spoiler content in the content to be presented if the content to be presented includes the spoiler content comprises restricting the presentation of the spoiler content based on one or more rules.
13. The content presenting device of claim 12, wherein the one or more rules comprise a rule restricting the presentation of the spoiler content based on a location zone of the mobile computing device, wherein:
when the location zone comprises a first location zone closest to the content presenting device, a video portion and an audio portion of the spoiler content are fully restricted;
when the location zone comprises a second location zone farther from the content presenting device than the first location zone, the video portion and the audio portion of the spoiler content are partially restricted; and
when the location zone comprises a third location zone farther from the content presenting device than the first location zone and the second location zone, the video portion of the spoiler content is not restricted and the audio portion of the spoiler content is restricted.
14.
The content presenting device of claim 12, wherein the one or more rules comprise one or more of:
a rule restricting a presentation of the spoiler content based on a majority count of mobile computing devices providing spoiler alert events for the same content compared to a total count of a plurality of mobile computing devices in proximity to the content presenting device;
a rule restricting a presentation of the spoiler content based on a relative weight of a first one of the one or more spoiler alert events for a first mobile computing device in proximity to the content presenting device compared to a relative weight of a second one of the one or more spoiler alert events for a second mobile computing device in proximity to the content presenting device;
a rule restricting a presentation of the spoiler content based on an age of the one or more spoiler alert events, wherein the presentation of the spoiler content is not restricted when the age of the one or more spoiler alert events is older than a threshold age;
a rule restricting a presentation of the spoiler content based on a content type of the content to be presented by the content presenting device;
a rule restricting a presentation of the spoiler content based on a maximum blocking time, wherein the presentation of the spoiler content is not restricted after the maximum blocking time expires;
a rule restricting a presentation of the spoiler content based on a paid subscription service, wherein the presentation of the spoiler content associated with one of the one or more spoiler alert events for a given mobile computing device is restricted when a subscription fee associated with the given mobile computing device has been paid;
a rule enabling a presentation of the spoiler content when the presentation of the spoiler content is otherwise restricted by one or more other rules, and providing a spoiler alert indication that indicates to the given mobile computing device that the presentation of the spoiler content will not be restricted; and
a rule that presents the content that has not been viewed based on determining whether a sufficient number of the plurality of mobile computing devices that have registered one of the one or more spoiler alert events for the same content are in proximity to the presenting device and when a sufficient number of the plurality of mobile computing devices are present, presenting an offer to be displayed on each of the plurality of mobile computing devices to present the content that has not been viewed, and presenting the content that has not been viewed when a sufficient number of the plurality of mobile computing devices accept the offer.
15. A content presenting device configured to block spoiler content from being presented to a user of a mobile computing device in proximity to the content presenting device, the content presenting device and the mobile computing device configured to communicate with each other using a communication networking framework, the content presenting device comprising:
means for receiving one or more spoiler alert events from the mobile computing device, wherein each of the one or more spoiler alert events comprises information associated with content that has not been viewed by the user of the mobile computing device;
means for comparing the information with content to be presented by the content presenting device;
means for determining whether the content to be presented by the content presenting device includes the spoiler content based on the comparison; and
means for restricting a presentation of the spoiler content in the content to be presented if the content to be presented includes the spoiler content.
16.
The content presenting device of claim 15, further comprising:
means for determining whether the content associated with the one or more spoiler alert events has been viewed by the user of the mobile computing device; and
means for clearing the one or more spoiler alert events when the content has been viewed by the user of the mobile computing device based on the determination.
17. The content presenting device of claim 15, wherein means for receiving one or more spoiler alert events from the mobile computing device comprises:
means for discovering a presence of the mobile computing device using the communication networking framework;
means for receiving from the mobile computing device the one or more spoiler alert events and the information about the content that has not been viewed; and
means for storing the one or more spoiler alert events and the information about the content that has not been viewed in a storage location accessible to the content presenting device.
18. The content presenting device of claim 15, wherein:
the information associated with the content that has not been viewed comprises one or more of a name of the content, a broadcast time of the content, a blocking release time of the content, and one or more keywords associated with the content; and
means for comparing the information with content to be presented by the content presenting device comprises means for scanning the content to be presented for one or more of the name of the content, the broadcast time of the content, the blocking release time of the content, and the one or more keywords associated with the content.
19. The content presenting device of claim 15, wherein means for restricting the presentation of the spoiler content in the content to be presented if the content to be presented includes the spoiler content comprises means for restricting the presentation of the spoiler content based on one or more rules.
20.
The content presenting device of claim 19, wherein the one or more rules comprise a rule restricting the presentation of the spoiler content based on a location zone of the mobile computing device, wherein:
when the location zone comprises a first location zone closest to the content presenting device, a video portion and an audio portion of the spoiler content are fully restricted;
when the location zone comprises a second location zone farther from the content presenting device than the first location zone, the video portion and the audio portion of the spoiler content are partially restricted; and
when the location zone comprises a third location zone farther from the content presenting device than the first location zone and the second location zone, the video portion of the spoiler content is not restricted and the audio portion of the spoiler content is restricted.
21. The content presenting device of claim 19, wherein the one or more rules comprise one or more of:
a rule restricting a presentation of the spoiler content based on a majority count of mobile computing devices providing spoiler alert events for the same content compared to a total count of a plurality of mobile computing devices in proximity to the content presenting device;
a rule restricting a presentation of the spoiler content based on a relative weight of a first one of the one or more spoiler alert events for a first mobile computing device in proximity to the content presenting device compared to a relative weight of a second one of the one or more spoiler alert events for a second mobile computing device in proximity to the content presenting device;
a rule restricting a presentation of the spoiler content based on an age of the one or more spoiler alert events, wherein the presentation of the spoiler content is not restricted when the age of the one or more spoiler alert events is older than a threshold age;
a rule restricting a presentation of the spoiler content based on a content type of the content to be presented by the content presenting device;
a rule restricting a presentation of the spoiler content based on a maximum blocking time, wherein the presentation of the spoiler content is not restricted after the maximum blocking time expires;
a rule restricting a presentation of the spoiler content based on a paid subscription service, wherein the presentation of the spoiler content associated with one of the one or more spoiler alert events for a given mobile computing device is restricted when a subscription fee associated with the given mobile computing device has been paid;
a rule enabling a presentation of the spoiler content when the presentation of the spoiler content is otherwise restricted by one or more other rules, and providing a spoiler alert indication that indicates to the given mobile computing device that the presentation of the spoiler content will not be restricted; and
a rule that presents the content that has not been viewed based on determining whether a sufficient number of the plurality of mobile computing devices that have registered one of the one or more spoiler alert events for the same content are in proximity to the presenting device and when a sufficient number of the plurality of mobile computing devices are present, presenting an offer to be displayed on each of the plurality of mobile computing devices to present the content that has not been viewed, and presenting the content that has not been viewed when a sufficient number of the plurality of mobile computing devices accept the offer.
22.
A non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a content presenting device to perform operations comprising:
receiving one or more spoiler alert events from a mobile computing device using a communication networking framework, wherein each of the one or more spoiler alert events comprises information associated with spoiler content, wherein the spoiler content comprises content that has not yet been viewed by a user of the mobile computing device;
comparing the information with content to be presented by the content presenting device;
determining whether the content to be presented by the content presenting device includes the spoiler content based on the comparison; and
restricting a presentation of the spoiler content in the content to be presented if the content to be presented includes the spoiler content.
23. The non-transitory processor-readable storage medium of claim 22, wherein the processor-executable instructions are configured to cause the processor of the content presenting device to perform operations further comprising:
determining whether the content associated with the one or more spoiler alert events has been viewed by the user of the mobile computing device; and
clearing the one or more spoiler alert events when the content has been viewed by the user of the mobile computing device based on the determination.
24.
The non-transitory processor-readable storage medium of claim 22, wherein the processor-executable instructions are configured to cause the processor of the content presenting device to perform operations such that receiving one or more spoiler alert events from the mobile computing device using the communication networking framework comprises:
discovering a presence of the mobile computing device using the communication networking framework;
receiving from the mobile computing device the one or more spoiler alert events and the information about the content that has not been viewed using the communication networking framework; and
storing the one or more spoiler alert events and the information about the content that has not been viewed in a storage location accessible to the content presenting device using the communication networking framework.
25. The non-transitory processor-readable storage medium of claim 22, wherein:
the information associated with the content that has not been viewed comprises one or more of a name of the content, a broadcast time of the content, a blocking release time of the content, and one or more keywords associated with the content; and
the processor-executable instructions are configured to cause the processor of the content presenting device to perform operations such that comparing the information with content to be presented by the content presenting device comprises scanning the content to be presented for one or more of the name of the content, the broadcast time of the content, the blocking release time of the content, and the one or more keywords associated with the content.
26.
The non-transitory processor-readable storage medium of claim 22, wherein the processor-executable instructions are configured to cause the processor of the content presenting device to perform operations such that restricting the presentation of the spoiler content in the content to be presented by the content presenting device based on determining that the content to be presented includes the spoiler content comprises restricting the presentation of the spoiler content based on one or more rules.
27. The non-transitory processor-readable storage medium of claim 26, wherein the one or more rules comprise a rule restricting the presentation of the spoiler content based on a location zone of the mobile computing device, wherein:
when the location zone comprises a first location zone closest to the content presenting device, a video portion and an audio portion of the spoiler content are fully restricted;
when the location zone comprises a second location zone farther from the content presenting device than the first location zone, the video portion and the audio portion of the spoiler content are partially restricted; and
when the location zone comprises a third location zone farther from the content presenting device than the first location zone and the second location zone, the video portion of the spoiler content is not restricted and the audio portion of the spoiler content is restricted.
28.
The non-transitory processor-readable storage medium of claim 26, wherein the one or more rules comprise one or more of:
a rule restricting a presentation of the spoiler content based on a majority count of mobile computing devices providing spoiler alert events for the same content compared to a total count of a plurality of mobile computing devices in proximity to the content presenting device;
a rule restricting a presentation of the spoiler content based on a relative weight of a first one of the one or more spoiler alert events for a first mobile computing device in proximity to the content presenting device compared to a relative weight of a second one of the one or more spoiler alert events for a second mobile computing device in proximity to the content presenting device;
a rule restricting a presentation of the spoiler content based on an age of the one or more spoiler alert events, wherein the presentation of the spoiler content is not restricted when the age of the one or more spoiler alert events is older than a threshold age;
a rule restricting a presentation of the spoiler content based on a content type of the content to be presented by the content presenting device;
a rule restricting a presentation of the spoiler content based on a maximum blocking time, wherein the presentation of the spoiler content is not restricted after the maximum blocking time expires;
a rule restricting a presentation of the spoiler content based on a paid subscription service, wherein the presentation of the spoiler content associated with one of the one or more spoiler alert events for a given mobile computing device is restricted when a subscription fee associated with the given mobile computing device has been paid;
a rule enabling a presentation of the spoiler content when the presentation of the spoiler content is otherwise restricted by one or more other rules, and providing a spoiler alert indication that indicates to the given mobile computing device that the presentation of the spoiler content will not be restricted; and
a rule that presents the content that has not been viewed based on determining whether a sufficient number of the plurality of mobile computing devices that have registered one of the one or more spoiler alert events for the same content are in proximity to the presenting device and when a sufficient number of the plurality of mobile computing devices are present, presenting an offer to be displayed on each of the plurality of mobile computing devices to present the content that has not been viewed, and presenting the content that has not been viewed when a sufficient number of the plurality of mobile computing devices accept the offer.
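The location-zone rule recited in claims 6, 13, 20, and 27 maps the mobile computing device's distance zone to a restriction level: full restriction of video and audio in the closest zone, partial restriction in the second zone, and audio-only restriction in the third. A minimal sketch of that mapping follows; the `Restriction` type, zone numbering, and function name are hypothetical illustrations, not part of the claims.

```python
from enum import Enum

class Restriction(Enum):
    # Hypothetical restriction levels corresponding to the claimed zones.
    FULL = "video and audio fully restricted"
    PARTIAL = "video and audio partially restricted"
    AUDIO_ONLY = "audio restricted, video presented"
    NONE = "no restriction"

def zone_restriction(zone):
    # Zone 1 is closest to the content presenting device; higher-numbered
    # zones are progressively farther away, per the claimed rule.
    if zone == 1:
        return Restriction.FULL
    if zone == 2:
        return Restriction.PARTIAL
    if zone == 3:
        return Restriction.AUDIO_ONLY
    return Restriction.NONE
```

Under this sketch, a device detected in the first zone would have both portions of the spoiler content blocked, while a device beyond the third zone would see no restriction.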
TITLE
Methods, Systems and Devices for Spoiler Alert and Prevention Using Networking Framework
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority to U.S. Provisional Patent Application No. 62/017,882, entitled "Methods, Systems and Devices for Spoiler Alert and Prevention Using Networking Framework," filed on June 27, 2014, the entire contents of which are incorporated herein by reference.
BACKGROUND
[0002] As technological capabilities have evolved and users have become more sophisticated and mobile, the culture of stationary, fixed presentation schedule media consumption is rapidly evolving. The use of digital recording systems (e.g., DVR, TiVo, etc.) to record, or "time-shift," content is on the increase. In many cases, a user will time shift when broadcast times conflict with work or family obligations, or other schedule constraints and priorities. A user may consolidate viewing into a few sessions where all or part of the time shifted content is viewed at a later time. In some cases, though time-shifted, the user may never actually consume the content that is recorded.
[0003] A user generally time-shifts with an expectation that the same "user experience" will be enjoyed during time-shifted viewing as would be enjoyed with an actual scheduled broadcast. However, as media content is intertwined with the user's larger day to day cultural experience, it is likely that aspects of the time shifted content will be revealed with the passage of time. Thus, the user experience is vulnerable to being "spoiled" for big events, like live sporting events (e.g., Olympics, etc.), because the outcome (winners, scores, plot twists) may be reported on media such as broadcast media, internet media, and so on, by journalists before the user can view the recorded program.
[0004] Journalists do not account for time-shifting when deciding on programming content, and most of a journalist's audience is probably not time-shifting programs; therefore, journalists frequently provide sports scores, comment on popular programs and provide other content that may spoil the experience of those time-shifting particular programs. In other words, programming content is arranged based on an assumption that all content is watched when it is broadcast. It would be difficult for broadcasters to accommodate users with time-shifted content. The challenge for broadcasters becomes more evident when considering that certain programming is dedicated to reporting on, evaluating, or at least mentioning aspects of previously broadcast content, including previously played sports events, previously run movies, television series, and so on. Examples include the broadcast of sporting event results (e.g., football game scores) that appear in other sports-oriented broadcasts such as end-of-day sports wrap-ups. Further examples include shows that provide reviews of movies or TV series. Such shows may provide detailed critical commentary on movies or TV series that recently opened or were broadcast. Even social media contacts for a particular individual cannot always know when a user has recorded content for later viewing and may send messages to the user that include spoiler content.
[0005] In some instances, conventional spoiler alerts may be provided, such as when broadcasters report on sporting events taking place in distant time zones (e.g., the Olympics). The conventional "spoiler alert" may involve the voluntary manual placement of an audible (e.g., spoken) or textual warning. The conventional voluntary spoiler alert may be manually placed at the beginning of a critical review of a movie, a social media post, or an oral message from a broadcast announcer.
However, such conventional "spoiler alerts" are often ineffective at providing enough notice for the user to leave the room, mute the volume, and so on. Further, conventional spoiler alerts do not take into account the need to provide such an alert or to whom such an alert should be provided, and may not cover all potential spoiler content.
SUMMARY
[0006] The various embodiments include, in a communication networking framework having a content presenting device and a mobile computing device configured to communicate with each other, methods and devices implementing the methods for blocking spoiler content from being presented by the content presenting device. An embodiment method may include operations performed by the content presenting device. The operations may include receiving one or more spoiler alert events from the mobile computing device, each of the one or more spoiler alert events including information associated with content that has not been viewed by a user of the mobile computing device. The operations may further include comparing the information with content to be presented by the content presenting device.
The operations may further include determining whether the content to be presented by the content presenting device includes the spoiler content based on the comparison, and restricting a presentation of the spoiler content in the content to be presented by the content presenting device based on determining that the content to be presented includes the spoiler content.
[0007] Operations of an embodiment method may further include determining whether the content associated with the one or more spoiler alert events has been viewed by the user of the mobile computing device, and clearing the one or more spoiler alert events associated with the content that has not been viewed by the user of the mobile computing device when the content has been viewed by the user of the mobile computing device based on the determination.
[0008] In an embodiment method, receiving one or more spoiler alert events from the mobile computing device to the content presenting device using the communication networking framework may include discovering a presence of the mobile computing device using the communication networking framework, receiving from the mobile computing device the one or more spoiler alert events and the information about the content that has not been viewed, and storing the one or more spoiler alert events and the information about the content that has not been viewed in a storage location accessible to the content presenting device.
[0009] In an embodiment method, the information associated with the content that has not been viewed may include one or more of a name of the content; a broadcast time of the content; a blocking release time of the content; and one or more keywords associated with the content.
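The event fields enumerated in paragraph [0009] can be modeled as a simple record. The following is a minimal sketch in Python; the field names (`content_name`, `broadcast_time`, `blocking_release_time`, `keywords`) are illustrative assumptions and not part of any published API:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class SpoilerAlertEvent:
    """Information a mobile device registers about unviewed, time-shifted content."""
    content_name: str                # name of the content (e.g., a program title)
    broadcast_time: datetime         # when the content originally aired
    blocking_release_time: datetime  # when spoiler blocking should lapse
    keywords: List[str] = field(default_factory=list)  # terms that would reveal spoilers

# Example registration for a recorded sporting event (hypothetical values)
event = SpoilerAlertEvent(
    content_name="UCLA vs. Notre Dame",
    broadcast_time=datetime(2014, 6, 27, 19, 0),
    blocking_release_time=datetime(2014, 6, 29, 19, 0),
    keywords=["UCLA", "Notre Dame", "final score"],
)
```

A content presenting device would compare these fields against the content it is about to present, as the surrounding paragraphs describe.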
Further in the embodiment method, comparing the information with content to be presented by the content presenting device may include scanning the content to be presented for one or more of the name of the content; the broadcast time of the content; the blocking release time of the content; and the one or more keywords associated with the content.
[0010] Further in the embodiment method, restricting the presentation of the spoiler content in the content to be presented by the content presenting device based on determining that the content to be presented includes the spoiler content may include restricting the presentation of the spoiler content based on one or more rules.
[0011] Further in an embodiment method, the one or more rules may include a rule restricting a presentation of the spoiler content based on a location zone of the mobile computing device such that: when the location zone includes a first location zone closest to the content presenting device, a video portion and an audio portion of the spoiler content are fully restricted; when the location zone includes a second location zone farther from the content presenting device than the first location zone, the video portion and the audio portion of the spoiler content are partially restricted; and when the location zone includes a third location zone farther from the content presenting device than the first location zone and the second location zone, the video portion of the spoiler content is not restricted and the audio portion of the spoiler content is restricted.
[0012] Further in an embodiment method, the one or more rules may include one or more of the following: a rule restricting a presentation of the spoiler content based on a majority count of mobile computing devices providing spoiler alert events for the same content compared to a total count of a plurality of mobile computing devices in proximity to the content presenting device; a rule restricting a presentation of the spoiler content based on
a relative weight of a first one of the one or more spoiler alert events for a first one of the plurality of mobile computing devices in proximity to the content presenting device compared to a relative weight of a second one of the one or more spoiler alert events for a second one of the plurality of mobile computing devices in proximity to the content presenting device; a rule restricting a presentation of the spoiler content based on an age of the one or more spoiler alert events, wherein the presentation of the spoiler content is not restricted when the age of the one or more spoiler alert events is older than a threshold age; a rule restricting a presentation of the spoiler content based on a content type of the content to be presented by the content presenting device; a rule restricting a presentation of the spoiler content based on a maximum blocking time, wherein the presentation of the spoiler content is not restricted after the maximum blocking time expires; a rule restricting a presentation of the spoiler content based on a paid subscription service, wherein the presentation of the spoiler content associated with one of the one or more spoiler alert events for a given mobile computing device is restricted when a subscription fee associated with the given mobile computing device has been paid; a rule enabling a presentation of the spoiler content when the presentation of the spoiler content is otherwise restricted by one or more other rules and providing a spoiler alert indication that indicates to the given mobile computing device that the presentation of the spoiler content will not be restricted; and a rule that presents the content that has not been viewed based on determining whether a sufficient number of mobile computing devices that have registered one of the one or more spoiler alert events for the same content are in proximity to the presenting device and when a sufficient number of the plurality of mobile computing devices are present, 
presenting an offer to be displayed on each of the plurality of mobile computing devices to present the content that has not been viewed, and presenting the content that has not been viewed when a sufficient number of the plurality of mobile computing devices accept the offer.
[0013] Further embodiments include a content presenting device having a transceiver and a processor configured with processor-executable instructions to perform operations of the embodiment methods described above. In some embodiments, a content presenting device may include means for performing operations of the embodiment methods described above.
[0014] Further embodiments may include a non-transitory processor-readable storage medium on which are stored processor-executable instructions to perform operations of the embodiment methods described above.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments of the invention, and together with the general description given above and the detailed description given below, serve to explain the features of the invention.
[0016] FIG. 1A is a system diagram illustrating components of a spoiler alert system suitable for use in the various embodiments.
[0017] FIG. 1B is a diagram illustrating program content time shifting suitable for use in the various embodiments.
[0018] FIG. 1C is a graph illustrating program content time shifting suitable for use in the various embodiments.
[0019] FIG. 1D is a graph further illustrating program content time shifting suitable for use in the various embodiments.
[0020] FIG. 2A is a diagram illustrating a content presenting device and mobile computing devices positioned in various alert zones and blocking zones in the various embodiments.
[0021] FIG.
2B is a diagram illustrating additional embodiment content presenting devices and embodiment mobile computing devices in public mass consumption instances in the various embodiments.
[0022] FIG. 3 is a message flow diagram illustrating networking framework messages between content presenting devices and mobile computing devices for device discovery and Spoiler Alert Event information in the various embodiments.
[0023] FIG. 4A is a process flow diagram illustrating an embodiment method for consumption device discovery by a content presenting device and spoiler alert and content blocking.
[0024] FIG. 4B is a process flow diagram illustrating an embodiment method for applying zone-based rules for spoiler alert and content blocking.
[0025] FIG. 4C is a process flow diagram illustrating an embodiment method for generating, updating and clearing Spoiler Alert Event listings for a mobile computing device.
[0026] FIG. 4D is a process flow diagram illustrating an embodiment method for generating, updating and clearing Spoiler Alert Event listings for a content presenting device.
[0027] FIG. 5A - FIG. 5I are process flow diagrams illustrating embodiment methods for the application of spoiler alert and content blocking rules.
[0028] FIG. 5J is a process flow diagram illustrating an embodiment method for the conditional presentation of recorded content associated with the spoiler alert.
[0029] FIG. 6 is a component diagram of an example mobile computing device suitable for use with the various embodiments.
[0030] FIG. 7 is a component diagram of an example mobile computing device suitable for use with the various embodiments.
DETAILED DESCRIPTION
[0031] The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims.
[0032] The terms "networking framework" and "communication networking framework" as used herein refer interchangeably to a communications framework, an applications framework, and organized systems of communication and application-interaction protocols and commands for facilitating device-to-device (e.g., peer-to-peer or "P2P") and application-to-application communications, interaction and control. A networking framework may be implemented as a collection of Application Programming Interfaces (APIs), Software Development Kits (SDKs), and other application or system software that collectively provide standard mechanisms and interface definitions to enable interfacing between controlling and controlled devices coupled through a communication network that may be an ad-hoc network. The various APIs and SDKs may provide high level access (e.g., from an application layer) to functions that would normally be accessed or controlled at a lower layer in a software architecture. Such functions may include, but are not limited to, ad-hoc networking, security, pairing, device discovery, service discovery, platform transparency, radio access control, message formatting, message transmission, message reception and decoding, and so on. An example of a comprehensive networking framework is the AllJoyn® Core Framework initially developed by Qualcomm Innovation Center and presently hosted by the Allseen Alliance.
[0033] An AllJoyn® Core Framework may include a set of service frameworks that are simple and enable users to interact with nearby similarly configured computing devices.
An example of a set of service frameworks may include: Device Information & Configuration - the device broadcasts information such as device type, manufacturer and serial numbers, and also allows the user to assign a name and password to the device; Onboarding - allows objects to be easily connected (e.g., via an intermediary such as an access point) to the user's network; Notifications - objects may broadcast and receive basic communications (e.g., text, image/video, audio, control, status); Control Panel - a control device such as a smartphone or tablet may control another object via a graphical interface (e.g., GUI or UI); and Audio - audio source objects may stream to selected AllJoyn®-enabled speakers, audio receivers and other audio playback smart objects.
[0034] The various embodiments enable distributed, dispersed or ad-hoc networks among various communication devices to work in a collaborative manner in order to avoid displaying or playing content that would "spoil" the experience of a user who has recorded one or more programming events, regardless of the user's location. The embodiments may utilize a networking framework (e.g., an AllJoyn® framework) that enables various communication devices to inform other devices in an ad hoc manner about programs that could be spoiled by broadcasts and perform other functions in order to prevent communication devices from displaying or presenting audio spoilers. Spoilers are avoided by a mobile computing device of the user leveraging distributed communication networks to identify to content displaying devices (e.g., smart televisions, radios, tickertape marquees, computers with large displays, etc.) an ID, programming information, or a characterization of the recorded program. The content displaying devices may monitor current programming for potential spoiler events in a variety of ways. The content being displayed may be parsed for content related to the reported program in order to determine whether the content contains spoilers.
In some embodiments, the currently displayed content may have tags associated with embedded potential spoiler content that can be parsed and read by the content displaying device. In some embodiments, data associated with the current broadcast may be buffered to facilitate parsing. For example, the audio data may be parsed by the content displaying devices to determine whether spoiler content is present. In some embodiments, social media posts, SMS text/audio/video messages, and emails associated with the user's computing device may be blocked when they contain spoiler content.
[0035] As discussed, a spoiler may be considered to be content that discloses revealing information (e.g., a score, a movie ending, a critical detail) about the time-shifted event, having the effect of "spoiling" the expected user experience of the user recording the event. Spoilers may appear in many forms, including on other media devices that the user may encounter (e.g., an office break room television), and within the visual or audible content that appears within the user's view or hearing. Spoilers may be in the form of a ticker on a news broadcast, a "Tweet", a tickertape marquee, a Facebook post, an advertisement, an SMS message, an email, an audio broadcast, or virtually any media transmission.
[0036] The term "Spoiler Alert Event" is used herein to refer to an event that indicates the existence of time shifted content. A Spoiler Alert Event may include issuing a command to record content using a digital television recorder (DTR) or other media recording device. Other examples of spoiler events may include an email or message broadcast to social media contacts indicating that a command to record content has been given or that content has been recorded. In various embodiments, Spoiler Alert Events may be "registered" with a networking framework in a number of ways that may include providing a list of all time-shifted content that has not been viewed.
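The parsing described above — scanning buffered text such as tickers, closed captions, or intercepted messages for registered spoiler terms — can be sketched as a case-insensitive keyword match. This is a simplified illustration, not the disclosure's actual parsing logic:

```python
from typing import List

def contains_spoiler(presented_text: str, keywords: List[str]) -> bool:
    """Return True if any registered spoiler keyword appears in the text.

    A real implementation might instead read embedded content tags,
    run OCR on a display buffer, or parse buffered audio data."""
    text = presented_text.lower()
    return any(keyword.lower() in text for keyword in keywords)

ticker = "BREAKING: UCLA upsets Notre Dame in overtime"
print(contains_spoiler(ticker, ["Notre Dame", "final score"]))        # True
print(contains_spoiler("Weather at 11: sunny and mild", ["Notre Dame"]))  # False
```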
A Spoiler Alert Event may contain information about the time-shifted content, including the time and date that the event will air, and information that allows a content presenting device or a system or networking framework component to identify content associated with the current broadcast that may be relevant to the time-shifted content. When a mobile communication device registers with a networking framework, such as an AllJoyn® framework, the list of time-shifted content and relevant information about the time-shifted content may form the basis for providing spoiler alerts and content blocking in the various embodiments.
[0037] The term "spoiler alert" is used herein to refer to a message that is displayed on a content presenting device, such as a television screen, or on the mobile communication device of a user, such as in the form of a message, email, or other notification indicating that upcoming content that may be displayed on a content displaying device may have a spoiling effect on items associated with a particular Spoiler Alert Event (e.g., time shifted content).
[0038] The term "content blocking," which may in some instances be used together with a spoiler alert, is used to refer to the blocking of content related to time-shifted content or programming that has not yet been viewed. Content blocking may be carried out to prevent viewing of the related content, such as by a user who has registered a Spoiler Alert Event, and therefore "spoiling" of a user experience by the viewing of the content. Content blocking may include blanking of tickers, blanking of closed captioned text, muting all or portions of audio, blanking the entire display screen, blocking social media messages, blocking emails, or blanking or blocking other media presentation mechanisms that include information relevant to the time-shifted programming.
[0039] When possible, and in accordance with various blocking rules, the spoiling content may be blocked.
For example, the content presenting device or devices may mute audio and/or blank their screens when the spoiling content would otherwise be viewable or audible to a user recording a program (i.e., time-shifting the program). If the spoiling content cannot be blocked (e.g., the content presenting devices are public and the conditions of a blocking rule are not satisfied), the content presenting device may send a message (e.g., a spoiler alert) to the user's mobile device so that an audible and/or visual alert may be issued. The alert may enable the user to look away or walk away from the content presenting device.
[0040] In some embodiments, a Spoiler Alert Event may be detected or reported for a user by the user's mobile communication device (e.g., a smartphone). Other devices in the ad hoc network that may be capable of presenting potential spoiler content may be informed of the Spoiler Alert Event. Devices owned by the user (e.g., smartphone, iPad, Smart TV, etc.) and any networking framework-enabled and/or spoiler alert service-enabled devices that are in proximity of the user are able to talk to each other via an ad hoc communication network, such as a peer-to-peer networking framework (e.g., an AllJoyn framework), that is supported using WiFi, NAN (Neighborhood Area Network), LTE Direct, Bluetooth, Bluetooth Low Energy, etc. A spoiler alert application may be installed on the user's mobile communication device and on content display devices that display content or receive communications that could include spoilers (e.g., email). The spoiler alert application on the user's mobile communication device may track Spoiler Alert Events (e.g., recorded programming). The spoiler alert application on the other devices may receive the information regarding the spoiler alert condition (e.g., Spoiler Alert Event).
An application programming interface (API) call may be used to create or register the Spoiler Alert Event, signaling that an event which may require a spoiler alert has been registered.
[0041] Information about the Spoiler Alert Event, such as the program that may give rise to a spoiler alert, may be obtained from a source such as a program guide. The information may be provided in the Spoiler Alert Event. Alternatively or in addition, the Spoiler Alert Event may contain sufficient information for a receiving device to look up program guide information for the program contained in the Spoiler Alert Event. The content presenting device may further monitor currently broadcast content for spoiler related content. The content presenting device or devices may parse the information being received or information about programming that will be received. The content presenting device or devices may compare that information to the Spoiler Alert Event, and perform actions to help the user avoid being exposed to spoiler information, such as by providing spoiler alerts and/or content blocking. Content presenting devices configured with a spoiler alert application may selectively block spoiler content; when content cannot be blocked, notify the user's device to issue an alert so the user can avoid seeing/hearing content; or apply some combination of spoiler alerts and content blocking. Spoiler alert generation may also be conditioned based on the proximity of the user/device to the content presenting device such that the spoiler content may be blocked only when the user is close enough to see/hear the spoiler content.
[0042] In some embodiments, the content presenting device may establish content blocking zones based on proximity. The content presenting device may receive information identifying the mobile computing devices that are close by and the distance of each device to the content presenting device.
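The behavior described in paragraph [0041] — block when possible, otherwise alert the user's device, and act only when the user is near enough to perceive the content — can be condensed into a small decision function. The action names below are illustrative assumptions:

```python
def spoiler_action(spoiler_detected: bool, blockable: bool, user_in_range: bool) -> str:
    """Choose an action for a piece of content about to be presented.

    'block'   -> restrict the spoiler portion of the content
    'alert'   -> notify the user's mobile device so the user can look/walk away
    'present' -> no spoiler risk for nearby users; show the content as-is"""
    if not spoiler_detected or not user_in_range:
        return "present"
    return "block" if blockable else "alert"

print(spoiler_action(True, True, True))    # block
print(spoiler_action(True, False, True))   # alert
print(spoiler_action(True, True, False))   # present
```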
Mobile computing devices may be classified based on various distance zones using their determined proximity. The zones may be established based on factors related to perceptibility, such as the size of the display screen, the level of the audio, and so on. These factors may influence the ability of users within the different distance zones to be able to hear and/or see the content being presented by the content presenting device. Thus, a large display (e.g., a tickertape marquee or a large screen TV) that a person could see from a relatively long distance may have a large visual blocking distance zone and a moderate (or no) audio blocking zone, while a small television or laptop computer may have a relatively small visual blocking zone but a larger audio blocking zone because a person may not be able to make out scores or tickers from far away but may be able to hear the audio if it is turned up to a high volume.
[0043] In response to receiving content that may be associated with a registered Spoiler Alert Event, a content presenting device may perform various actions to parse or filter content to determine whether a spoiler is embedded in the content. Examples of such actions in various embodiments include: intercepting TCP/IP packets and examining the contents for spoiler text; performing optical character recognition (OCR) on graphics renderings, which may be stored in a display buffer, and blocking any spoiler text; executing a word filtering application; intercepting Twitter/Facebook/email/SMS messages and parsing for those that contain spoiler text; and so on.
[0044] In various embodiments, content blocking by content displaying devices may be performed or inhibited based on rules.
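The zone behavior described in paragraph [0011] and above — full restriction nearest the device, partial restriction farther out, audio-only restriction in the outermost zone — might be classified as follows. The distance thresholds are illustrative assumptions; in practice they would depend on screen size and audio level, as the text notes:

```python
def zone_restriction(distance_m: float,
                     zone1_m: float = 2.0,
                     zone2_m: float = 5.0,
                     zone3_m: float = 10.0) -> str:
    """Map a mobile device's distance from the content presenting device
    to a restriction level for spoiler content."""
    if distance_m <= zone1_m:
        return "full"        # first zone: video and audio fully restricted
    if distance_m <= zone2_m:
        return "partial"     # second zone: video and audio partially restricted
    if distance_m <= zone3_m:
        return "audio_only"  # third zone: audio restricted, video unrestricted
    return "none"            # out of range: no restriction

print(zone_restriction(1.0))   # full
print(zone_restriction(8.0))   # audio_only
```

A large display would widen the video-related thresholds; a loud but small device would widen only the audio-related one.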
Examples of rules may include voting rules, priority rules, time since the Spoiler Alert Event was registered, type/importance of current content, and so on, examples of which are described in greater detail hereinafter.
[0045] The various embodiments may be implemented within a variety of electronic communication environments, such as private networks, public networks, ad hoc (i.e., device-to-device) networks, or combinations of private, public and ad hoc networks. An example communication network 100 is illustrated in FIG. 1A. In an embodiment, the communication network 100 may include computing devices 140a-140f, which may be mobile computing devices. The computing devices 140a-140f may be content displaying devices and/or mobile computing devices carried by a user to a location where content is being presented by content presenting devices 120. The computing devices 140a-140f may consume various content in the form of social media posts, email messages, or in some instances, programming content itself. Those computing devices 140a-140f displaying content, which are referred to herein as content displaying devices, may block programming content presented on themselves when near a user 115 who has or is recording a program and whose mobile computing device is configured to inform content displaying devices 140a-140f of the desire to block spoiler events.
[0046] Blocking may be conducted by content displaying devices based on the presence of one or more of the computing devices 140a-140f requesting blocking of spoiler events and the registration of information that allows a content presenting device to determine the content to block. Indications may be provided to the computing devices 140a-140f about the blocked content, including spoiler alerts, as described in greater detail hereinafter.
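Several of the rules just mentioned — majority voting among nearby devices, an event-age threshold, and a maximum blocking time — can be combined in a simple evaluator. The parameter names and default thresholds below are illustrative assumptions, not values from the disclosure:

```python
def should_block(requesting_devices: int,
                 nearby_devices: int,
                 event_age_hours: float,
                 blocking_elapsed_min: float,
                 max_age_hours: float = 48.0,
                 max_block_min: float = 120.0) -> bool:
    """Apply voting, age, and maximum-blocking-time rules together.

    Blocking requires a majority of nearby devices to have registered a
    spoiler alert event for the content, the event to be fresher than the
    age threshold, and the maximum blocking time not to have expired."""
    majority = requesting_devices > nearby_devices / 2
    fresh = event_age_hours <= max_age_hours
    within_time = blocking_elapsed_min < max_block_min
    return majority and fresh and within_time

print(should_block(3, 4, 12.0, 30.0))   # True: majority, fresh, within time
print(should_block(1, 4, 12.0, 30.0))   # False: no majority
print(should_block(3, 4, 72.0, 30.0))   # False: event older than threshold
```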
The computing devices 140a-140f may be carried by a user and thus be with the user when he/she is in a position to observe content on a content presenting device, such as broadcast content including movie content, television content, audio content, and so on. The computing devices 140a-140f may be configured to receive or consume social media content, including email messages, SMS messages, Instant Messages, Facebook posts, Twitter posts, and so on. In the various embodiments, the computing devices 140a-140f may include, without limitation, a laptop computing device 140a, a desktop computing device 140b, a tablet computing device 140c, a smart phone 140d, a wireless earpiece 140e, a wireless media-capable watch 140f, and so on. The computing devices 140a-140f may be coupled wirelessly to an access point 130 through wireless links 141a-141f. In some instances, a wired link, such as wired links 142a and 142b, may be established between the laptop computing device 140a, the desktop computing device 140b and the access point 130.
[0047] In various embodiments, alternatively or in addition to content being presented on the computing devices 140a-140f, content may be presented on a content presenting device 120. The content presenting device 120 may be any device configured to present content, such as a television screen in a home or a public place, such as in a workplace break room, an electronic billboard, a mobile electronic billboard, and so on. In some instances, the content presenting device 120 may be a display device of a user that is viewable by others, such as a display in an automobile. The content presenting device 120 may be coupled to an access point 130 through a wireless link 121 and/or a wired link 122. The content presenting device 120 may further be connected to a content distribution unit 110, such as a set top box, cable box, and so on.
The content distribution unit 110 may be connected to the access point 130 through a wireless link 111 and/or a wired link 112.
[0048] In some embodiments, the content distribution unit 110 may provide content selected by a user 115 through interaction with the content distribution unit 110 using a remote control device 116, which may provide commands to the content distribution unit 110 over a control link 117. The content may be provided to the content distribution unit 110 by an independent service provider 127, such as through the access point 130. The independent service provider 127 may be a provider of subscription based content such as a telephone, cable, or internet provider, or combinations of these subscription based services. The content may be provided to the content presenting device 120 from the content distribution unit 110 through a direct link 113, which may be a wired or wireless link.
[0049] The communication network 100 may further include a server 123, such as a private server for a private portion of the communication network 100. The server 123 may be coupled to the access point 130 through a wireless link 125 and/or a wired link 126. The server 123 may have a mass storage element 124 or elements. Alternatively or in addition, the mass storage element 124 may be external to the server 123. The communication network 100 may further include a connection to a public network, such as the Internet 129, such as through the independent service provider 127. In some embodiments, the content may include content obtained through the Internet 129. The communication network 100 may further include servers 152a and 152b that may be coupled to or be integrated with one or more mass storage elements 153.
The servers 152a and 152b may be remote servers associated with providing content of the independent service provider 127, or may be remote servers associated with third party content that is accessible by the user 115 using Uniform Resource Locators (URLs). In some embodiments, the user 115 may view content on the content presenting device 120 along a content consumption line or a view 118, which may represent generally a line of sight, a line of hearing, and so on.

[0050] In the various embodiments, communication, control, and interaction between the various mobile computing devices and content displaying devices may be facilitated using a networking framework 150, such as an AllJoyn® framework. The networking framework 150 may provide platform-independent API calls that allow the computing devices to advertise presence, communicate capabilities, and give and receive control commands, status messages, and so on. For example, based on information received via the networking framework 150 from one of the computing devices 140a-140f, the content presenting device 120 may determine whether content should be blocked to avoid spoiler events.

[0051] In the various embodiments, the user 115 may program a content recording device 114 or a recording capability associated with the content distribution unit 110 through interaction with the remote control device 116 having a control link 117 with the content recording device 114. In some embodiments, the content recording device 114 may be incorporated into the content distribution unit 110. Alternatively or in addition, the content recording device 114 may be an external recording device, which may be coupled to the content distribution unit 110. For example, the content recording device 114 may receive content from the content distribution unit 110 or may be independently coupled to a source of content.
The content recording device 114 may be compatible with the networking framework 150.

[0052] In various embodiments, as illustrated in FIG. 1B, FIG. 1C, and FIG. 1D, the content distribution unit 110 and/or the content recording device 114 may be configured as a set top box, such as to provide control over channel selection for content display on the content presenting device 120. Content may be selected by a user for recording from a program guide 160 as illustrated in FIG. 1B. The program guide 160 may display several selections for viewing or recording. The program guide 160 may display an event 161a, which may be a sporting event (e.g., "UCLA vs. Notre Dame"), an event 161b, which may be a Television Series 1, and an event 161c, which may be a Television Series 2. Each of the events may be accompanied by an air time and date. For example, the event 161a may be aired at an Air Date/Time 1 163a, the event 161b may be aired at an Air Date/Time 2 163b, and the event 161c may be aired at an Air Date/Time 3 163c. Commands may be given to record the events 161a-161c, such as through interaction between the user 115 and the remote control device 116. For example, the user 115 may give a command REC1 162a to record the event 161a at the Air Date/Time 163a, the user 115 may give a command REC2 162b to record the event 161b at the Air Date/Time 163b, and the user 115 may further give a command REC3 162c to record the event 161c at the Air Date/Time 163c. The above events and recording commands are representative; additional or fewer commands for events to be recorded may be generated. However, for a Spoiler Alert Event to be generated, a command for recording at least one program event may be necessary.

[0053] As further illustrated in FIG. 1C, as commands are generated to record events, spoiler periods may be established between the time the event airs and the time the content is consumed.
In some embodiments, such as illustrated in graph A, a user may issue the command REC1 162a, the command REC2 162b, and the command REC3 162c at approximately the same command time 165 for the event 161a, the event 161b, and the event 161c. A spoiler alert time period 170a may indicate the start and end time during which spoiler alerts and content blocking may be generated for spoiler content on the mobile computing device 140 of the user 115 or on any content presenting device 120 (or devices) to which the user 115 may come into proximity. In some embodiments, the spoiler alert time period 170a may begin for all events at or near the beginning of the Air Date/Time 163a of the first recorded event, such as the event 161a. The spoiler alert time period 170a may end when the user 115 has consumed the content at a watch time 171a, when the event 161a, the event 161b, and the event 161c are watched. During the spoiler alert time period 170a, the air date and time of the other events may also transpire, such as the Air Date/Time 163b for the event 161b and the Air Date/Time 163c for the event 161c. When the spoiler alert time period 170a ends at the watch time 171a, no further spoiler alerts and content blocking may be generated for the user 115 since all time-shifted content has been consumed, or at least has begun to be consumed. In the various embodiments, spoiler alerts and content blocking may continue until watching or consumption has been completed. Alternatively, a separate spoiler alert time period 170a may begin at different times for different content, as shown by the dotted lines corresponding to the beginning of the Air Date/Time 163b for the event 161b and the Air Date/Time 163c for the event 161c.

[0054] As illustrated in FIG. 1D, commands may be generated to record events, content may be viewed, and multiple spoiler periods may be established between the time the event airs and the time the content is consumed.
In some embodiments, such as illustrated in graph B, a user may issue the command REC1 162a, the command REC2 162b, and the command REC3 162c at approximately the same command time 165 for the event 161a, the event 161b, and the event 161c. In some embodiments, recording commands may be issued at different times. A spoiler alert time period 170b may indicate the start and end time during which spoiler alerts and content blocking may be generated for spoiler content on the mobile computing device 140 of the user 115 or on any content presenting device 120 (or devices) to which the user 115 may come into proximity. The spoiler alert time period 170b may begin at or near the beginning of the Air Date/Time 163a of the first recorded event, such as the event 161a. The spoiler alert time period 170b may end when the user 115 has consumed the content at a watch time 171b, when the event 161a and the event 161b are watched. During the spoiler alert time period 170b, the air date and time of the other events may also transpire, such as the Air Date/Time 163b for the event 161b. When the spoiler alert time period 170b ends at the watch time 171b, no further spoiler alerts and content blocking may be generated for the user 115 for the event 161a and the event 161b since that content has been consumed. Alternatively, separate spoiler alert time periods 170b may begin at different times for different content, as shown by the dotted line corresponding to the beginning of the Air Date/Time 163b for the event 161b. An additional spoiler alert time period 170c may be started for the event 161c. The additional spoiler alert time period 170c may end when the user 115 has consumed, or at least has begun to consume, the content at the watch time 171c, such as when the event 161c is watched.

[0055] In some embodiments, a content presenting device, such as the content presenting device 220 illustrated in FIG. 2A, may be in a setting such as a public or semi-public (e.g., workplace) area.
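The spoiler alert time periods described above can be modeled as per-event windows, each opening at the event's air time and closing when the user begins watching the recording. The following Python sketch illustrates this model; the function names and the specific dates are illustrative assumptions, not from the specification.

```python
from datetime import datetime

def spoiler_windows(air_times, watch_times):
    """Each recorded event's spoiler window opens at its air time and
    closes when the user begins watching the recording (None = still open)."""
    return {event: (air, watch_times.get(event))
            for event, air in air_times.items()}

def alerts_active(windows, now):
    """Return the events whose spoiler windows are open at time `now`."""
    return [event for event, (start, end) in windows.items()
            if start <= now and (end is None or now < end)]

# Hypothetical air and watch times for the events 161a-161c.
air_times = {"161a": datetime(2024, 1, 6, 20, 0),
             "161b": datetime(2024, 1, 7, 21, 0),
             "161c": datetime(2024, 1, 8, 22, 0)}
watch_times = {"161a": datetime(2024, 1, 9, 19, 0)}  # only 161a watched so far

now = datetime(2024, 1, 10, 12, 0)
print(alerts_active(spoiler_windows(air_times, watch_times), now))  # ['161b', '161c']
```

This captures both graphs: when all watch times are set at once, all windows close together (watch time 171a); when each event is watched separately, each window (170b, 170c) closes on its own.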
The content presenting device 220 may have a basic presentation area, such as a display area 221 on which a main program is being presented. The content presenting device 220 may further include a ticker display area, such as a portion 222, on which information that is supplementary or completely unrelated to the main program may be displayed in a cycling fashion, such as informative text that scrolls through the ticker display area or the portion 222. The content presenting device 220 may play sound from a speaker based on a received audio signal 223 associated with the main content.

[0056] The content presenting device 220 may be located in an area where various ones of the mobile computing devices 140 may come into and move out of proximity thereto. When a mobile computing device 140 moves within radio range of the content presenting device 220, the mobile computing device 140 may be discovered by the content presenting device 220 and/or the mobile computing device 140 may discover the content presenting device 220, such as through commands, requests, messages, and so on, associated with the networking framework 150. The networking framework commands, requests, and messages may be transmitted, received, and exchanged between the mobile computing device 140 and the content presenting device 220 to facilitate communication, control, and interaction between the computing devices. For example, the content presenting device 220 may receive an advertisement of presence of the mobile computing device 140 through a message associated with the networking framework 150.

[0057] The content presenting device 220 may further obtain information about the proximity of the mobile computing device 140, such as through calculating the range of the radio signals received from the mobile computing device 140, from location coordinates (e.g., GPS) associated with the mobile computing device 140, and so on.
Based on the proximity information, the content presenting device 220 may associate the mobile computing device 140 with one of several zones, such as a Zone 1 230, a Zone 2 231, or a Zone 3 233. The Zone 1 230 may represent the zone that is closest to the content presenting device 220, and therefore the zone in which users associated with the mobile computing devices 140 may be able to clearly see presentation content and hear presentation content audio of the content presenting device 220, even at a reduced volume. The Zone 2 231 may represent the zone that is farther away from the content presenting device 220 than the Zone 1 230. Users of the mobile computing devices 140 within the Zone 2 231 may still have a clear view of the content presenting device 220, but the ability of the audio to be detected by the users of the mobile computing devices 140 may be diminished. The Zone 3 233 may represent the zone that is farthest away but that may still require some degree of spoiler alert or content blocking action. Any users of mobile computing devices 140 within the Zone 3 233 may have a limited view of the presentation content and limited hearing of the presentation content audio.

[0058] In the various embodiments, the content presenting device 220 may determine the relative position, or at least the distance, of the mobile computing devices 140 and thus the relative positions of the respective users 115, and may conduct spoiler alert and content blocking for the respective users 115 based on the position of the mobile computing devices 140 within the zones. For example, the users 115 of the mobile computing devices 140 within the Zone 1 230 may require complete display and audio blocking when a spoiler alert event is detected and the rules allow for content blocking. For the users 115 of the mobile computing devices 140 in other zones, the content blocking associated with the Zone 1 230 will effectively block content for all zones.
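The zone assignment just described reduces to mapping an estimated distance onto ordered distance bands. A minimal Python sketch follows; the boundary distances and function name are illustrative placeholders, as the specification does not define specific distances.

```python
def classify_zone(distance_m, boundaries=(3.0, 8.0, 15.0)):
    """Map an estimated device distance (e.g., derived from radio signal
    range or GPS coordinates) to Zone 1, 2, or 3; return None when the
    device is beyond every zone and needs no blocking action.
    The boundary distances are illustrative placeholders."""
    for zone, limit in enumerate(boundaries, start=1):
        if distance_m <= limit:
            return zone
    return None

print(classify_zone(2.0))   # 1: can clearly see and hear the content
print(classify_zone(10.0))  # 3: limited view and hearing
print(classify_zone(40.0))  # None: out of range of all zones
```

A tuple of boundaries makes it simple to configure more or fewer zones, which the specification contemplates elsewhere.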
Therefore, the users 115 of the mobile computing devices 140 in the other zones, which do not have spoiler alerts for the blocked content associated with the mobile computing devices 140 in the Zone 1 230, will nevertheless be subjected to content blocking.

[0059] The users 115 of the mobile computing devices 140 in the Zone 2 231 may require a reduced degree of blocking. Thus, for spoiler alert events that are exclusive to the users 115 of the mobile computing devices 140 in the Zone 2 231, for example, presentation content and presentation content audio may be effectively blocked by a reduction in brightness or a similar display characteristic, a reduced volume, or other less-than-total blocking. For example, the display 221 of the content presenting device 220 or the portion 222 of the display 221 (or both) may be adjusted such that a brightness level, gray level, contrast level, or other parameter is raised or lowered such that only viewers within a certain distance of the content presenting device 220, such as the users of the mobile computing devices 140 in the Zone 1 230, may be able to see the content. Further, the content presenting device 220 may reduce the volume or frequency characteristics of the audio such that it cannot be clearly heard by viewers in the Zone 2 231 or the Zone 3 233, but can be heard in the Zone 1 230.

[0060] The users 115 of the mobile computing devices 140 in the Zone 3 233 may require a further reduced degree of blocking. Thus, for spoiler alert events that are exclusive to the users 115 of the mobile computing devices 140 in the Zone 3 233, presentation content and presentation content audio may be effectively blocked by a smaller degree of reduction in brightness or a similar display characteristic, a reduced volume, or other less-than-total blocking than the reduction required for the Zone 2 231.
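The graded blocking described above (total blocking for Zone 1, progressively milder reductions for Zones 2 and 3, with the nearest affected zone dominating) can be sketched as a small parameter table. The numeric brightness and volume factors below are illustrative assumptions; the specification describes the levels only qualitatively.

```python
# Illustrative per-zone blocking parameters (values are assumed, not from
# the source): Zone 1 requires total blocking; Zones 2 and 3 use
# progressively milder dimming and volume reduction.
ZONE_BLOCKING = {
    1: {"brightness": 0.0, "volume": 0.0},   # full video and audio blocking
    2: {"brightness": 0.3, "volume": 0.2},   # reduced brightness/volume
    3: {"brightness": 0.7, "volume": 0.5},   # mild reduction, mostly alerts
}

def effective_blocking(zones_with_spoilers):
    """Blocking for the nearest affected zone dominates: blocking strong
    enough for Zone 1 effectively blocks content for all farther zones."""
    if not zones_with_spoilers:
        return None  # no Spoiler Alert Events logged; present normally
    return ZONE_BLOCKING[min(zones_with_spoilers)]
```

For instance, `effective_blocking({2, 3})` applies the Zone 2 parameters, which also suffice for viewers in Zone 3.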
For example, the display 221 of the content presenting device 220 or the portion 222 of the display 221 (or both) may be adjusted such that a brightness level, gray level, contrast level, or other parameter is raised or lowered such that only viewers within a certain distance of the content presenting device 220, such as the users of the mobile computing devices 140 in the Zone 1 230 and the Zone 2 231, may be able to see the content. Further, the content presenting device 220 may reduce the volume or frequency characteristics of the audio such that it can be heard by viewers in the Zone 1 230 and the Zone 2 231 but cannot be clearly heard in the Zone 3 233.

[0061] In some embodiments, the mobile computing devices 140 may play a masking sound to obfuscate the content from a content presenting device 220. For example, the mobile computing device 140 may play an audio stream that is timed to coincide with the presentation of the spoiler content on the content presenting device 220. As another example, the mobile computing device 140 may generate a ring tone that causes a user 115 to engage with the mobile computing device 140, such as raising the device to the ear of the user 115, at which point the computing device may play an audio stream that overcomes the content of the content presenting device or warns the user about the spoiler content playing on the content presenting device 220.

[0062] In the various embodiments, the users 115a-115d of the computing devices 140a-140d may be in a public setting, such as walking outside or driving, as illustrated in FIG. 2B. When the users 115a-115d are walking or driving, a public content presenting device 220a, such as an electronic billboard or a tickertape marquee on the side of a building, may be encountered.
The users 115a-115d will have their own respective views 118 of the public content presenting device 220a and the content being displayed, depending on their location, intervening structures, intervening noise, etc. When walking or driving, the users 115a-115d may encounter a public content presenting device 220b, such as a mobile billboard attached to a vehicle such as a truck or other vehicle, and may have respective views 118 of the displayed content of the public content presenting device 220b. In some embodiments, spoiler content that may be displayed on the public content presenting devices 220a and 220b may be blocked and/or spoiler alerts may be generated by receiving Spoiler Alert Event information from some or all of the computing devices 140a-140d of the users 115a-115d.

[0063] In some situations, the vehicle may be a private passenger vehicle, such as a vehicle 240a with a content presenting device 220c that is visible to a pedestrian user 115b or a user 115e in an adjacent vehicle 240b through views 118. In some embodiments, spoiler content that may be displayed on the content presenting device 220c of the private passenger vehicle may be blocked and/or spoiler alerts may be generated by receiving Spoiler Alert Event information from at least the computing devices 140b and 140e of the users 115b and 115e. In some embodiments, additional position and motion information may be processed with regard to the positions of the vehicles 240a and 240b to determine whether spoiler blocking should be performed. For example, if private vehicles such as the vehicles 240a and 240b are approaching each other, and at least one of the vehicles is equipped with a content presenting device 220c capable of presenting spoiler content, additional measures, such as GPS location and movement direction and speed, may be used to predict when the vehicles 240a and 240b may be in proximity to each other as illustrated.
Spoiler Alert Events may be registered with the content presenting device 220c by the approaching vehicle 240b in the manner described herein so as to provide spoiler related actions to prevent presentation of the spoiler content on the content presenting device 220c. Alternatively or in addition, spoiler alerts may be provided to the user 115e with the computing device 140e in the approaching vehicle 240b. As another example, the vehicle 240a having the content presenting device 220c may be approaching the vehicle 240b (or other vehicles), or the vehicles 240a and 240b may be travelling adjacent to each other along the same route. As a further example, both vehicles 240a and 240b may include content presenting devices and perform different content blocking actions depending upon the time-shifting actions taken by the respective vehicle occupants.

[0064] In some embodiments, the public content presenting device 220a may be coupled to one or more of the remote servers 123, 152 through links 122, 151 to the Internet 129. The public content presenting device 220b may be coupled to one or more of the remote servers 123, 152 through a wireless connection 221b, such as a cellular connection to a cellular infrastructure component 230. Further, the computing devices 140a-140d may be coupled to one or more remote servers 123, 152 through respective wireless links 142a-142d to the cellular infrastructure component 230.

[0065] When the computing devices 140a-140d and the public content presenting devices 220a, 220b are executing the networking framework 150, the computing devices 140a-140d may be discovered by the public content presenting devices 220a, 220b through networking framework mechanisms. Alternatively or in addition, the computing devices 140a-140d may discover the public content presenting devices 220a, 220b. Upon discovery, the computing devices 140a-140d may transfer any Spoiler Alert Event information to the public content presenting devices 220a, 220b.
The public content presenting devices 220a, 220b may perform content blocking and/or provide spoiler alerts to the computing devices 140a-140d as described herein. The content blocking and spoiler alerts may be subject to the application of rules. In some embodiments, the public content presenting devices 220a and 220b may encounter a large number of the computing devices 140a-140d during operation due to being located in public spaces. Therefore, management of the spoiler alert and content blocking may be facilitated by external servers that are networking framework compatible.

[0066] In some embodiments, the networking framework 150 may provide an API or suitable messaging mechanism for the computing devices to perform discovery, Spoiler Alert Event information transfer, and other functions to enable content blocking and spoiler alert generation, as illustrated in FIG. 3. For example, when the computing devices 140a-140c enter communication range of the content presenting device 220, networking framework discovery messages 341a-341c may be sent to the content presenting device 220, facilitated by the networking framework 150. The networking framework discovery messages 341a-341c may advertise the presence of the computing devices 140a-140c to the content presenting device 220. The networking framework discovery messages 341a-341c may include information about the computing devices such as device type, device capabilities, device ID, and/or other information. The content presenting device 220 may also make at least a preliminary determination of the range of the computing devices 140a-140c based on the radio communication parameters associated with the messages (e.g., RSSI).
The networking framework discovery messages 341a-341c may further contain location information associated with respective ones of the computing devices 140a-140c.

[0067] When the content presenting device 220 receives the networking framework discovery messages 341a-341c, networking framework query messages 321a-321c may be sent from the content presenting device 220 to the mobile computing devices 140a-140c. In some embodiments, the networking framework query messages 321a-321c may specifically inquire whether the computing devices 140a-140c have any Spoiler Alert Event information. In the illustrated example, the networking framework query messages 321a-321c are sent as separate messages to the computing devices 140a-140c. In other examples, a networking framework broadcast message (not shown) may be sent to alert any devices to send spoiler related information to the content presenting device 220.

[0068] The computing devices 140a-140c may send networking framework response messages in response to the networking framework query messages 321a-321c. For example, the mobile computing device 140b may send a networking framework response message 343b indicating that no Spoiler Alert Event information is available for the device. The mobile computing device 140a may send a networking framework response message 343a indicating that no Spoiler Alert Event information is available for the device. The mobile computing device 140c may send a networking framework response message 343c that contains Spoiler Alert Event information. When the networking framework response message 343c containing the Spoiler Alert Event information is received by the content presenting device 220, a processor of the content presenting device 220 may determine whether rules, such as a voting rule, allow content to be blocked. Embodiment voting rules and other rules are described in greater detail herein below in connection with FIGs. 5A-5J.
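The query/response exchange of messages 321a-321c and 343a-343c amounts to polling each discovered device and logging only non-empty answers. The Python sketch below illustrates this flow with stand-in objects; the class, method, and topic names are hypothetical and do not correspond to any real networking framework API.

```python
class FakeDevice:
    """Stand-in for a discovered mobile computing device; the query topic
    and class names are hypothetical, not part of any real framework API."""
    def __init__(self, device_id, spoiler_events):
        self.device_id = device_id
        self._spoiler_events = spoiler_events

    def query(self, topic):
        # Mirrors the query messages 321a-321c / responses 343a-343c.
        if topic == "spoiler_alert_events":
            return self._spoiler_events
        return None

def collect_spoiler_info(devices):
    """Query each discovered device and log only non-empty responses."""
    log = {}
    for device in devices:
        events = device.query("spoiler_alert_events")
        if events:
            log[device.device_id] = events
    return log

# 140a and 140b report no spoiler info; only 140c has a logged event.
devices = [FakeDevice("140a", []), FakeDevice("140b", []),
           FakeDevice("140c", ["UCLA vs. Notre Dame"])]
print(collect_spoiler_info(devices))  # {'140c': ['UCLA vs. Notre Dame']}
```

The resulting log is the input to the rule checks (e.g., the voting rule) described next.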
In response to determining that the spoiler alert vote is passed (i.e., determination block 350 = "Yes"), the processor may implement spoiler controls in block 351. Spoiler controls may include controls based on the zone in which the mobile computing device 140c is located, or other considerations. In response to determining that the spoiler alert vote does not pass (i.e., determination block 350 = "No"), the processor may cause a networking framework message 345c, such as a spoiler alert message, to be sent from the content presenting device 220 to the mobile computing device 140c indicating that content associated with the Spoiler Alert Event will not be blocked.

[0069] An embodiment method 400 for device discovery and content blocking is illustrated in FIGs. 4A-4B. Referring to FIG. 4A, in block 402, a processor or processors, such as may be associated with a content presenting device (and one or more mobile computing devices), may perform networking framework discovery. As described herein, networking framework discovery may include the transmission and reception by the processor or processors of networking framework discovery messages. The framework discovery may inform devices of each other's presence and may allow devices to share information, such as information about each device's capabilities, identity, and other information.

[0070] In determination block 404, the processor of a content presenting device may determine whether any networking framework compatible devices are present. In response to determining that networking framework compatible devices are present (i.e., determination block 404 = "Yes"), the processor may determine whether any Spoiler Alert Events are present in determination block 406.
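The vote check in determination block 350 can take many forms; one plausible form, sketched below under assumed semantics (a simple majority of discovered devices; the threshold and function name are illustrative), decides between implementing spoiler controls and sending the "will not be blocked" message 345c.

```python
def spoiler_vote_passes(present_device_ids, blocking_request_ids, threshold=0.5):
    """One possible voting rule (an assumption, not the specification's only
    rule): block content only when more than `threshold` of the discovered
    devices have logged a Spoiler Alert Event for the content."""
    if not present_device_ids:
        return False
    share = len(blocking_request_ids) / len(present_device_ids)
    return share > threshold

# Three devices in range; only 140c requests blocking: the vote fails,
# so a spoiler alert message (like 345c) would be sent instead of blocking.
print(spoiler_vote_passes(["140a", "140b", "140c"], ["140c"]))  # False
print(spoiler_vote_passes(["140a", "140c"], ["140a", "140c"]))  # True
```

Devices logged as present without spoiler content still count in the denominator, consistent with the later discussion of logged devices participating in voting rules.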
The processor may determine the presence of Spoiler Alert Events as described herein by sending networking framework query messages, receiving networking framework responses from the discovered devices, and obtaining Spoiler Alert Event information from the devices.

[0071] In response to determining that no networking framework devices are present (i.e., determination block 404 = "No"), the processor may present content without blocking in block 422. In various embodiments, the processor may present the content subject to the application of rules, such as rules applied in connection with any one or more of blocks 428, 432, 436 of the method 400 (FIG. 4B), and blocks 505, 513, 517, 523, 527, 531, 537, 539, 542, 551, and/or block 553 of the methods described below with reference to FIGs. 5A-5J.

[0072] In response to determining that Spoiler Alert Events are present (i.e., determination block 406 = "Yes"), the processor may create a log entry associated with the device and the Spoiler Alert Event or events in block 408. In response to determining that no Spoiler Alert Events are present (i.e., determination block 406 = "No"), the processor may present content without blocking in block 422. As noted, the processor may present the content subject to the application of rules, such as rules applied in connection with blocks 428, 432, 436 of the method 400 (FIG. 4B), and blocks 505, 513, 517, 523, 527, 531, 537, 539, 542, 551, and/or block 553 of the methods described below with reference to FIGs. 5A-5J.

[0073] In block 410, in an optional zone-based implementation, the processor of the content presenting device may create or update a log of Spoiler Alert Event information for each device in each zone. Further, in block 412, in the optional zone-based implementation, the processor may check rules associated with spoiler alert and content blocking for each device in each zone.
Alternatively, when a zone-based implementation is not being used, the processor may check the rules for each device. In determination block 414, the processor may determine whether any rules apply to spoiler alert and content blocking. In response to determining that one or more rules apply (i.e., determination block 414 = "Yes"), the processor of the content presenting device may apply those rules for spoiler alert and content blocking in block 416. In the various embodiments, the rules may be applied as described below with reference to one or more of FIGs. 4B and 5A-5J. For example, if a voting rule applies, the processor may check the vote count for blocking particular content. When the votes total in favor of blocking, the processor may block content associated with the Spoiler Alert Event log information. Other examples of rules are described herein. The application of rules for content blocking and spoiler alert generation is further described with reference to FIG. 4B and FIGs. 5A-5J. In response to determining that no rules apply (i.e., determination block 414 = "No"), the processor may present the content without blocking in block 422.

[0074] In determination block 418, the processor may determine whether rules have been applied for all logged devices. The logged devices may include devices for which spoiler content has been logged. In some embodiments, a logged device may be a device that is logged as being present (e.g., in proximity to/discovered by the content presenting device) but may not include any listed spoiler content. Such devices may nevertheless be considered in determining the application of rules because such devices may participate in or become subject to the application of rules without logging spoiler content.
For example, such devices may be considered in content voting rules without having logged spoiler content.

[0075] In response to determining that rules have been applied for all the discovered devices, such as by determining that rules have been applied or accounted for in connection with all of the logged Spoiler Alert Event information (i.e., determination block 418 = "Yes"), the processor may determine whether any new devices have come within range in determination block 420. For example, the processor may determine whether the networking framework discovery process has discovered the presence of any new devices, such as devices detected within proximity to the content presenting device. In response to determining that the rules have not been applied for all the discovered devices (i.e., determination block 418 = "No"), the processor may return to block 412 to check for and apply additional rules.

[0076] In response to determining that new devices are present (i.e., determination block 420 = "Yes"), the processor may return to block 402 to perform networking framework discovery for the new device or devices. In some embodiments, the processor may learn of the presence of new devices from the networking framework discovery process and may perform additional networking framework processing in block 402, which may include discovery processing. In some embodiments, the processor may determine that new devices are present by receiving networking framework presence messages from the new devices and may return to block 402 to perform networking framework discovery, such as obtaining additional information about the new device or devices. In response to determining that no new devices are present (i.e., determination block 420 = "No"), the processor may present at least some of the content without blocking in block 422.
Alternatively or additionally, the processor may present at least some of the content subject to the application (or previous application) of rules, such as rules applied in connection with blocks 428, 432, 436 of the method 400 (FIG. 4B), and blocks 505, 513, 517, 523, 527, 531, 537, 539, 542, 551, and/or block 553 of the methods described with reference to FIGs. 5A-5J.

[0077] In the various embodiments, the degree of content blocking may be determined and performed according to various zones as described with reference to FIG. 2A. Thus, further in the embodiment method 400 as illustrated in FIG. 4B, the application of rules may be based on a zone implementation. As described above, different zones may be associated with different distances from the content presenting device. In such a zone configuration, different levels of content blocking may be performed by the processor according to the zone in which the mobile computing device that requested the spoiler blocking, such as by logging spoiler-related information, is located. In block 416, as described above, the processor may apply rules for spoiler alert and content blocking. In block 417, the processor may determine spoiler alert information, such as by receiving networking framework messages from devices containing Spoiler Alert Event information. In block 419, the processor may determine the zone for each mobile computing device (and user).

[0078] In block 424, the processor may apply rules for providing spoiler alerts and performing content blocking based on the determined zones of the devices. In determination block 426, the processor may determine whether any devices (and associated users) are in Zone 1, which may be the zone closest to the content presenting device, in which full content blocking measures may be appropriate.
In response to determining that there is at least one device having at least one logged Spoiler Alert Event in Zone 1 (i.e., determination block 426 = "Yes"), the processor may apply Zone 1 spoiler alert and content blocking parameters in block 428. For example, the processor may perform full video and audio blocking in block 428, subject to the possible application of additional rules. As another example, if the application of content blocking rules for a given zone prevents content blocking, the processor may send spoiler alerts to the user's mobile computing device to alert the user of the device that a spoiler is imminent in block 428. The processor may proceed to determination block 418 of FIG. 4A for further processing. In response to determining that no devices are present in Zone 1 (i.e., determination block 426 = "No"), the processor may determine whether any devices are present in Zone 2 in determination block 430.[0079] In response to determining that at least one device is present in Zone 2 (i.e., determination block 430 = "Yes"), the processor may apply Zone 2 spoiler alert and content blocking parameters in block 432. For example, the processor may perform reduced video and audio blocking in block 432, subject to the possible application of additional rules. For example, if application of content blocking rules prevents content blocking, the processor may send spoiler alerts to alert the user of the device that a spoiler is imminent in block 432. The processor may proceed to determination block 418 of FIG. 4A for further processing. In response to determining that no devices are present in Zone 2 (i.e., determination block 430 = "No"), the processor may determine whether any devices are present in Zone 3 in determination block 434.[0080] In response to determining that at least one device is present in Zone 3 (i.e., determination block 434 = "Yes"), the processor may apply Zone 3 spoiler alert and content blocking parameters in block 436.
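The zone-dependent blocking of blocks 424 through 436 can be sketched as a simple distance-to-zone mapping. This is a minimal illustration only: the boundary distances, the three-zone layout, and the blocking-level names below are assumptions for demonstration and are not values specified by this disclosure.

```python
# Hypothetical sketch of zone-based blocking (blocks 424-436).
# Zone boundaries and blocking levels are illustrative assumptions.

ZONE_BLOCKING = {
    1: "full",     # Zone 1: closest to the content presenting device
    2: "reduced",  # Zone 2: reduced video/audio blocking
    3: "alert",    # Zone 3: farthest; spoiler alerts only
}

def zone_for_distance(distance_m, boundaries=(3.0, 8.0, 15.0)):
    """Map a device's distance from the content presenting device to a zone.

    Returns None when the device lies beyond the outermost zone."""
    for zone, limit in enumerate(boundaries, start=1):
        if distance_m <= limit:
            return zone
    return None

def blocking_level(device_distances_m):
    """Apply the strictest blocking warranted by any device in range."""
    zones = [z for z in (zone_for_distance(d) for d in device_distances_m) if z]
    if not zones:
        return None  # no devices in any zone: present content unblocked
    return ZONE_BLOCKING[min(zones)]  # lowest zone number = closest = strictest
```

Under these assumed boundaries, a device 2 m away falls in Zone 1 and warrants full blocking, while a lone device beyond the outermost boundary warrants no blocking at all.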
For example, the processor may perform minimal video and audio blocking in block 436, subject to the possible application of additional rules. Alternatively, the processor may not perform blocking and may send only spoiler alerts. For example, if application of content blocking rules prevents content blocking, the processor may send spoiler alerts to alert the user of the device that a spoiler is imminent in block 436. Because users of mobile computing devices located in Zone 3 are far from the content presenting device, the processor may send only spoiler alerts to the computing devices in Zone 3. The spoiler alerts may alert the users of the computing devices to avoid moving into a closer zone such as Zone 1 or Zone 2. The alerts may be useful when content blocking rules would preclude content blocking. The processor may proceed to determination block 418 of FIG. 4A for further processing. In response to determining that no devices are present in Zone 3 (i.e., determination block 434 = "No"), the processor may proceed to determination block 418 of FIG. 4A for further processing. For ease of description and illustration, the various embodiments use examples involving three zones. However, more or fewer zones may also be possible in some embodiments, such as two, four, or more zones. The zones are described herein for ease of illustration as being based on progressive radial distances; however, zones may be configured differently, such as based on radial orientation, sector orientation, block orientation, etc.[0081] In the various embodiments, as described above, content blocking and the generation of spoiler alerts may be based on reception by a content presenting device of networking framework messages from mobile computing devices containing information about recorded programming of users of the mobile computing devices. Thus, in an embodiment method 401 , as illustrated in FIG. 
4C, a processor of a mobile computing device may generate Spoiler Alert Event information. The Spoiler Alert Event information may be communicated to and used by a content presenting device to determine the generation of appropriate spoiler alerts and/or content blocking. In block 441, the processor of the mobile computing device may select the content to be recorded, such as from a program guide or other index or catalog provided by a content provider or distributor. Alternatively or additionally, the selected content may already be pre-recorded content, such as a sports event, movie, or series episode that the user has not yet viewed, such as pre-recorded content offered by a video service provider. In block 443, the processor of the mobile computing device may generate a record command for one or more selected items of programming content, such as through interaction with a digital recording device. In block 445, the processor of the mobile computing device may generate or update a Spoiler Alert Event listing for the selected content. The processor of the mobile computing device may include in the Spoiler Alert Event listing information, such as the name, air time/date, and other information, sufficient to allow a content presenting device to identify potential spoiler content in broadcasted or presented content, such as when the Spoiler Alert Event listing is communicated to the content presenting device (e.g., during device discovery).[0082] In block 447, the processor of the mobile computing device may wait until a networking framework discovery operation, networking framework query, or other opportunity to transfer the Spoiler Alert Event information occurs. In determination block 449, the processor of the mobile computing device may determine whether a content presenting device has been encountered, such as through a networking framework discovery process.
In response to determining that a content presenting device has been discovered (i.e., determination block 449 = "Yes"), the processor of the mobile computing device may provide the Spoiler Alert Event information to a content presenting device in block 451. For example, the processor of the mobile computing device may provide information, such as the listing of Spoiler Alert Events and information that was generated in block 445. In response to determining that a content presenting device has not been encountered (i.e., determination block 449 = "No"), the processor of the mobile computing device may continue to wait for a networking framework discovery sequence in block 447.[0083] In block 453, the processor of the mobile computing device may wait for input indicating that the recorded content has been consumed. For example, the user of the mobile computing device may view the recorded content on the mobile computing device. When the user has completed viewing the recorded content on the mobile computing device, the user may manually indicate that the content has been viewed. Alternatively or additionally, the processor of the mobile computing device may automatically determine that the recorded content has been viewed.[0084] In determination block 455, the processor of the mobile computing device may determine whether the recorded content has been consumed (e.g., by watching the content on a television at a user residence or on a mobile device). The consumption of the content may be determined in a number of ways, including interactions between a user of the mobile computing device and a spoiler alert application that is running on the mobile computing device. For example, the processor of the mobile computing device may receive the input described above in block 453.
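The listing maintained by the mobile computing device across blocks 445, 455, 456, and 457 can be sketched as a small in-memory store. The class and field names here are hypothetical, chosen only to mirror the register/clear lifecycle described above.

```python
# Hypothetical sketch of the mobile device's Spoiler Alert Event listing.
# Class and field names are illustrative, not taken from the disclosure.

class SpoilerAlertListing:
    def __init__(self):
        self._events = {}  # content name -> event information

    def register(self, name, air_time, keywords=()):
        # Block 445: create or update a Spoiler Alert Event for recorded content.
        self._events[name] = {"air_time": air_time, "keywords": tuple(keywords)}

    def mark_consumed(self, name):
        # Blocks 455/456: clear the event once the recorded content is viewed.
        self._events.pop(name, None)

    def pending(self):
        # Block 457: events still awaiting consumption; this listing would be
        # communicated to a content presenting device during discovery.
        return dict(self._events)
```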
Alternatively or additionally, the processor of the mobile computing device may automatically determine that the recorded content has been consumed, such as by comparing a listing associated with the recorded content and a listing of content viewing history. In response to determining that the recorded content has been consumed (i.e., determination block 455 = "Yes"), the processor of the mobile computing device may clear the Spoiler Alert Event listing from the listing of spoiler information in block 456 and may further update the Spoiler Alert Event listing in block 451. In response to determining that the recorded content has not been consumed (i.e., determination block 455 = "No"), the processor of the mobile computing device may continue to wait for an indication that the recorded content has been consumed in block 453.[0085] In determination block 457, the processor of the mobile computing device may determine whether additional Spoiler Alert Events remain in the listing. In response to determining that additional Spoiler Alert Events remain in the listing (i.e., determination block 457 = "Yes"), the processor of the mobile computing device may continue to wait for an indication that the recorded event associated with the remaining Spoiler Alert Events has been consumed in block 453. In response to determining that no additional Spoiler Alert Events remain (i.e., determination block 457 = "No"), the processor of the mobile computing device may complete processing. Alternatively, the processor of the mobile computing device may register additional Spoiler Alert Events, such as ones that are entered by the user after discovery by the content presenting device. Such newly registered Spoiler Alert Events may be passed to the content presenting device using networking framework messages.[0086] In an embodiment method 403 illustrated in FIG.
4D, a processor of a content presenting device may receive Spoiler Alert Event information from mobile computing devices discovered using the networking framework. In block 461, the processor of the content presenting device may receive Spoiler Alert Events from various mobile computing devices within proximity to the content presenting device. For example, the Spoiler Alert Events may be received and logged with the content presenting device during or after networking framework discovery. In block 463, the processor of the content presenting device may compare the Spoiler Alert Event information received from one or more mobile computing devices in proximity to the content presenting device (such as a name of the spoiler content, a broadcast time of the spoiler content, a blocking release time for the spoiler content (e.g., a blocking duration), and one or more keywords associated with the spoiler content) with the content to be presented by the content presenting device. For example, in some embodiments, the processor of the content presenting device may parse data in the content to be presented (e.g., program guide information, program listing information, current program title, etc.) and data in the Spoiler Alert Event-related information and may compare the parsed data to determine whether any spoiler content is found. In some embodiments, the program guide information may contain further information about embedded spoilers. For example, a news program may provide spoiler content in the form of outcomes of sporting events, in which case spoiler information may be provided in a program guide or listing. In determination block 465, the processor of the content presenting device may determine whether spoiler content is found in the content to be presented.
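The comparison of blocks 463 and 465 can be illustrated with a simple token-overlap check between parsed program metadata and the logged event information. The tokenization and matching criteria below are simplifying assumptions, since the disclosure leaves the parsing method open.

```python
# Hypothetical sketch of matching logged Spoiler Alert Events against
# program metadata (blocks 463/465). The matching rules are assumptions.

def tokens(text):
    # Naive whitespace tokenization of parsed program-guide text.
    return set(text.lower().split())

def find_spoilers(program_metadata, events):
    """Return names of logged events whose title or keywords appear in the
    metadata of the content about to be presented."""
    meta = tokens(program_metadata)
    hits = []
    for event in events:
        title_hit = tokens(event["name"]) <= meta          # all title words present
        keyword_hit = any(k.lower() in meta for k in event.get("keywords", ()))
        if title_hit or keyword_hit:
            hits.append(event["name"])
    return hits
```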
In response to determining that spoiler content is found in the content to be presented (i.e., determination block 465 = "Yes"), such as based on the comparison in block 463, the processor of the content presenting device may block the spoiler content from being displayed or played by the content presenting device in block 467, subject to any active rules as disclosed in greater detail hereinafter. Further, if rules prevent the blocking of spoiler content, the processor of the content presenting device may send a spoiler alert to any mobile computing device or devices for which the spoiler content has been logged, indicating that spoiler content is being presented, or is about to be presented, on the content presenting device. Such an indication or alert may prompt a user of the mobile computing device to avoid coming into proximity to the content presenting device. As used herein, "spoiler content" refers to any content that may be associated with recorded or time-shifted content that a user has recorded, such as a sports score, a plot giveaway, a result of a contest, and so on. In response to determining that spoiler content is not found in the content to be presented (i.e., determination block 465 = "No"), the processor of the content presenting device may continue to parse the content to be presented, comparing it with the Spoiler Alert Event-related information in block 463.[0087] In block 469, the processor of the content presenting device may present the recorded content in accordance with some of the rules.
For example, if a certain number of the mobile computing devices have registered the same recorded content as a Spoiler Alert Event, the content presenting device, according to a rule, may poll, or request a vote from the audience regarding whether the recorded content should be displayed.[0088] In determination block 471, the processor of the content presenting device may determine whether the recorded content has been presented (e.g., in response to a vote). In response to determining that the content has been presented (i.e., determination block 471 = "Yes"), the processor of the content presenting device may clear all of the Spoiler Alert Events (e.g., from various mobile computing devices) associated with the presented recorded content from the current listing in block 473. In some embodiments, the content presenting device may track whether an entirely new audience is present that has logged the same content as a Spoiler Alert Event that was previously presented. In such an instance, the content presenting device may repeat the above described operations as new mobile devices enter and other mobile devices leave proximity to the content presenting device. In response to determining that the recorded content has not been presented (i.e., determination block 471 = "No"), the processor of the content presenting device may continue to compare the spoiler alert event information with information about the content to be presented in block 463.[0089] In determination block 475, the processor of the content presenting device may determine whether additional Spoiler Alert Events are listed or logged. In response to determining that additional Spoiler Alert Events are listed (i.e., determination block 475 = "Yes"), the processor of the content presenting device may continue to block content or provide spoiler alerts in accordance with rules in block 467.
In response to determining that no additional Spoiler Alert Events are listed (i.e., determination block 475 = "No"), the processor of the content presenting device may receive additional Spoiler Alert Events as new devices enter proximity in block 461. For example, the processor of the content presenting device may receive a networking framework discovery sequence, such as when a new mobile computing device enters into proximity of the content presenting device. At that time, any new Spoiler Alert Events may be registered. Alternatively or additionally, mobile devices may enter proximity to the content presenting device at any time, at which time Spoiler Alert Events may be registered, such as during discovery. Further, devices already registered with the networking framework and the content presenting device (e.g., after discovery has occurred) may update their spoiler information by sending any new Spoiler Alert Events to the content presenting device using networking framework messages.[0090] In the various embodiments, content blocking and generating spoiler alerts may be based on various rules in the embodiment methods described herein. Examples of methods implementing such rules are illustrated in FIG. 5A through FIG. 5J. In block 501 of the method 5001 illustrated in FIG. 5A, a processor of a content presenting device may proceed from applying rules in block 416 of the method 400 described above with reference to FIG. 4A and apply blocking for any discovered device or devices that have registered Spoiler Alert Events. In determination block 503, the processor of a content presenting device may determine whether there are any registered Spoiler Alert Events, such as whether any devices have provided Spoiler Alert Event information. In response to determining that at least one device has provided a Spoiler Alert Event (i.e., determination block 503 = "Yes"), the processor of a content presenting device may apply content blocking in block 505.
For example, subject to the application of any other rules, even if only one discovered device has registered Spoiler Alert Events, the content presenting device may apply rules for content blocking and spoiler alert generation for the single device. The processor of a content presenting device may apply blocking for all Spoiler Alert Events that are received from discovered devices. In response to determining that no devices have provided a Spoiler Alert Event (i.e., determination block 503 = "No"), the processor of a content presenting device may return to determination block 418 of the method 400 described with reference to FIG. 4A in block 507.[0091] In block 501 of the method 5003 illustrated in FIG. 5B, a processor of a content presenting device may proceed from applying rules in block 416 of the method 400 described above with reference to FIG. 4A. In block 509, the processor of a content presenting device may determine a count among the various discovered devices for Spoiler Alert Events for a particular program. For example, if ten devices are discovered, the processor of a content presenting device may determine a count among the ten devices to determine how many of the devices have provided a Spoiler Alert Event for the same content, such as a sporting event (e.g., UCLA vs. Notre Dame). In determination block 511, the processor of a content presenting device may determine whether a majority of the computing devices have provided the Spoiler Alert Event for the given sporting event (or other program). In response to determining that a majority of the computing devices have provided a Spoiler Alert Event for the given sporting event or program (i.e., determination block 511 = "Yes"), the processor of a content presenting device may implement content blocking based on the majority in block 513.
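A minimal sketch of the majority rule of blocks 509 through 513, assuming each discovered device reports the set of program names it has logged as Spoiler Alert Events:

```python
# Hypothetical sketch of the majority-vote blocking rule (FIG. 5B).
from collections import Counter

def programs_to_block(device_events, total_devices):
    """Return programs logged as Spoiler Alert Events by a majority of the
    discovered devices. `device_events` maps device id -> set of program names."""
    counts = Counter()
    for events in device_events.values():
        counts.update(set(events))  # count each device at most once per program
    majority = total_devices // 2 + 1
    return {program for program, n in counts.items() if n >= majority}
```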
In response to determining that a majority of the computing devices have not provided a Spoiler Alert Event for the same sporting event or program (i.e., determination block 511 = "No"), or after performing the operations of block 513, the processor of a content presenting device may return to determination block 418 of the method 400 described with reference to FIG. 4A in block 507, where, subject to the application of other rules, the content may be presented unblocked in some embodiments.[0092] In block 501 of the method 5005 illustrated in FIG. 5C, a processor of a content presenting device may proceed from applying rules in block 416 of the method 400 described above with reference to FIG. 4A. In block 515, the processor of a content presenting device may apply weights, such as weights based on priority (or another factor), for each device. Priority weights may be determined based on a variety of factors, such as company rank for mobile computing devices in a workplace environment. In such an example, employees or officers of higher rank may have a higher weight associated with their Spoiler Alert Event information. In some embodiments, information about the rank of users may be transmitted with the Spoiler Alert Event information. Other priority weights may be possible, such as assigning a greater weight to the mobile computing device, and thus to the Spoiler Alert Events, of a user who has been in proximity to the content presenting device for the longest amount of time. In block 517, the processor of a content presenting device may apply content blocking and/or spoiler alerts based on the determined weights. In block 507, the processor of a content presenting device may return to determination block 418 of the method 400 described with reference to FIG. 4A.[0093] In block 501 of the method 5007 illustrated in FIG. 5D, a processor of a content presenting device may proceed from applying rules in block 416 of the method 400 described above with reference to FIG.
4A. In block 519, the processor of a content presenting device may determine a date from which a Spoiler Alert Event has been active. For example, a user's content recording device may have recorded an event a month prior and the user may still not have consumed the content. The information regarding the recording date of the content may be conveyed to the content presenting device by the user's mobile computing device. In determination block 521, the processor of a content presenting device may determine whether the date for the Spoiler Alert Event exceeds a threshold date. For example, the processor of a content presenting device may set the threshold at two weeks, meaning that any Spoiler Alert Events older than two weeks will not be honored. In response to determining that the date of the Spoiler Alert Event is within the threshold date (i.e., determination block 521 = "Yes"), the processor of a content presenting device may apply content blocking in block 523, subject to the application of other rules such as voting rules, priority rules, and so on. In some examples, when a mobile computing device is associated with a high priority, the processor of a content presenting device may apply a longer threshold date. In other embodiments (not shown), the Spoiler Alert Event information may include a release time (i.e., a time when the spoiler alert should be cancelled). In response to determining that the date of the Spoiler Alert Event is not within the threshold date (i.e., determination block 521 = "No"), or after applying content blocking in block 523, the processor of a content presenting device may return to determination block 418 of the method 400 described with reference to FIG. 4A in block 507.[0094] In block 501 of the method 5009 illustrated in FIG. 5E, a processor of a content presenting device may proceed from applying rules in block 416 of the method 400 described above with reference to FIG. 4A.
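The age-threshold rule of blocks 519 through 523 can be sketched as a date comparison. The two-week default follows the example above; the longer four-week threshold for high-priority devices is purely an illustrative assumption.

```python
# Hypothetical sketch of the Spoiler Alert Event age threshold (FIG. 5D).
from datetime import date, timedelta

def event_honored(recorded_on, today, high_priority=False):
    """Return True when the event is recent enough to honor.

    The 2-week default and the 4-week high-priority extension are assumed
    values for illustration only."""
    threshold = timedelta(weeks=4 if high_priority else 2)
    return today - recorded_on <= threshold
```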
In block 525, the processor of a content presenting device may determine the type of content being presented by the content presenting device. For example, the processor may determine that the content presenting device is presenting a particular content type, such as news, advertising, program content (e.g., television series, movie, etc.), sporting event content, and so on. In block 527, the processor of a content presenting device may refrain from applying content blocking based on the determined content type. For example, the processor of a content presenting device may not apply content blocking based on the content presenting device presenting news content. Alternatively, the processor of the content presenting device may provide a spoiler alert message in lieu of content blocking based on the type of content being presented. In block 507, the processor of a content presenting device may return to determination block 418 of the method 400 described with reference to FIG. 4A.[0095] In block 501 of the method 5011 illustrated in FIG. 5F, a processor of a content presenting device may proceed from applying rules in block 416 of the method 400 described above with reference to FIG. 4A. In block 529, the processor of a content presenting device may establish a maximum content blocking time and advertise this blocking time to viewers. For example, the processor of a content presenting device may display the blocking time on the content presenting device or may send an alert message to the computing devices that indicates what the maximum blocking time is. Alternatively or additionally, the processor of a content presenting device may keep a running display of the remaining blocking time or send alert messages indicating the remaining blocking time. In block 531, the processor of a content presenting device may block content based on the maximum blocking time.
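The maximum-blocking-time rule of blocks 529 and 531 can be sketched as a countdown window. Representing times as plain seconds, rather than wall-clock timestamps, is a simplification for illustration.

```python
# Hypothetical sketch of the maximum content blocking time (FIG. 5F).

def remaining_block_time(started_at_s, now_s, max_block_s):
    """Seconds of blocking left to advertise to viewers (0 once expired)."""
    return max(0, max_block_s - (now_s - started_at_s))

def is_blocked(started_at_s, now_s, max_block_s):
    # Blocking applies from the start of the period until the maximum expires.
    return remaining_block_time(started_at_s, now_s, max_block_s) > 0
```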
For example, the processor of a content presenting device may apply blocking at the beginning of the blocking period and may remove blocking after the maximum blocking time expires. In block 507, the processor of a content presenting device may return to determination block 418 of the method 400 described with reference to FIG. 4A.[0096] In block 501 of the method 5013 illustrated in FIG. 5G, a processor of a content presenting device may proceed from applying rules in block 416 of the method 400 described above with reference to FIG. 4A. In block 533, the processor of a content presenting device may determine Spoiler Alert Events among the various discovered devices. In determination block 535, the processor of a content presenting device may determine whether a spoiler alert service subscription fee has been paid. For example, the processor may determine through a transaction with the device that the user has paid a subscription for the content blocking and spoiler alert service. As another example, the processor of the content presenting device may determine through messages exchanged with the device that the user of the device has made a designated payment through other means. In some embodiments, the processor of the content presenting device may consult a server to determine whether a subscription payment has been made. In response to determining that a subscription fee has been paid (i.e., determination block 535 = "Yes"), the processor of a content presenting device may apply content blocking in block 537. In response to determining that a subscription fee has not been paid (i.e., determination block 535 = "No"), or when content blocking has been applied in block 537, the processor of a content presenting device may return to determination block 418 of the method 400 described with reference to FIG. 4A in block 507.[0097] In block 501 of the method 5015 illustrated in FIG.
5H, a processor of a content presenting device may proceed from applying rules in block 416 of the method 400 described above with reference to FIG. 4A. In block 539, the processor of a content presenting device may select any one or a combination of rules illustrated in the methods described herein with reference to FIG. 5A through FIG. 5G, FIG. 5I, and FIG. 5J to be applied in the embodiment method 400 as described with reference to FIG. 4A and FIG. 4B. In block 507, the processor of a content presenting device may return to determination block 418 of the method 400 described above with reference to FIG. 4A.[0098] In block 501 of the method 5017 illustrated in FIG. 5I, a processor of a content presenting device may proceed from applying rules in block 416 of FIG. 4A. In determination block 541, the processor of a content presenting device may determine whether any of the rules or other conditions results in content blocking being prevented. Alternatively or additionally, in some embodiments, the processor of a content presenting device may apply a rule that enables the presentation of the spoiler content despite the application of other rules that otherwise restrict the presentation of the spoiler content. In response to determining that rules or conditions prevent content from being blocked (i.e., determination block 541 = "Yes"), the processor of a content presenting device may provide a spoiler alert message in block 542. For example, when the presentation of the spoiler content is enabled despite being otherwise restricted by one or more other rules, a spoiler alert indication may be provided that indicates to the given mobile computing device (or devices) that the presentation of the spoiler content will be enabled. Such an indication may be useful to allow devices that had previously registered Spoiler Alert Events to leave the area of the content presentation device, or take other action to avoid viewing or hearing the spoiler content.
A spoiler alert message may be sent by the processor of a content presenting device to the discovered devices to provide information that the content presenting device will not be blocking content, to advise the user of the device to avoid the area or to avoid watching or listening to the programming being presented by the content presenting device, or to provide other information. For example, the processor of a content presenting device may indicate when blocking will be resumed. The processor of a content presenting device may return to determination block 418 of the method 400 described with reference to FIG. 4A in block 507.[0099] In some embodiments, users of mobile computing devices may be in a situation in which recorded content that is subject to the Spoiler Alert Event registration may be unlocked, such as through operation of a processor of a content presenting device. The processor may be operating in connection with the networking framework and may accept a mobile payment provided by the mobile computing devices. In an example, when a sufficient number of people in the vicinity of a public content presenting device, such as people sitting in cafes, bars, and so on, are interested in viewing content that they had previously recorded, content which is the subject of the Spoiler Alert Events may be presented. For example, a large marquee or display, or mobile televisions that may drive by or park, may present the recorded content that is the subject of the Spoiler Alert Events provided that a sufficient number of users agree to pay. In some examples, recorded content may displace currently displayed content provided enough mobile computing devices are present that have previously registered Spoiler Alert Events for the same content. Such displacement of displayed content may be based on feedback from mobile computing device users.
For example, a sufficient number of users may not be satisfied with the currently displayed content of a content presenting device, which may be determined based on voting or other user inputs. A sufficient number of those same users may also have registered Spoiler Alert Events for the same content. In such a case, the content presenting device may terminate the current content and display the recorded content associated with the Spoiler Alert Events.[0100] In block 501 of the method 5019 illustrated in FIG. 5J, a processor of a content presenting device may proceed from applying rules in block 416 of the method 400 described above with reference to FIG. 4A. In block 543, the processor of a content presenting device may obtain a count of the number of Spoiler Alert Events that have been registered by different mobile computing devices for the same content (e.g., a sports event). In determination block 545, the processor may determine whether the number of registered Spoiler Alert Events for the same content is sufficient to meet or exceed a criterion, such as a majority of all the discovered mobile computing devices, or a threshold number of all the discovered mobile computing devices. In response to determining that an insufficient number of Spoiler Alert Events are registered for the same content (i.e., determination block 545 = "No"), the processor of a content presenting device may return to determination block 418 of FIG. 4A in block 507 for further processing.[0101] In response to determining that a sufficient number of Spoiler Alert Events are registered for the same content (i.e., determination block 545 = "Yes"), the processor of the content presenting device may query the mobile computing devices that have registered Spoiler Alert Events for the same content to determine the number of such devices that are interested in an offer to view the recorded content in block 547.
In some embodiments, the offer may be extended to devices that have not registered Spoiler Alert Events for the recorded content. The offer may be an offer to present the recorded content for a fee. In some embodiments, such as zone-based embodiments, the processor of the content presenting device may base the fee on a distance zone from the content presenting device as previously described. The fee for presenting the recorded content may be based on the zone since the separation distance may affect the user's ability to see and hear the content. For example, mobile computing devices in Zone 1 may pay a higher fee because they are close to the content presenting device, and the mobile computing devices in Zones 2, 3 and beyond may pay a progressively lesser fee based on their distance from the content presenting device.

[0102] In determination block 549, the processor of a content presenting device may determine whether a sufficient number of the mobile computing devices have accepted the offer to present the recorded content, which may include payment of the fee or agreement to pay the fee (e.g., accept an offer to be billed for the presentation of the recorded content). For example, the processor of the content presenting device may receive transmitted messages from ones of the mobile computing devices that are registered with the networking framework. The messages from the mobile computing devices may be messages accepting the offer to present the recorded content. The messages may also contain confirmation of payment. In some embodiments, the users of the mobile computing devices may be billed or invoiced for their acceptance of the offer to receive the presented content. As discussed above, some users who have not registered Spoiler Alert Events may nevertheless receive and accept the offer to view the presented content.
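The zone-based fee structure described above (Zone 1 pays the most; farther zones pay progressively less) could be sketched as follows. The base fee and per-zone decay factor are purely illustrative assumptions, not values from the description.

```python
def zone_fee(zone, base_fee=10.0, decay=0.5):
    """Fee for presenting recorded content: highest in Zone 1 (closest to the
    content presenting device), progressively lower for more distant zones.
    base_fee and decay are illustrative assumptions."""
    if zone < 1:
        raise ValueError("zones are numbered from 1 outward")
    return base_fee * decay ** (zone - 1)

# Zone 1 pays the full fee; Zones 2 and 3 pay progressively less.
print([zone_fee(z) for z in (1, 2, 3)])  # [10.0, 5.0, 2.5]
```

Any monotonically decreasing schedule over zone number would satisfy the "progressively lesser fee" behavior described.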
The processor may determine the sufficiency of accepters of the offer based on a metric, such as that a majority or super majority of the mobile computing devices that had registered a Spoiler Alert Event for the given content accepted the offer to view the recorded content. In other embodiments, the processor may base the sufficiency of the accepters on an overall count of devices accepting the offer, regardless of whether or not they have registered Spoiler Alert Events for the given content. In response to determining that an insufficient number of the mobile computing devices have accepted the offer (e.g., transmitted a message or executed an electronic payment transaction) to present the recorded content (i.e., determination block 549 = "No"), the processor of the content presenting device may return to determination block 418 of the method 400 (FIG. 4A) in block 507.

[0103] In response to determining that a sufficient number of the mobile computing devices have accepted the offer (e.g., transmitted a message and/or executed an electronic payment transaction) to present the recorded content (i.e., determination block 549 = "Yes"), the processor of a content presenting device may present the recorded content in block 551. In block 553, the processor of the content presenting device may optionally have previously provided the offer and may present the content based on the location zones of the mobile computing devices. For example, the processor of the content presenting device may charge the mobile computing devices that are closest to the content presenting device a relatively higher price than those devices that are farther away from the content presenting device, such as in a remotely located zone.
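The two acceptance metrics described for determination block 549 (a supermajority of the devices that registered Spoiler Alert Events, or an overall count of accepters regardless of registration) might be sketched like this. The function name, the 2/3 supermajority, and the device identifiers are assumptions for illustration.

```python
def offer_sufficiently_accepted(accepting_devices, registered_devices,
                                overall_minimum=None, supermajority=2 / 3):
    """Sketch of determination block 549. If overall_minimum is given, use
    the overall-count metric; otherwise require a supermajority of the
    devices that registered Spoiler Alert Events for the content."""
    if overall_minimum is not None:
        return len(accepting_devices) >= overall_minimum
    if not registered_devices:
        return False
    accepted = set(accepting_devices) & set(registered_devices)
    return len(accepted) / len(registered_devices) >= supermajority

# Two of three registered devices accepted: meets the 2/3 supermajority.
print(offer_sufficiently_accepted(["a", "b"], ["a", "b", "c"]))  # True
```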
In the event that the mobile computing devices in the outer zones do not accept the offer to view the recorded content, the processor may nevertheless present the recorded content, but may restrict the volume and other presentation parameters to limit the ability of users of the mobile computing devices in these zones to see or hear the content.

[0104] The various aspects may be implemented in any of a variety of mobile computing devices (e.g., smartphones, tablets, etc.), an example of which is illustrated in FIG. 6. The mobile computing device 600 may include a processor 602 coupled to the various systems of the mobile computing device 600 for communication with and control thereof. For example, the processor 602 may be coupled to a touch screen controller 604, radio communication elements, speakers and microphones, and an internal memory 606. The processor 602 may be one or more multi-core integrated circuits designated for general or specific processing tasks. The internal memory 606 may be volatile or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. In another embodiment (not shown), the mobile computing device 600 may also be coupled to an external memory, such as an external hard drive.

[0105] The touch screen controller 604 and the processor 602 may also be coupled to a touch screen panel 612, such as a resistive-sensing touch screen, capacitive-sensing touch screen, infrared sensing touch screen, etc. Additionally, the display of the mobile computing device 600 need not have touch screen capability. The mobile computing device 600 may have one or more radio signal transceivers 608 (e.g., Peanut, Bluetooth, Bluetooth LE, Zigbee, Wi-Fi, RF radio, etc.) and antennae 610, for sending and receiving communications, coupled to each other and/or to the processor 602.
The transceivers 608 and antennae 610 may be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The mobile computing device 600 may include a cellular network wireless modem chip 616 that enables communication via a cellular network and is coupled to the processor.

[0106] The mobile computing device 600 may include a peripheral device connection interface 618 coupled to the processor 602. The peripheral device connection interface 618 may be singularly configured to accept one type of connection, or may be configured to accept various types of physical and communication connections, common or proprietary, such as USB, FireWire, Thunderbolt, or PCIe. The peripheral device connection interface 618 may also be coupled to a similarly configured peripheral device connection port (not shown).

[0107] In some embodiments, the mobile computing device 600 may include microphones 615. For example, the mobile computing device may have a conventional microphone 615a for receiving voice or other audio frequency energy from a user during a call. The mobile computing device 600 may further be configured with additional microphones 615b and 615c, which may be configured to receive audio including ultrasound signals. Alternatively, all microphones 615a, 615b, and 615c may be configured to receive ultrasound signals. The microphones 615 may be piezo-electric transducers, or other conventional microphone elements. Because more than one microphone 615 may be used, relative location information may be received in connection with a received ultrasound signal through various triangulation methods. At least two microphones 615 configured to receive ultrasound signals may be used to generate position information for an emitter of ultrasound energy.

[0108] The mobile computing device 600 may also include speakers 614 for providing audio outputs.
The mobile computing device 600 may also include a housing 620, constructed of a plastic, metal, or a combination of materials, for containing all or some of the components discussed herein. The mobile computing device 600 may include a power source 622 coupled to the processor 602, such as a disposable or rechargeable battery. The rechargeable battery may also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile computing device 600. The mobile computing device 600 may also include a physical button 624 for receiving user inputs. The mobile computing device 600 may also include a power button 626 for turning the mobile computing device 600 on and off.

[0109] In some embodiments, the mobile computing device 600 may further include an accelerometer 628, which senses movement, vibration, and other aspects of the device through the ability to detect multi-directional values of and changes in acceleration. In the various embodiments, the accelerometer 628 may be used to determine the x, y, and z positions of the mobile computing device 600. Using the information from the accelerometer, a pointing direction of the mobile computing device 600 may be detected.

[0110] The various embodiments may be implemented in any of a variety of content presenting devices, an example of which, in the form of a flat screen television, is illustrated in FIG. 7. For example, a flat screen television 700 may include a processor 701 coupled to internal memory 702. The internal memory 702 may be volatile or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. The processor 701 may also be coupled to a touch screen display 710, such as a resistive-sensing touch screen, capacitive-sensing touch screen, infrared sensing touch screen, etc.
The flat screen television 700 may have one or more radio signal transceivers 704 (e.g., Peanut, Bluetooth, Zigbee, Wi-Fi, RF radio) and antennas 708 for sending and receiving wireless signals as described herein. The transceivers 704 and antennas 708 may be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The flat screen television 700 may include a cellular network wireless modem chip 720 that enables communication via a cellular network. The flat screen television 700 may also include a physical button 706 for receiving user inputs. The flat screen television 700 may also include various sensors coupled to the processor 701, such as a camera 722, and a microphone or microphones 723.

[0111] For example, the flat screen television 700 may have a conventional microphone 723 for receiving voice commands or measuring ambient sound levels. The microphone 723 may be a piezo-electric transducer, or other conventional microphone elements.

[0112] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the steps in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular.

[0113] The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both.
To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

[0114] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function.

[0115] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof.
If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable storage media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable storage medium and/or computer-readable storage medium, which may be incorporated into a computer program product.

[0116] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention.
Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
Standard cell circuits employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop are disclosed. In one aspect, a standard cell circuit is provided that employs active devices that include corresponding gates disposed with a gate pitch. First and second voltage rails having a line width are disposed in a first metal layer. Employing the first and second voltage rails having substantially a same line width reduces the height of the standard cell circuit as compared to conventional standard cell circuits. Metal lines are disposed in a second metal layer with a metal pitch less than the gate pitch such that the number of metal lines exceeds the number of gates. Electrically coupling the first and second voltage rails to the metal shunts increases the conductive area of each voltage rail, which reduces a voltage drop across each voltage rail.
What is claimed is:

1. A standard cell circuit, comprising:
a plurality of active devices comprising a plurality of corresponding gates disposed with a gate pitch;
a first voltage rail having a line width disposed in a first metal layer and corresponding to a first one-half track, wherein the first voltage rail is configured to receive a first voltage;
a second voltage rail having the line width disposed in the first metal layer and corresponding to a second one-half track, wherein the second voltage rail is configured to receive a second voltage;
a plurality of metal lines disposed in a second metal layer with a metal pitch less than the gate pitch, wherein one or more metal lines of the plurality of metal lines is electrically coupled to one or more gates of the plurality of gates;
a first metal shunt disposed in a third metal layer and electrically coupled to the first voltage rail and one or more metal lines of the plurality of metal lines not electrically coupled to the one or more gates; and
a second metal shunt disposed in the third metal layer and electrically coupled to the second voltage rail and one or more metal lines of the plurality of metal lines not electrically coupled to the one or more gates.

2. The standard cell circuit of claim 1, further comprising:
one or more first vias disposed between the first metal layer and the second metal layer, wherein each of the one or more first vias electrically couples the first voltage rail to one or more corresponding metal lines; and
one or more second vias disposed between the first metal layer and the second metal layer, wherein each of the one or more second vias electrically couples the second voltage rail to one or more corresponding metal lines.

3. The standard cell circuit of claim 2, further comprising:
one or more first vias disposed between the second metal layer and the third metal layer, wherein each of the one or more first vias electrically couples one or more corresponding metal lines to the first metal shunt; and
one or more second vias disposed between the second metal layer and the third metal layer, wherein each of the one or more second vias electrically couples one or more corresponding metal lines to the second metal shunt.

4. The standard cell circuit of claim 1, wherein the metal pitch is approximately equal to two-thirds (2/3) of the gate pitch.

5. The standard cell circuit of claim 4, wherein:
the metal pitch is approximately equal to twenty-eight (28) nanometers (nm); and
the gate pitch is approximately equal to forty-two (42) nm.

6. The standard cell circuit of claim 1, wherein the metal pitch is between approximately one-half (1/2) and three-fourths (3/4) of the gate pitch.

7. The standard cell circuit of claim 6, wherein:
the metal pitch is between approximately twenty (20) nm and thirty (30) nm; and
the gate pitch is between approximately forty (40) nm and forty-two (42) nm.

8. The standard cell circuit of claim 1, further comprising a plurality of routing lines disposed in the first metal layer between the first voltage rail and the second voltage rail, wherein:
each routing line has substantially a same line width as the first voltage rail and the second voltage rail; and
each routing line corresponds to a routing track of a plurality of routing tracks.

9. The standard cell circuit of claim 8, wherein the plurality of routing tracks comprises four (4) tracks.

10. The standard cell circuit of claim 1, wherein:
the second metal layer is disposed between the first metal layer and the third metal layer; and
the third metal layer is disposed above the second metal layer.

11. The standard cell circuit of claim 10, wherein the first metal layer comprises a metal zero (M0) metal layer.

12. The standard cell circuit of claim 11, wherein the second metal layer comprises a metal one (M1) metal layer.

13. The standard cell circuit of claim 12, wherein the third metal layer comprises a metal two (M2) metal layer.

14. The standard cell circuit of claim 1, wherein the line width is approximately equal to a minimum line width.

15. The standard cell circuit of claim 1, further comprising a technology node size equal to approximately ten (10) nanometers (nm).

16. The standard cell circuit of claim 1 integrated into an integrated circuit (IC).

17. The standard cell circuit of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server; a computer; a portable computer; a mobile computing device; a wearable computing device; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; an automobile; a vehicle component; avionics systems; a drone; and a multicopter.

18. A standard cell circuit, comprising:
a means for performing a logic function comprising a means for receiving a gate voltage disposed with a gate pitch;
a means for providing a first voltage disposed in a first metal layer having a line width and corresponding to a first one-half track;
a means for providing a second voltage disposed in the first metal layer having the line width and corresponding to a second one-half track;
a plurality of means for electrically coupling disposed in a second metal layer with a metal pitch less than the gate pitch, wherein one or more means for electrically coupling is electrically coupled to the means for receiving the gate voltage;
a means for increasing a first resistance disposed in a third metal layer electrically coupled to the means for providing the first voltage and one or more means for electrically coupling not electrically coupled to the means for receiving the gate voltage; and
a means for increasing a second resistance disposed in the third metal layer electrically coupled to the means for providing the second voltage and one or more means for electrically coupling not electrically coupled to the means for receiving the gate voltage.

19. The standard cell circuit of claim 18, further comprising:
a means for interconnecting the means for providing the first voltage to the one or more means for electrically coupling; and
a means for interconnecting the means for providing the second voltage to the one or more means for electrically coupling.

20. The standard cell circuit of claim 19, further comprising:
a means for interconnecting the one or more means for electrically coupling to the means for increasing the first resistance; and
a means for interconnecting the one or more means for electrically coupling to the means for increasing the second resistance.

21. The standard cell circuit of claim 18, wherein the metal pitch is approximately equal to two-thirds of the gate pitch.

22. The standard cell circuit of claim 18, wherein:
the second metal layer is disposed between the first metal layer and the third metal layer; and
the third metal layer is disposed above the second metal layer.

23. A method of manufacturing a standard cell circuit employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop, comprising:
disposing a plurality of gates with a gate pitch, wherein each gate of the plurality of gates corresponds to an active device of a plurality of active devices;
disposing a first voltage rail in a first metal layer and corresponding to a first one-half track, wherein the first voltage rail has a line width and is configured to receive a first voltage;
disposing a second voltage rail in the first metal layer and corresponding to a second one-half track, wherein the second voltage rail has the line width and is configured to receive a second voltage;
disposing a plurality of metal lines in a second metal layer and having a metal pitch less than the gate pitch, wherein one or more metal lines of the plurality of metal lines is electrically coupled to one or more gates of the plurality of gates;
disposing a first metal shunt in a third metal layer, wherein the first metal shunt is electrically coupled to the first voltage rail and one or more metal lines of the plurality of metal lines not electrically coupled to the one or more gates; and
disposing a second metal shunt in the third metal layer, wherein the second metal shunt is electrically coupled to the second voltage rail and one or more metal lines of the plurality of metal lines not electrically coupled to the one or more gates.

24. The method of claim 23, further comprising:
disposing one or more first vias between the first metal layer and the second metal layer, wherein each of the one or more first vias electrically couples the first voltage rail to one or more corresponding metal lines; and
disposing one or more second vias between the first metal layer and the second metal layer, wherein each of the one or more second vias electrically couples the second voltage rail to one or more corresponding metal lines.

25. The method of claim 24, further comprising:
disposing one or more first vias between the second metal layer and the third metal layer, wherein each of the one or more first vias electrically couples one or more corresponding metal lines to the first metal shunt; and
disposing one or more second vias between the second metal layer and the third metal layer, wherein each of the one or more second vias electrically couples one or more corresponding metal lines to the second metal shunt.

26. The method of claim 23, wherein disposing the plurality of metal lines comprises disposing the plurality of metal lines having the metal pitch approximately equal to two-thirds of the gate pitch.

27. The method of claim 23, wherein disposing the plurality of metal lines comprises disposing the plurality of metal lines having the metal pitch between approximately one-half (1/2) and three-fourths (3/4) of the gate pitch.
STANDARD CELL CIRCUITS EMPLOYING VOLTAGE RAILS ELECTRICALLY COUPLED TO METAL SHUNTS FOR REDUCING OR AVOIDING INCREASES IN VOLTAGE DROP

PRIORITY CLAIM

[0001] The present application claims priority to U.S. Patent Application Serial No. 15/386,501 filed on December 21, 2016 and entitled "STANDARD CELL CIRCUITS EMPLOYING VOLTAGE RAILS ELECTRICALLY COUPLED TO METAL SHUNTS FOR REDUCING OR AVOIDING INCREASES IN VOLTAGE DROP," the contents of which are incorporated herein by reference in their entirety.

BACKGROUND

I. Field of the Disclosure

[0002] The technology of the disclosure relates generally to standard cell circuits, and particularly to avoiding or reducing increases in voltage drop in standard cell circuits.

II. Background

[0003] Processor-based computer systems can include a vast array of integrated circuits (ICs). Each IC has a complex layout design comprised of multiple IC devices. Standard cell circuits are often employed to assist in making the design of ICs less complex and more manageable. In particular, standard cell circuits provide a designer with pre-designed cells corresponding to commonly used IC devices that conform to specific design rules of a chosen technology. As non-limiting examples, standard cell circuits may include gates, inverters, multiplexers, and adders. Using standard cell circuits enables a designer to create ICs having consistent layout designs, thereby creating a more uniform and less complex layout design across multiple ICs, as compared to custom-designing each circuit.

[0004] Conventional standard cell circuits are fabricated using process technologies that form device elements with a pre-defined technology node size. For example, a process technology may be employed to fabricate a conventional standard cell circuit with device elements fourteen (14) nanometers (nm) or ten (10) nm wide.
Improvements in fabrication processes and related technologies are enabling decreases in technology node size, which allows a higher number of device elements, such as transistors, to be disposed in less area within a circuit. As technology node size scales down, gate and metal lines within a conventional standard cell circuit also scale down to reduce the area of a conventional standard cell circuit. For example, gate length can scale down to reduce the width of a conventional standard cell circuit, and metal line width can scale down to reduce the height.

[0005] However, as the technology node size scales down to ten (10) nm and below, for example, the width of a conventional standard cell circuit cannot continue to scale down due to gate pitch limitations. In particular, even as technology node size decreases, minimum gate length requirements for devices within a conventional standard cell circuit limit how small the gate pitch, and thus the width of the conventional standard cell circuit, may be reduced. Additionally, reducing the height of a conventional standard cell circuit may face limitations due to voltage requirements. For example, voltage rails employed in a conventional standard cell circuit and configured to receive voltage, such as supply voltage, can be scaled down to reduce the height of the conventional standard cell circuit. However, scaling down voltage rails increases rail resistances, thus increasing a voltage drop (i.e., current-resistance (IR) drop) across the voltage rails. Increased voltage drop reduces the voltage available from the voltage rails for devices in a conventional standard cell circuit, which may cause erroneous operation of the devices.
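The scaling trade-off described in paragraph [0005] (a narrower rail has higher resistance and hence a larger IR drop) and the shunt remedy introduced in the summary follow from the basic relations V = I·R and R = ρL/(W·T). The sketch below illustrates both; the resistivity and dimensions are illustrative assumptions only, not values from the disclosure.

```python
def rail_resistance(resistivity, length, width, thickness):
    """Resistance of a rail segment: R = rho * L / (W * T)."""
    return resistivity * length / (width * thickness)

def ir_drop(current, resistance):
    """Voltage (IR) drop across the rail: V = I * R."""
    return current * resistance

def with_parallel_shunt(rail_r, shunt_r):
    """Electrically coupling a metal shunt in parallel with a rail lowers the
    effective resistance: 1/R_eff = 1/R_rail + 1/R_shunt."""
    return (rail_r * shunt_r) / (rail_r + shunt_r)

RHO_CU = 1.7e-8  # ohm*m, roughly copper; all dimensions below are illustrative
wide = rail_resistance(RHO_CU, 1e-5, 2e-8, 5e-8)
narrow = rail_resistance(RHO_CU, 1e-5, 1e-8, 5e-8)  # half the line width
print(narrow / wide)                      # 2.0: halving the width doubles R
print(with_parallel_shunt(170.0, 170.0))  # 85.0: an equal parallel path halves R
```

The second result is the mechanism the disclosure exploits: adding conductive area in parallel with a narrow rail recovers the resistance (and IR drop) lost to scaling.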
Therefore, it would be advantageous to scale down the area of a standard cell circuit while reducing or avoiding increases in corresponding voltage drop.

SUMMARY OF THE DISCLOSURE

[0006] Aspects disclosed herein include standard cell circuits employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop. In particular, standard cell circuits described herein include metal lines disposed with a metal pitch, such that the number of metal lines allows some metal lines to be dedicated to electrically coupling voltage rails to metal shunts to increase the conductive area of the voltage rails. The increased conductive area reduces the resistance of the voltage rails, which reduces the voltage drop across the voltage rails. In this manner, the voltage rails can have a relatively smaller width while reducing or avoiding increases in voltage drop across the voltage rails. In one exemplary aspect, a standard cell circuit is provided in a circuit layout that employs active devices that include corresponding gates disposed with a gate pitch. A first voltage rail having a line width is disposed in a first metal layer, and a second voltage rail having substantially the same line width as the first voltage rail is disposed in the first metal layer. Employing the first and second voltage rails having substantially the same line width reduces the height of the standard cell circuit compared to conventional standard cell circuits. Metal lines are disposed in a second metal layer with a metal pitch less than the gate pitch, such that the number of metal lines exceeds the number of gates. In this manner, additional metal lines can be provided that can be dedicated to coupling the voltage rails to metal shunts disposed in a third metal layer to reduce the resistance of the narrower width voltage rails, while other metal lines can be dedicated to interconnecting the gates of the active devices.
Electrically coupling the first and second voltage rails to the metal shunts increases the conductive area of each voltage rail, which reduces a corresponding resistance. The reduced resistance corresponds to a reduced voltage drop (i.e., current-resistance (IR) drop) across each voltage rail. Thus, the standard cell circuit achieves a reduced area compared to conventional standard cell circuits by way of the narrower voltage rails, while also reducing or avoiding increases in voltage drop corresponding to the narrower voltage rails.

[0007] In this regard, in one aspect, a standard cell circuit is provided. The standard cell circuit comprises a plurality of active devices comprising a plurality of corresponding gates disposed with a gate pitch. The standard cell circuit also comprises a first voltage rail having a line width disposed in a first metal layer and corresponding to a first one-half track. The first voltage rail is configured to receive a first voltage. The standard cell circuit also comprises a second voltage rail having the line width disposed in the first metal layer and corresponding to a second one-half track. The second voltage rail is configured to receive a second voltage. The standard cell circuit also comprises a plurality of metal lines disposed in a second metal layer with a metal pitch less than the gate pitch. One or more metal lines of the plurality of metal lines is electrically coupled to one or more gates of the plurality of gates. The standard cell circuit also comprises a first metal shunt disposed in a third metal layer and electrically coupled to the first voltage rail using one or more metal lines of the plurality of metal lines not electrically coupled to the one or more gates.
The standard cell circuit also comprises a second metal shunt disposed in the third metal layer and electrically coupled to the second voltage rail using one or more metal lines of the plurality of metal lines not electrically coupled to the one or more gates.

[0008] In another aspect, a standard cell circuit is provided. The standard cell circuit comprises a means for performing a logic function comprising a means for receiving a gate voltage disposed with a gate pitch. The standard cell circuit also comprises a means for providing a first voltage disposed in a first metal layer having a line width and corresponding to a first one-half track. The standard cell circuit also comprises a means for providing a second voltage disposed in the first metal layer having the line width and corresponding to a second one-half track. The standard cell circuit also comprises a plurality of means for electrically coupling disposed in a second metal layer with a metal pitch less than the gate pitch. One or more means for electrically coupling is electrically coupled to the means for receiving the gate voltage. The standard cell circuit also comprises a means for increasing a first resistance disposed in a third metal layer electrically coupled to the means for providing the first voltage and one or more means for electrically coupling not electrically coupled to the means for receiving the gate voltage. The standard cell circuit also comprises a means for increasing a second resistance disposed in the third metal layer electrically coupled to the means for providing the second voltage and one or more means for electrically coupling not electrically coupled to the means for receiving the gate voltage.

[0009] In another aspect, a method of manufacturing a standard cell circuit employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop is provided. The method comprises disposing a plurality of gates with a gate pitch.
Each gate of the plurality of gates corresponds to an active device of a plurality of active devices. The method also comprises disposing a first voltage rail in a first metal layer and corresponding to a first one-half track, wherein the first voltage rail has a line width and is configured to receive a first voltage. The method also comprises disposing a second voltage rail in the first metal layer and corresponding to a second one-half track, wherein the second voltage rail has the line width and is configured to receive a second voltage. The method also comprises disposing a plurality of metal lines in a second metal layer and having a metal pitch less than the gate pitch. One or more metal lines of the plurality of metal lines is electrically coupled to one or more gates of the plurality of gates. The method also comprises disposing a first metal shunt in a third metal layer, wherein the first metal shunt is electrically coupled to the first voltage rail and one or more metal lines of the plurality of metal lines not electrically coupled to the one or more gates.
The method also comprises disposing a second metal shunt in the third metal layer, wherein the second metal shunt is electrically coupled to the second voltage rail and one or more metal lines of the plurality of metal lines not electrically coupled to the one or more gates.

BRIEF DESCRIPTION OF THE FIGURES

[0010] Figure 1 is a top-view diagram of a conventional standard cell circuit employing first and second voltage rails having a width that is larger than a width of routing lines;

[0011] Figure 2A is a top-view diagram of an exemplary standard cell circuit employing voltage rails electrically coupled to metal shunts by way of dedicated metal lines made available by employing a metal pitch that is less than a gate pitch, wherein the metal shunts reduce or avoid increases in voltage drop that would otherwise result from narrower voltage rails while allowing the standard cell circuit to achieve a reduced area;

[0012] Figure 2B illustrates a cross-sectional diagram of the standard cell circuit of Figure 2A employing voltage rails electrically coupled to the metal shunts, taken generally along the line A-A of Figure 2A;

[0013] Figure 2C illustrates a cross-sectional diagram of the standard cell circuit of Figure 2A employing voltage rails electrically coupled to the metal shunts, taken generally along the line B-B of Figure 2A;

[0014] Figure 3 is a flowchart illustrating an exemplary process for fabricating the standard cell circuit in Figure 2A employing voltage rails electrically coupled to metal shunts by way of dedicated metal lines made available by employing a metal pitch that is less than a gate pitch, wherein the metal shunts reduce or avoid increases in voltage drop that would otherwise result from narrower voltage rails while allowing the standard cell circuit to achieve a reduced area;

[0015] Figure 4 is a cross-sectional diagram of an exemplary standard cell circuit employing voltage rails electrically coupled to respective metal shunts so as to achieve an
increased power net (PN) vertical connection density;

[0016] Figure 5 is a cross-sectional diagram of a conventional standard cell circuit with a PN vertical connection density limited by the cell width of the conventional standard cell circuit;

[0017] Figure 6 is a block diagram of an exemplary processor-based system that can include the standard cell circuit employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop while achieving a reduced area of Figure 2A; and

[0018] Figure 7 is a block diagram of an exemplary wireless communications device that includes radio-frequency (RF) components formed in an integrated circuit (IC), wherein the RF components can include the standard cell circuit employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop while achieving a reduced area of Figure 2A.

DETAILED DESCRIPTION

[0019] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

[0020] Aspects disclosed herein include standard cell circuits employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop. In particular, standard cell circuits described herein include metal lines disposed with a metal pitch, such that the number of metal lines allows some metal lines to be dedicated to electrically coupling voltage rails to metal shunts to increase the conductive area of the voltage rails. The increased conductive area reduces the resistance of the voltage rails, which reduces the voltage drop across the voltage rails.
In this manner, the voltage rails can have a relatively smaller width while reducing or avoiding increases in voltage drop across the voltage rails. In one exemplary aspect, a standard cell circuit is provided in a circuit layout that employs active devices that include corresponding gates disposed with a gate pitch. A first voltage rail having a line width is disposed in a first metal layer, and a second voltage rail having substantially the same line width as the first voltage rail is disposed in the first metal layer. Employing the first and second voltage rails having substantially the same line width reduces the height of the standard cell circuit compared to conventional standard cell circuits. Metal lines are disposed in a second metal layer with a metal pitch less than the gate pitch, such that the number of metal lines exceeds the number of gates. In this manner, additional metal lines can be provided that can be dedicated to coupling the voltage rails to metal shunts disposed in a third metal layer to reduce the resistance of the narrower width voltage rails, while other metal lines can be dedicated to interconnecting the gates of the active devices. Electrically coupling the first and second voltage rails to the metal shunts increases the conductive area of each voltage rail, which reduces a corresponding resistance. The reduced resistance corresponds to a reduced voltage drop (i.e., current-resistance (IR) drop) across each voltage rail. Thus, the standard cell circuit achieves a reduced area compared to conventional standard cell circuits by way of the narrower voltage rails, while also reducing or avoiding increases in voltage drop corresponding to the narrower voltage rails.

[0021] Before discussing the details of standard cell circuits employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop beginning in Figure 2A, a conventional standard cell circuit is first described.
In this regard, Figure 1 illustrates a layout 100 of a conventional standard cell circuit 102. The standard cell circuit 102 employs active devices (not shown) that include corresponding gates 104(1)-104(4) disposed in a first direction 106 with a gate pitch GP. The standard cell circuit 102 includes a first voltage rail 108 disposed in a second direction 110 substantially orthogonal to the first direction 106 in a first metal layer 112 (e.g., a metal zero (M0) metal layer). The first voltage rail 108 has a rail width WRAIL. The first voltage rail 108 corresponds to a first track 114(1) and is configured to receive a first voltage, such as a supply voltage. Additionally, the standard cell circuit 102 includes a second voltage rail 116 disposed in the second direction 110 in the first metal layer 112. The second voltage rail 116 has the rail width WRAIL. The second voltage rail 116 corresponds to a second track 114(2) and is configured to receive a second voltage, such as a ground voltage. The first and second voltage rails 108, 116 have the rail width WRAIL such that the corresponding conductive area of each is large enough to achieve a relatively low resistance, and thus, a relatively low voltage drop across the first and second voltage rails 108, 116.

[0022] With continuing reference to Figure 1, the standard cell circuit 102 also employs routing lines 118(1)-118(5) disposed in the second direction 110 in the first metal layer 112 between the first and second voltage rails 108, 116. The routing lines 118(1)-118(5) are used, in part, to interconnect elements in the standard cell circuit 102 to form various devices, such as particular logic gates.
Each routing line 118(1)-118(5) corresponds to a routing track 120(1)-120(4), and has a line width WLINE. To further assist in interconnecting elements in the standard cell circuit 102, as well as to interconnect elements to the first and second voltage rails 108, 116, metal lines 122(1)-122(3) are disposed substantially in the first direction 106 in a second metal layer 124 (e.g., a metal one (M1) metal layer) between the respective gates 104(1)-104(4). The metal lines 122(1)-122(3) have a metal pitch MP approximately equal to the gate pitch GP. In other words, a ratio of the metal pitch MP to the gate pitch GP is approximately equal to 1:1. The standard cell circuit 102 employs such a 1:1 ratio, in part, due to conventional fabrication techniques.

[0023] With continuing reference to Figure 1, as the technology node size scales down to ten (10) nanometers (nm) and below, the percentage by which the layout 100 can scale down in the second direction 110 is limited due to gate pitch GP requirements. However, the layout 100 may scale down in area by reducing a total height HCELL. For example, the total height HCELL of the layout 100 in the first direction 106 is measured from the center of the first voltage rail 108 to the center of the second voltage rail 116. Thus, to reduce the total height HCELL, the first and second voltage rails 108, 116 can be employed having a width smaller than the rail width WRAIL such that each of the first and second voltage rails 108, 116 consumes a one-half track instead of the first and second tracks 114(1), 114(2). Reducing the width of the first and second voltage rails 108, 116 in this manner causes the standard cell circuit 102 to be referred to as a five (5) track cell (i.e., two (2) one-half tracks plus routing tracks 120(1)-120(4)) rather than a six (6) track cell (i.e., first and second tracks 114(1), 114(2) plus four (4) routing tracks 120(1)-120(4)) as illustrated in Figure 1.
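The track arithmetic above can be checked with a short numeric sketch. The sketch below is illustrative only: it assumes a uniform track pitch, treats each full track as contributing one pitch to the cell height and each one-half rail track as contributing half a pitch, and all names and the normalized pitch value are hypothetical.

```python
# Illustrative sketch of the five-track vs. six-track height comparison.
# Assumes a uniform track pitch; values are normalized, not taken from
# any particular process technology.

def cell_height(full_tracks: int, half_tracks: int, track_pitch: float) -> float:
    """Cell height in the same units as track_pitch: each full track
    contributes one pitch, each one-half track contributes half a pitch."""
    return (full_tracks + 0.5 * half_tracks) * track_pitch

# Six-track cell: two full-width rail tracks plus four routing tracks.
six_track_height = cell_height(full_tracks=6, half_tracks=0, track_pitch=1.0)

# Five-track cell: four routing tracks plus two one-half rail tracks.
five_track_height = cell_height(full_tracks=4, half_tracks=2, track_pitch=1.0)

# Fractional height reduction from narrowing the rails: 1/6, about 16.7%.
height_reduction = 1.0 - five_track_height / six_track_height
```

Under these assumptions, narrowing the two rails from full tracks to one-half tracks alone shrinks the cell height by roughly one-sixth, which is the area saving the narrower rails are meant to deliver.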
However, reducing the rail width WRAIL decreases the conductive area of both the first and second voltage rails 108, 116. Such a reduction in the conductive area results in both the first and second voltage rails 108, 116 having an increased resistance, and thus an increased voltage drop (i.e., current-resistance (IR) drop). An increased voltage drop reduces the voltage distributed from the first and second voltage rails 108, 116 to corresponding devices, which may cause erroneous operation of devices in the standard cell circuit 102.

[0024] In this regard, Figures 2A-2C illustrate an exemplary layout 200 of an exemplary standard cell circuit 202 employing first and second voltage rails 204, 206 electrically coupled to first and second metal shunts 208, 210 for reducing or avoiding increases in voltage drop while achieving a reduced area. As described in more detail below, the standard cell circuit 202 includes metal lines 212(1)-212(8) disposed with a metal pitch MP such that the number of metal lines 212(1)-212(8) allows additional metal lines 212(1)-212(8) to be dedicated to electrically coupling the first and second voltage rails 204, 206 to the respective first and second metal shunts 208, 210. Such coupling increases the conductive area of the first and second voltage rails 204, 206, which reduces the resistance and the voltage drop across the first and second voltage rails 204, 206. In this manner, the first and second voltage rails 204, 206 can have a relatively smaller width while reducing or avoiding increases in voltage drop across the first and second voltage rails 204, 206. Figure 2A illustrates a top-view of the layout 200 of the standard cell circuit 202, while Figures 2B and 2C illustrate cross-sectional views of the layout 200 of the standard cell circuit 202. The cross-sectional diagram of Figure 2B is taken generally along the line A-A of Figure 2A, and the cross-sectional diagram of Figure 2C is taken generally along the line B-B of Figure 2A.
Components of the layout 200 of the standard cell circuit 202 are referred to with common element numbers in Figures 2A-2C.

[0025] With reference to Figures 2A-2C, the standard cell circuit 202 includes active devices (not shown) that include corresponding gates 214(1)-214(4) disposed in a first direction 216 with a gate pitch GP. While this aspect includes the gates 214(1)-214(4), other aspects may employ any number M of gates 214. The standard cell circuit 202 also includes the first voltage rail 204 disposed in a second direction 218 substantially orthogonal to the first direction 216 in a first metal layer 220 (e.g., a metal zero (M0) metal layer). The first voltage rail 204 has a line width WLINE. The first voltage rail 204 corresponds to a first one-half track 222(1) and is configured to receive a first voltage, such as a supply voltage. Additionally, the standard cell circuit 202 includes the second voltage rail 206 disposed in the second direction 218 in the first metal layer 220 (e.g., M0 metal layer). The second voltage rail 206 has the line width WLINE. The second voltage rail 206 corresponds to a second one-half track 222(2) and is configured to receive a second voltage, such as a ground voltage.

[0026] With continuing reference to Figures 2A-2C, the standard cell circuit 202 also includes routing lines 224(1)-224(5) disposed in the second direction 218 substantially orthogonal to the first direction 216 in the first metal layer 220 (e.g., M0 metal layer) between the first and second voltage rails 204, 206. The routing lines 224(1)-224(5) are used, in part, to interconnect elements in the standard cell circuit 202 to form various devices, such as particular logic gates.
Each routing line 224(1)-224(5) corresponds to a routing track 226(1)-226(4), and has the line width WLINE. For example, the routing line 224(1) corresponds to the routing track 226(1), the routing line 224(2) corresponds to the routing track 226(2), the routing lines 224(3), 224(4) correspond to the routing track 226(3), and the routing line 224(5) corresponds to the routing track 226(4). As used herein, a track, such as a one-half track 222(1), 222(2) or a routing track 226(1)-226(4), is a defined area in the layout 200 in which a particular type of line, such as the first voltage rail 204 or routing line 224(1), may be disposed.

[0027] With continuing reference to Figures 2A-2C, to further assist in interconnecting elements in the standard cell circuit 202, as well as to interconnect elements to the first and second voltage rails 204, 206, the metal lines 212(1)-212(8) are disposed in the first direction 216 in a second metal layer 228 (e.g., a metal one (M1) metal layer). As described in more detail below, the metal lines 212(1)-212(8) have a metal pitch MP that is less than the gate pitch GP such that the number of metal lines 212(1)-212(8) exceeds the number of gates 214(1)-214(4). While this aspect includes the metal lines 212(1)-212(8), other aspects may employ any number N of metal lines 212.

[0028] With continuing reference to Figures 2A-2C, the first and second voltage rails 204, 206 having substantially the same line width WLINE as the routing lines 224(1)-224(5) results in the first and second voltage rails 204, 206 being narrower in line width WLINE than the rail width WRAIL of the first and second voltage rails 108, 116 in Figure 1. In this manner, the layout 200 of the standard cell circuit 202 has a smaller cell height HCELL compared to the layout 100 of the standard cell circuit 102 in Figure 1.
However, the first and second voltage rails 204, 206 having the line width WLINE decreases the conductive area of the first and second voltage rails 204, 206, which increases their resistance. To reduce or avoid an increase in a voltage drop (i.e., current-resistance (IR) drop) attributable to such increased resistance, the standard cell circuit 202 includes the first and second metal shunts 208, 210 in a third metal layer 230 (e.g., a metal two (M2) metal layer) that are electrically coupled to the first and second voltage rails 204, 206, respectively. In particular, the first and second voltage rails 204, 206 are electrically coupled to the first and second metal shunts 208, 210, respectively, by way of a subset of the metal lines 212(1)-212(8) that are not electrically coupled to the gates 214(1)-214(4). The respective first and second metal shunts 208, 210 increase the conductive area of the first and second voltage rails 204, 206. Increasing the conductive area of the first and second voltage rails 204, 206 reduces their resistance, which reduces or avoids an increase in the voltage drop (i.e., IR drop) across the first and second voltage rails 204, 206.

[0029] As a non-limiting example, with continuing reference to Figures 2A-2C, the first metal shunt 208 is electrically coupled to the first voltage rail 204 using the metal lines 212(3), 212(7). More specifically, in this example, first vias 232(1), 232(2) are disposed between the first metal layer 220 (e.g., M0 metal layer) and the second metal layer 228 (e.g., M1 metal layer) such that the first vias 232(1), 232(2) electrically couple the metal lines 212(3), 212(7), respectively, to the first voltage rail 204. Further, first vias 234(1), 234(2) are disposed between the second metal layer 228 (e.g., M1 metal layer) and the third metal layer 230 (e.g., M2 metal layer) such that the first vias 234(1), 234(2) electrically couple the metal lines 212(3), 212(7) to the first metal shunt 208.
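The electrical effect of the shunt coupling just described can be approximated with a simple lumped model. The sketch below is illustrative only: it treats the shunt path (vias plus metal line plus metal shunt) as a single resistance in parallel with the rail, which is a simplification of the real distributed network, and all resistance and current values are assumed.

```python
# Lumped-model sketch: a metal shunt electrically coupled in parallel
# with a narrow voltage rail lowers the effective resistance, and hence
# the IR drop, seen by the supplied devices. All values are illustrative.

def parallel(r_a: float, r_b: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r_a * r_b) / (r_a + r_b)

r_rail = 8.0    # ohms, assumed resistance of the narrow rail alone
r_shunt = 8.0   # ohms, assumed resistance of the shunt path (vias + shunt)
i_load = 0.005  # amperes, assumed load current drawn through the rail

drop_without_shunt = i_load * r_rail                   # IR drop, rail only
drop_with_shunt = i_load * parallel(r_rail, r_shunt)   # IR drop, with shunt
```

With equal assumed rail and shunt resistances, the parallel combination halves the effective resistance, so the IR drop halves as well; the qualitative point is simply that any added parallel shunt path can only lower the effective resistance.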
Additionally, the first voltage rail 204 is electrically coupled to device layers 236(1)-236(3) of corresponding devices using contacts 238(1)-238(3), respectively. For example, the device layers 236(1)-236(3) may be sources of corresponding devices, wherein the contacts 238(1)-238(3) may be corresponding source contacts.

[0030] With continuing reference to Figures 2A-2C, the second metal shunt 210 is electrically coupled to the second voltage rail 206 using the metal lines 212(4), 212(8). More specifically, in this example, second vias 240(1), 240(2) are disposed between the first metal layer 220 (e.g., M0 metal layer) and the second metal layer 228 (e.g., M1 metal layer) such that the second vias 240(1), 240(2) electrically couple the metal lines 212(4), 212(8), respectively, to the second voltage rail 206. Further, second vias 242(1), 242(2) are disposed between the second metal layer 228 (e.g., M1 metal layer) and the third metal layer 230 (e.g., M2 metal layer) such that the second vias 242(1), 242(2) electrically couple the metal lines 212(4), 212(8) to the second metal shunt 210. Additionally, the second voltage rail 206 is electrically coupled to device layers 236(4)-236(6) of corresponding devices using contacts 238(4)-238(6), respectively. For example, the device layers 236(4)-236(6) may be sources of corresponding devices, wherein the contacts 238(4)-238(6) may be corresponding source contacts.

[0031] With continuing reference to Figures 2A-2C, to employ the first and second metal shunts 208, 210 as described above, the metal lines 212(3), 212(4), 212(7), and 212(8) are not used to electrically couple other elements in the standard cell circuit 202, such as the gates 214(1)-214(4) of the active devices.
In other words, the metal lines 212(3), 212(4), 212(7), and 212(8) are dedicated to electrically coupling the first and second metal shunts 208, 210 to the first and second voltage rails 204, 206, respectively, and are not used to electrically couple other elements. In order to have enough metal lines 212(1)-212(8) to allow the metal lines 212(3), 212(4), 212(7), and 212(8) to be used in this manner, the metal lines 212(1)-212(8) have the metal pitch MP that is less than the gate pitch GP such that the number of metal lines 212(1)-212(8) exceeds the number of gates 214(1)-214(4). As a non-limiting example, the metal pitch MP in this aspect is equal or approximately equal to two-thirds (2/3) of the gate pitch GP (i.e., a ratio of the metal pitch MP to the gate pitch GP is approximately equal to 2:3). Thus, in this example, if the standard cell circuit 202 is fabricated using a process technology having a ten (10) nm technology node size, the metal pitch MP and the gate pitch GP may be equal or approximately equal to twenty-eight (28) nm and forty-two (42) nm, respectively. This configuration results in enough metal lines 212(1)-212(8) to allow the metal lines 212(3), 212(4), 212(7), and 212(8) to be dedicated to electrically coupling the first and second voltage rails 204, 206 to the first and second metal shunts 208, 210, respectively. Further, the remaining metal lines 212(1), 212(2), 212(5), and 212(6) can be electrically coupled to one or more of the gates 214(1)-214(4) so as to interconnect corresponding active devices.

[0032] With continuing reference to Figures 2A-2C, other aspects of the standard cell circuit 202 may employ a different ratio of metal pitch MP to gate pitch GP and achieve similar results. As a non-limiting example, the metal pitch MP can be between approximately one-half (1/2) and three-fourths (3/4) of the gate pitch GP.
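The 2:3 pitch arithmetic from the 28 nm / 42 nm example above can be checked with a short calculation. The sketch below is illustrative: it simply counts how many metal-pitch intervals fit across the span of the gates and ignores boundary effects and track-assignment rules, so the exact line count in a real layout may differ.

```python
import math

# Illustrative check of the pitch arithmetic above: with a metal pitch of
# roughly two-thirds the gate pitch, three metal lines fit in the span of
# every two gates, so the metal lines outnumber the gates and the surplus
# lines can be dedicated to shunt coupling.

gate_pitch_nm = 42.0   # from the 10 nm node example above
metal_pitch_nm = 28.0  # approximately 2/3 of the gate pitch

num_gates = 4                                 # gates 214(1)-214(4)
span_nm = num_gates * gate_pitch_nm           # 168 nm spanned by the gates
num_metal_lines = math.floor(span_nm / metal_pitch_nm) + 1  # lines in span

surplus_lines = num_metal_lines - num_gates   # available for shunt coupling
```

The exact count depends on where the cell boundary falls relative to the pitch grid; the point is only that a metal pitch below the gate pitch always leaves surplus metal lines that can be dedicated to coupling the voltage rails to the metal shunts.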
If the metal pitch MP to gate pitch GP ratio is in such an exemplary range, the metal pitch MP may be between approximately twenty (20) nm and thirty (30) nm, while the gate pitch GP may be between approximately forty (40) nm and forty-two (42) nm, for example.

[0033] Further, as described above, employing the first and second voltage rails 204, 206 having the line width WLINE allows the cell height HCELL of the layout 200 of the standard cell circuit 202 to be less than the cell height HCELL of the layout 100 of the standard cell circuit 102 in Figure 1. In particular, the cell height HCELL can be minimized by setting the line width WLINE approximately equal to a minimum line width of the process technology used to fabricate the standard cell circuit 202. As used herein, the minimum line width is the minimum size in which a routing line 224(1)-224(5) can be fabricated without violating design rules of the process technology. For example, a process technology having a ten (10) nm technology node size may have a minimum line width approximately equal to fourteen (14) nm. Minimizing the cell height HCELL allows the standard cell circuit 202 to achieve a reduced area compared to the standard cell circuit 102 in Figure 1. Thus, the standard cell circuit 202 can achieve a smaller area compared to the standard cell circuit 102 in Figure 1 by way of the narrower first and second voltage rails 204, 206, while also reducing or avoiding increases in voltage drop (i.e., IR drop) corresponding to the narrower first and second voltage rails 204, 206.

[0034] Figure 3 illustrates an exemplary process 300 for fabricating the standard cell circuit 202 in Figure 2A. In this regard, the process 300 includes disposing the gates 214(1)-214(4) with the gate pitch GP (block 302). As previously noted, each gate 214(1)-214(4) corresponds to an active device. The process 300 also includes disposing the first voltage rail 204 in the first metal layer 220 (e.g., M0 metal layer) (block 304).
As discussed above, the first voltage rail 204 corresponds to the first one-half track 222(1), has the line width WLINE, and is configured to receive the first voltage. Additionally, the process 300 includes disposing the second voltage rail 206 in the first metal layer 220 (e.g., M0 metal layer) (block 306). As discussed above, the second voltage rail 206 corresponds to the second one-half track 222(2), has the line width WLINE, and is configured to receive the second voltage. Although illustrated in separate blocks 304 and 306, the first and second voltage rails 204, 206 may be disposed concurrently or simultaneously during the fabrication process 300. Additionally, although not illustrated in Figure 3, aspects disclosed herein may also dispose the routing lines 224(1)-224(5) concurrently or simultaneously with the first and second voltage rails 204, 206, if applicable.

[0035] With continuing reference to Figure 3, the process 300 includes disposing the metal lines 212(1)-212(8) in the second metal layer 228 (e.g., M1 metal layer) and having the metal pitch MP less than the gate pitch GP (block 308). The process 300 also includes disposing the first metal shunt 208 in the third metal layer 230 (e.g., M2 metal layer), wherein the first metal shunt 208 is electrically coupled to the first voltage rail 204 and the metal lines 212(3), 212(7), which are not electrically coupled to the gates 214(1)-214(4) (block 310). For example, such electrical coupling can be achieved by disposing the first vias 232(1), 232(2) between the first and second metal layers 220, 228, and disposing the first vias 234(1), 234(2) between the second and third metal layers 228, 230, as previously described.
Additionally, the process 300 includes disposing the second metal shunt 210 in the third metal layer 230 (e.g., M2 metal layer), wherein the second metal shunt 210 is electrically coupled to the second voltage rail 206 and the metal lines 212(4), 212(8), which are not electrically coupled to the gates 214(1)-214(4) (block 312). For example, such electrical coupling can be achieved by disposing the second vias 240(1), 240(2) between the first and second metal layers 220, 228, and disposing the second vias 242(1), 242(2) between the second and third metal layers 228, 230, as previously described. Additionally, although illustrated in separate blocks 310 and 312, the first and second metal shunts 208, 210 may be disposed concurrently or simultaneously during the fabrication process 300.

[0036] With continuing reference to Figure 3, in the standard cell circuit 202 in Figures 2A-2C and fabricated using the process 300, the second metal layer 228 (e.g., M1 metal layer) is disposed above the first metal layer 220 (e.g., M0 metal layer). The third metal layer 230 (e.g., M2 metal layer) is disposed above the second metal layer 228 (e.g., M1 metal layer). However, other aspects may employ the first, second, and third metal layers 220, 228, and 230 in alternative orientations relative to one another and achieve similar results. In other words, the first, second, and third metal layers 220, 228, and 230 are not limited to the M0, M1, and M2 metal layers, respectively.

[0037] In addition to reducing or avoiding increases in voltage drop as described above, a standard cell circuit employing voltage rails electrically coupled to metal shunts may also achieve a higher power net (PN) vertical connection density as compared to conventional standard cell circuits.
A higher PN vertical connection density can be used to adjust the resistance corresponding to the voltage rails of the standard cell circuit to achieve a desired voltage drop (i.e., IR drop) independent of the width of the standard cell circuit.

[0038] In this regard, Figure 4 illustrates a cross-sectional diagram of an exemplary standard cell circuit 400 designed to achieve an increased PN vertical connection density. More specifically, the standard cell circuit 400 includes a first voltage rail 402 disposed in a first metal layer 404 (e.g., an M0 metal layer). Metal lines 406(1)-406(7) are disposed in a second metal layer 408 (e.g., an M1 metal layer) that electrically couple to the first voltage rail 402 and a first metal shunt 410 disposed in a third metal layer 412 (e.g., an M2 metal layer). In this example, first vias 414(1)-414(7) are disposed between the first voltage rail 402 and a corresponding metal line 406(1)-406(7) such that the first vias 414(1)-414(7) electrically couple the metal lines 406(1)-406(7), respectively, to the first voltage rail 402. Additionally, first vias 416(1)-416(7) are disposed between the corresponding metal lines 406(1)-406(7) and the first metal shunt 410 such that the first vias 416(1)-416(7) electrically couple the metal lines 406(1)-406(7), respectively, to the first metal shunt 410. The first voltage rail 402 is electrically coupled to a device 418 using a contact 420.

[0039] With continuing reference to Figure 4, the metal lines 406(1)-406(7) can be adjusted to change the resistance of the first voltage rail 402. For example, the number of metal lines 406(1)-406(7) may be reduced to achieve a higher resistance, and thus a higher voltage drop (i.e., IR drop). Alternatively, the number of metal lines 406(1)-406(7) may be left unchanged to achieve a lower resistance, and thus a lower voltage drop (i.e., IR drop).
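The resistance tuning described in this paragraph can be sketched as identical vertical legs in parallel: each via/metal-line/via leg between the voltage rail and the metal shunt adds one parallel path. The per-leg resistance and the leg counts below are assumed for illustration only.

```python
# Sketch of PN vertical connection density: each vertical leg (via, metal
# line, via) forms one parallel path between the voltage rail and the
# metal shunt, so the effective resistance falls as legs are added and
# rises as legs are removed. The per-leg resistance is illustrative.

def effective_resistance(r_per_leg: float, num_legs: int) -> float:
    """N identical legs in parallel: R_eff = R_leg / N."""
    if num_legs < 1:
        raise ValueError("at least one vertical leg is required")
    return r_per_leg / num_legs

r_leg = 70.0  # ohms, assumed resistance of one via/metal-line/via leg

r_all_legs = effective_resistance(r_leg, num_legs=7)  # all seven metal lines
r_few_legs = effective_resistance(r_leg, num_legs=2)  # reduced line count
```

Note that the cell width never enters the formula: the resistance, and hence the IR drop delivered to the device, is set entirely by the number of vertical legs.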
Importantly, adjusting the PN vertical connection density by adjusting the number of metal lines 406(1)-406(7) in this manner is independent of a cell width WCELL of the standard cell circuit 400. In other words, the PN vertical connection density, and thus the voltage drop (i.e., IR drop) of the standard cell circuit 400, can be adjusted as described above so as to provide a desired voltage to the device 418 without altering or being limited by the cell width WCELL.

[0040] In contrast, Figure 5 illustrates a cross-sectional diagram of a conventional standard cell circuit 500 with a PN vertical connection density limited by the cell width WCELL of the standard cell circuit 500. More specifically, the standard cell circuit 500 includes a first voltage rail 502 disposed in a first metal layer 504 (e.g., an M0 metal layer). However, the standard cell circuit 500 does not include metal lines in a second metal layer 506 (e.g., an M1 metal layer) or a first metal shunt in a third metal layer 508 (e.g., an M2 metal layer) as in the standard cell circuit 400 in Figure 4. In this manner, the PN vertical connection density of the standard cell circuit 500 is limited to a first vertical leg 510(1) and a second vertical leg 510(2) at the outer boundary edges of the standard cell circuit 500. In other words, because the standard cell circuit 500 only includes connections to the first voltage rail 502 at the outer boundary edges, the PN vertical connection density is dependent on the cell width WCELL of the standard cell circuit 500. Thus, the voltage provided to a device 512 electrically coupled to the first voltage rail 502 using a contact 514 is also dependent on the cell width WCELL.

[0041] The elements described herein are sometimes referred to as means for performing particular functions.
In this regard, the active devices are sometimes referred to herein as "a means for performing a logic function," and the gates 214(1)-214(4) are sometimes referred to herein as "a means for receiving gate voltage disposed in a first direction with a gate pitch." The first voltage rail 204 is sometimes referred to herein as "a means for providing a first voltage disposed in a first metal layer having a line width and corresponding to a first one-half track." The second voltage rail 206 is sometimes referred to herein as "a means for providing a second voltage disposed in the first metal layer having the line width and corresponding to a second one-half track." The metal lines 212(1)-212(8) are sometimes referred to herein as "a plurality of means for electrically coupling disposed in a second metal layer with a metal pitch less than the gate pitch."[0042] Additionally, the first metal shunt 208 is sometimes referred to herein as "a means for increasing a first resistance disposed in a third metal layer electrically coupled to the means for providing the first voltage and one or more means for electrically coupling not electrically coupled to the means for receiving the gate voltage." The second metal shunt 210 is sometimes referred to herein as "a means for increasing a second resistance disposed in the third metal layer electrically coupled to the means for providing the second voltage and one or more means for electrically coupling not electrically coupled to the means for receiving the gate voltage." Further, the first vias 232(1), 232(2) are sometimes referred to herein as "a means for interconnecting the means for providing the first voltage to the plurality of means for electrically coupling." The second vias 240(1), 240(2) are sometimes referred to herein as "a means for interconnecting the means for providing the second voltage to the plurality of means for electrically coupling."
The first vias 234(1), 234(2) are sometimes referred to herein as "a means for interconnecting the means for electrically coupling to the means for increasing the first resistance." Further, the second vias 242(1), 242(2) are sometimes referred to herein as "a means for interconnecting the means for electrically coupling to the plurality of means for increasing the second resistance."[0043] The standard cell circuits employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop according to aspects disclosed herein may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a global positioning system (GPS) device, a mobile phone, a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a tablet, a phablet, a server, a computer, a portable computer, a mobile computing device, a wearable computing device (e.g., a smart watch, a health or fitness tracker, eyewear, etc.), a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, an automobile, a vehicle component, avionics systems, a drone, and a multicopter.[0044] In this regard, Figure 6 illustrates an example of a processor-based system 600 that can employ the standard cell circuit 202 employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop while achieving a reduced area illustrated in Figure 2A. In this example, the processor-based system 600 includes one or more central processing units (CPUs) 602, each including one or more processors 604. 
The CPU(s) 602 may have cache memory 606 coupled to the processor(s) 604 for rapid access to temporarily stored data. The CPU(s) 602 is coupled to a system bus 608 and can intercouple master and slave devices included in the processor-based system 600. As is well known, the CPU(s) 602 communicates with these other devices by exchanging address, control, and data information over the system bus 608. For example, the CPU(s) 602 can communicate bus transaction requests to a memory controller 610 as an example of a slave device. Although not illustrated in Figure 6, multiple system buses 608 could be provided, wherein each system bus 608 constitutes a different fabric.[0045] Other master and slave devices can be connected to the system bus 608. As illustrated in Figure 6, these devices can include a memory system 612, one or more input devices 614, one or more output devices 616, one or more network interface devices 618, and one or more display controllers 620, as examples. The input device(s) 614 can include any type of input device, including, but not limited to, input keys, switches, voice processors, etc. The output device(s) 616 can include any type of output device, including, but not limited to, audio, video, other visual indicators, etc. The network interface device(s) 618 can be any device configured to allow exchange of data to and from a network 622. The network 622 can be any type of network, including, but not limited to, a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 618 can be configured to support any type of communications protocol desired. The memory system 612 can include one or more memory units 624(0)-624(M).[0046] The CPU(s) 602 may also be configured to access the display controller(s) 620 over the system bus 608 to control information sent to one or more displays 626. 
The display controller(s) 620 sends information to the display(s) 626 to be displayed via one or more video processors 628, which process the information to be displayed into a format suitable for the display(s) 626. The display(s) 626 can include any type of display, including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, etc. [0047] Figure 7 illustrates an example of a wireless communications device 700 that can include the standard cell circuit 202 employing voltage rails electrically coupled to metal shunts for reducing or avoiding increases in voltage drop while achieving a reduced area illustrated in Figure 2A. In this regard, the wireless communications device 700 may be provided in an integrated circuit (IC) 702. The wireless communications device 700 may include or be provided in any of the above referenced devices, as examples. As shown in Figure 7, the wireless communications device 700 includes a transceiver 704 and a data processor 706. The data processor 706 may include a memory (not shown) to store data and program codes. The transceiver 704 includes a transmitter 708 and a receiver 710 that support bi-directional communication. In general, the wireless communications device 700 may include any number of transmitters and/or receivers for any number of communication systems and frequency bands. All or a portion of the transceiver 704 may be implemented on one or more analog ICs, RF ICs (RFICs), mixed-signal ICs, etc.[0048] A transmitter or a receiver may be implemented with a super-heterodyne architecture or a direct-conversion architecture. In the super-heterodyne architecture, a signal is frequency-converted between radio frequency (RF) and baseband in multiple stages, e.g., from RF to an intermediate frequency (IF) in one stage, and then from IF to baseband in another stage for a receiver. 
In the direct-conversion architecture, a signal is frequency converted between RF and baseband in one stage. The super-heterodyne and direct-conversion architectures may use different circuit blocks and/or have different requirements. In the wireless communications device 700 in Figure 7, the transmitter 708 and the receiver 710 are implemented with the direct-conversion architecture.[0049] In the transmit path, the data processor 706 processes data to be transmitted and provides I and Q analog output signals to the transmitter 708. In the exemplary wireless communications device 700, the data processor 706 includes digital-to-analog converters (DACs) 712(1), 712(2) for converting digital signals generated by the data processor 706 into the I and Q analog output signals, e.g., I and Q output currents, for further processing.[0050] Within the transmitter 708, lowpass filters 714(1), 714(2) filter the I and Q analog output signals, respectively, to remove undesired signals caused by the prior digital-to-analog conversion. Amplifiers (AMP) 716(1), 716(2) amplify the signals from the lowpass filters 714(1), 714(2), respectively, and provide I and Q baseband signals. An upconverter 718 upconverts the I and Q baseband signals with I and Q transmit (TX) local oscillator (LO) signals through mixers 720(1), 720(2) from a TX LO signal generator 722 to provide an upconverted signal 724. A filter 726 filters the upconverted signal 724 to remove undesired signals caused by the frequency upconversion as well as noise in a receive frequency band. A power amplifier (PA) 728 amplifies the upconverted signal 724 from the filter 726 to obtain the desired output power level and provides a transmit RF signal.
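The quadrature upconversion performed by the mixers 720(1), 720(2) follows the standard relation s[n] = I[n]·cos(2πfn/fs) − Q[n]·sin(2πfn/fs). The discrete-time sketch below is a minimal illustration of that relation; the function name, LO frequency, and sample rate are assumptions for illustration only.

```python
import math

def upconvert(i_baseband, q_baseband, lo_freq_hz, sample_rate_hz):
    """Mix I and Q baseband samples with quadrature LO signals:
    s[n] = I[n]*cos(2*pi*f*n/fs) - Q[n]*sin(2*pi*f*n/fs)."""
    out = []
    for n, (i_s, q_s) in enumerate(zip(i_baseband, q_baseband)):
        phase = 2.0 * math.pi * lo_freq_hz * n / sample_rate_hz
        out.append(i_s * math.cos(phase) - q_s * math.sin(phase))
    return out

# A constant (DC) I baseband with zero Q produces a pure tone at the LO frequency.
rf = upconvert([1.0] * 8, [0.0] * 8, lo_freq_hz=1.0, sample_rate_hz=8.0)
```

Using both I and Q branches lets a single mixing stage place independent in-phase and quadrature content on the carrier, which is why the direct-conversion transmitter needs only one upconversion step.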
The transmit RF signal is routed through a duplexer or switch 730 and transmitted via an antenna 732.[0051] In the receive path, the antenna 732 receives signals transmitted by base stations and provides a received RF signal, which is routed through the duplexer or switch 730 and provided to a low noise amplifier (LNA) 734. The duplexer or switch 730 is designed to operate with a specific RX-to-TX duplexer frequency separation, such that RX signals are isolated from TX signals. The received RF signal is amplified by the LNA 734 and filtered by a filter 736 to obtain a desired RF input signal. Downconversion mixers 738(1), 738(2) mix the output of the filter 736 with I and Q receive (RX) LO signals (i.e., LO_I and LO_Q) from an RX LO signal generator 740 to generate I and Q baseband signals. The I and Q baseband signals are amplified by amplifiers (AMP) 742(1), 742(2) and further filtered by lowpass filters 744(1), 744(2) to obtain I and Q analog input signals, which are provided to the data processor 706. In this example, the data processor 706 includes analog-to-digital converters (ADCs) 746(1), 746(2) for converting the analog input signals into digital signals to be further processed by the data processor 706.[0052] In the wireless communications device 700 in Figure 7, the TX LO signal generator 722 generates the I and Q TX LO signals used for frequency upconversion, while the RX LO signal generator 740 generates the I and Q RX LO signals used for frequency downconversion. Each LO signal is a periodic signal with a particular fundamental frequency. A transmit (TX) phase-locked loop (PLL) circuit 748 receives timing information from the data processor 706 and generates a control signal used to adjust the frequency and/or phase of the TX LO signals from the TX LO signal generator 722.
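Likewise, the downconversion mixers 738(1), 738(2) multiply the received signal by quadrature LO signals, and the lowpass filters 744(1), 744(2) remove the double-frequency mixing products. Below is a minimal discrete-time sketch of that operation, with all names and values chosen for illustration; a simple one-period average stands in for the lowpass filters.

```python
import math

def downconvert(rf, lo_freq_hz, sample_rate_hz):
    """Mix an RF sample stream with quadrature LO signals to produce raw
    I and Q products (still containing 2*f terms before lowpass filtering)."""
    i_out, q_out = [], []
    for n, s in enumerate(rf):
        phase = 2.0 * math.pi * lo_freq_hz * n / sample_rate_hz
        i_out.append(2.0 * s * math.cos(phase))
        q_out.append(-2.0 * s * math.sin(phase))
    return i_out, q_out

def lowpass_mean(samples):
    """Crude lowpass stand-in: average over one LO period."""
    return sum(samples) / len(samples)

# Downconvert a cosine at the LO frequency: I recovers ~1.0 and Q ~0.0.
fs, f = 8.0, 1.0
rf = [math.cos(2.0 * math.pi * f * n / fs) for n in range(8)]
i_raw, q_raw = downconvert(rf, f, fs)
i_baseband, q_baseband = lowpass_mean(i_raw), lowpass_mean(q_raw)
```

The averaging step removes the cos(2θ) and sin(2θ) products of the mixing, leaving the baseband I and Q components that the ADCs 746(1), 746(2) would then digitize.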
Similarly, a receive (RX) phase-locked loop (PLL) circuit 750 receives timing information from the data processor 706 and generates a control signal used to adjust the frequency and/or phase of the RX LO signals from the RX LO signal generator 740.[0053] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer readable medium and executed by a processor or other processing device, or combinations of both. The master and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0054] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. 
A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0055] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0056] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined and/or performed concurrently or simultaneously. 
It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0057] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Techniques and devices for managing power consumption of a memory system using loopback are described. When a memory system is in a first state (e.g., a deactivated state), a host device may send a signal to change one or more components of the memory system to a second state (e.g., an activated state). The signal may be received by one or more memory devices, which may activate one or more components based on the signal. The one or more memory devices may send a second signal to a power management component, such as a power management integrated circuit (PMIC), using one or more techniques. The second signal may be received by the PMIC using a conductive path running between the memory devices and the PMIC. Based on receiving the second signal or some third signal that is based on the second signal, the PMIC may enter an activated state.
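The sequence summarized above (host signal, memory-device activation, loopback signal to the PMIC, PMIC activation) can be modeled as a toy state machine. The classes and method names below are hypothetical stand-ins used only to illustrate the ordering of events, not any actual device interface.

```python
class Pmic:
    def __init__(self):
        self.state = "deactivated"

    def receive_loopback(self, signal):
        # The PMIC activates its components upon a signal arriving on the
        # conductive path coupled with the memory device's loopback pin.
        if signal == "wake":
            self.state = "activated"

class MemoryDevice:
    def __init__(self, pmic):
        self.state = "deactivated"
        self.pmic = pmic  # stands in for the conductive path to the PMIC

    def receive_host_signal(self, signal):
        if signal == "activate":
            self.state = "activated"            # activate own components first
            self.pmic.receive_loopback("wake")  # then signal the PMIC via loopback

pmic = Pmic()
device = MemoryDevice(pmic)
device.receive_host_signal("activate")
print(device.state, pmic.state)  # activated activated
```

The key ordering is that the PMIC, being deactivated, never sees the host signal directly; it is woken only by the second signal the memory device drives onto the loopback path.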
1. A method, comprising: receiving, at a memory device of a memory system, a first signal from a host device that activates one or more components of the memory device; activating the one or more components of the memory device based at least in part on receiving the first signal from the host device; and sending, to a power management integrated circuit (PMIC) via a conductive path coupled with a loopback pin of the memory device, a second signal that activates one or more components of the PMIC based at least in part on activating the one or more components of the memory device.

2. The method of claim 1, further comprising: inducing a third signal on a second conductive path coupled with the PMIC based at least in part on sending the second signal using the conductive path, the third signal activating the one or more components of the PMIC.

3. The method of claim 2, further comprising: toggling the second signal sent on the conductive path between different voltage levels, wherein inducing the third signal on the second conductive path is based at least in part on toggling the second signal.

4. The method of claim 1, further comprising: sending a third signal from the memory device to a gate of a transistor via a second conductive path, the transistor selectively coupling the memory device with the PMIC based at least in part on the third signal.

5. The method of claim 4, wherein the second conductive path couples a second loopback pin of the memory device and the gate of the transistor.

6. The method of claim 4, further comprising: coupling, using the transistor, a first portion of the conductive path with a second portion of the conductive path based at least in part on sending the third signal to the transistor.

7. The method of claim 1, further comprising: receiving, from the host device, a third signal that deactivates the one or more components of the memory device, wherein receiving the first signal is based at least in part on receiving the third signal.

8. The method of claim 7, further comprising: sending, to the PMIC, a fourth signal that deactivates the one or more components of the PMIC based at least in part on receiving the third signal.

9. The method of claim 1, wherein sending the second signal to the PMIC occurs while the PMIC is in a deactivated state.

10. A method, comprising: receiving, at a power management integrated circuit (PMIC) while one or more components of the PMIC are in a deactivated state, a signal from a memory device of a memory system via a conductive path coupled with a loopback pin of the memory device; and activating the one or more components of the PMIC based at least in part on receiving the signal from the memory device via the conductive path.

11. The method of claim 10, further comprising: receiving, from a host device via a sideband channel, a second signal that deactivates the one or more components of the PMIC, wherein receiving the signal from the memory device is based at least in part on receiving the second signal from the host device.

12. The method of claim 10, further comprising: receiving, from the memory device, a second signal that deactivates the one or more components of the PMIC, wherein receiving the signal is based at least in part on receiving the second signal.

13. The method of claim 12, further comprising: entering, by the PMIC, the deactivated state based at least in part on the memory device entering a deactivated state, wherein receiving the signal is based at least in part on the PMIC being in the deactivated state.

14. The method of claim 10, further comprising: receiving, from the memory device, a second signal that deactivates the one or more components of the PMIC; receiving, from a second memory device, a third signal that deactivates the one or more components of the PMIC; and deactivating the one or more components of the PMIC based at least in part on receiving the second signal from the memory device and receiving the third signal from the second memory device.

15. The method of claim 10, wherein: the conductive path is inductively coupled with a second conductive path that is directly coupled with the loopback pin of the memory device; and the signal is induced by a second signal sent on the second conductive path.

16. The method of claim 10, wherein the signal is received by an inter-integrated circuit (I2C) bus of the PMIC.

17. A memory system, comprising: a memory device comprising memory cells configured to store data; a power management integrated circuit (PMIC) configured to perform power control functions for the memory system and to selectively transition between a deactivated state and an activated state; and a conductive path coupled with a loopback pin of the memory device and with the PMIC, the memory device configured to cause the PMIC to transition from the deactivated state to the activated state by sending a signal to the PMIC via the conductive path.

18. The memory system of claim 17, further comprising: a second conductive path coupled with the PMIC and inductively coupled with the conductive path, the memory device configured to induce a second signal on the second conductive path by sending the signal over the conductive path, the second signal configured to cause the PMIC to transition from the deactivated state to the activated state.

19. The memory system of claim 17, further comprising: a transistor positioned on the conductive path between the memory device and the PMIC and configured to selectively couple the memory device with the PMIC, the transistor comprising a gate coupled with a second loopback pin of the memory device via a second conductive path, wherein the memory device is configured to send, using the second conductive path, a gate signal that activates the transistor.

20. The memory system of claim 17, further comprising: a first group of memory devices coupled with a first channel, the first group of memory devices including the memory device; and a second group of memory devices coupled with a second channel, wherein the PMIC is configured to enter the deactivated state.

21. The memory system of claim 20, wherein a memory device of the first group of memory devices is configured to send a first sleep signal to the PMIC via a second conductive path, and a second memory device of the second group of memory devices is configured to send a second sleep signal to the PMIC via the second conductive path, wherein the PMIC is configured to enter the deactivated state based at least in part on receiving the first sleep signal and the second sleep signal using the second conductive path.

22. The memory system of claim 21, wherein the second conductive path includes the conductive path.

23. The memory system of claim 20, wherein a memory device of the first group of memory devices is configured to send a first sleep signal to the PMIC via the conductive path, and a second memory device of the second group of memory devices is configured to send a second sleep signal to the PMIC via a second conductive path, wherein the PMIC is configured to enter the deactivated state based at least in part on receiving the first sleep signal using the conductive path and receiving the second sleep signal using the second conductive path.

24. The memory system of claim 17, wherein the conductive path is coupled with a serial clock pin of the PMIC.

25. The memory system of claim 17, wherein the conductive path is coupled with an inter-integrated circuit (I2C) bus of the PMIC.

26. The memory system of claim 17, further comprising: a sideband channel that couples the PMIC and an edge connector, the sideband channel configured to carry a second signal between the PMIC and a host device, wherein the PMIC is configured to enter the deactivated state based at least in part on receiving the second signal from the host device via the sideband channel.

27. The memory system of claim 17, further comprising: a hub coupled with the PMIC and an edge connector, the hub configured to interface between a host device and one or more components of the memory system including the PMIC, wherein the PMIC is configured to enter the deactivated state based at least in part on receiving a second signal from the host device using the hub.

28. The memory system of claim 17, wherein the loopback pin of the memory device is configured to provide feedback information during a test procedure of the memory device.

29. The memory system of claim 17, further comprising: an edge connector configured to selectively couple the memory device with a host device, wherein a pin of the PMIC is coupleable with the edge connector.

30. A memory device, comprising: an array of memory cells; a loopback pin coupled with a conductive path; and a controller operable to: receive, from a host device, a first signal that activates one or more components of the memory device; activate the one or more components of the memory device based at least in part on receiving the first signal from the host device; and send a second signal via the conductive path based at least in part on activating the one or more components of the memory device, the second signal activating one or more components of a power management integrated circuit (PMIC).

31. The memory device of claim 30, further comprising: a second loopback pin coupled with a second conductive path, wherein the controller is further operable to: send, using the second loopback pin, a third signal to a gate of a transistor via the second conductive path, the transistor selectively coupling the memory device with the PMIC based at least in part on the third signal.

32. A power management integrated circuit (PMIC), comprising: an inter-integrated circuit (I2C) bus coupled with a conductive path; and a controller operable to: receive, while one or more components of the PMIC are in a deactivated state, a signal from a memory device of a memory system via the conductive path, wherein the conductive path is coupled with a loopback pin of the memory device; and activate the one or more components of the PMIC based at least in part on receiving the signal from the memory device via the conductive path.

33. The PMIC of claim 32, further comprising: an interface coupled with a sideband channel, the sideband channel coupled with a host device, wherein the controller is further operable to: receive, from the host device via the sideband channel, a second signal that deactivates the one or more components of the PMIC.
Techniques for Power Management Using Loopback

Cross Reference

The present application claims priority to U.S. Patent Application No. 16/290,126 by Kinsley et al., entitled "Techniques for Power Management Using Loopback," filed March 1, 2019, and to U.S. Provisional Patent Application No. 62/697,882 by Kinsley et al., entitled "Techniques for Power Management Using Loopback," filed July 13, 2018, each of which is assigned to the assignee hereof and is expressly incorporated by reference herein in its entirety.

Background

The following relates generally to systems that include at least one memory device and, more specifically, to techniques for power management using loopback.

Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, and digital displays. Information is stored by programming different states of a memory device. For example, binary devices most often store one of two states, often denoted by a logic 1 or a logic 0. In other devices, more than two states may be stored. To access the stored information, a component of the device may read, or sense, the stored state of at least one of the memory devices. To store information, a component of the device may write, or program, the state in the memory device.

Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others. Memory devices may be volatile or non-volatile. Non-volatile memory, e.g., FeRAM, may maintain a stored logic state for extended periods of time even in the absence of an external power source.
Volatile memory devices (e.g., DRAM) may lose their stored state over time unless they are periodically refreshed by an external power source.

In some memory systems, a power management integrated circuit (PMIC) may be used to manage the application of power to the memory devices. The PMIC may be configured to operate in at least one activated state and at least one deactivated state. Techniques for transitioning between these states may be desired.

Brief Description of the Drawings

Figure 1 illustrates an example of a system that supports techniques for power management using loopback as disclosed herein.
Figure 2 illustrates an example of a memory die that supports techniques for power management using loopback as disclosed herein.
Figure 3 illustrates an example of a memory system that supports techniques for power management using loopback as disclosed herein.
Figure 4 illustrates an example of a circuit of a memory system that supports techniques for power management using loopback as disclosed herein.
Figure 5 illustrates an example of a circuit of a memory system that supports techniques for power management using loopback as disclosed herein.
Figure 6 illustrates an example of a flow chart that supports techniques for power management using loopback as disclosed herein.
Figure 7 shows a block diagram of a controller that supports techniques for power management using loopback as disclosed herein.
Figure 8 shows a block diagram of a controller that supports techniques for power management using loopback as disclosed herein.
Figure 9 shows a block diagram of a controller that supports techniques for power management using loopback as disclosed herein.
Figures 10 through 15 show flowcharts illustrating one or more methods that support techniques for power management using loopback as disclosed herein.

Detailed Description

Some memory systems may be configured to operate in a deactivated state (e.g., a hibernation state) to save power.
When such a memory system operates in the deactivated state, components of the memory system may themselves enter a deactivated state. For example, a memory device of the memory system may be in a deactivated state and a power management integrated circuit (PMIC) may be in a deactivated state. In some cases, when the PMIC is in a deactivated state, the PMIC and/or other components of the memory system may be unable to receive certain signals because some components are powered down.

Techniques for managing the power consumption of a memory system using loopback are described herein. When the memory system is in a deactivated state, the host device may send a signal to reactivate one or more components of the memory system. One or more memory devices may receive the signal and may activate one or more components in response to receiving it. The one or more memory devices may use one or more loopback pins to send a second signal to the PMIC. The PMIC may receive the second signal over a conductive path extending between the memory device and the PMIC. Upon receiving the second signal, or a third signal that is based on the second signal, the PMIC may be activated by activating one or more components of the PMIC.

Features of the present disclosure are initially described in the context of the memory systems of FIGS. 1 and 2. Features of the present disclosure are further described in the context of the memory systems, circuits, and flowchart of FIGS. 3-6. These and other features of the present disclosure are further illustrated by, and described with reference to, FIGS. 7 through 15, which include device diagrams, system diagrams, and flowcharts related to techniques for power management using loopback.

Figure 1 illustrates an example of a system 100 that utilizes one or more memory devices in accordance with aspects disclosed herein.
The system 100 may include an external memory controller 105, a memory device 110, and multiple channels 115 that couple the external memory controller 105 and the memory device 110. The system 100 may include one or more memory devices, but for ease of description the one or more memory devices may be described as a single memory device 110.

The system 100 may include aspects of an electronic device, such as a computing device, a mobile computing device, a wireless device, or a graphics processing device. The system 100 may be an example of a portable electronic device. The system 100 may be an example of a computer, a desktop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an Internet-connected device, and so on. The memory device 110 may be a component of the system configured to store data for one or more other components of the system 100. In some instances, the system 100 is configured for two-way wireless communication with other systems or devices using a base station or an access point. In some examples, the system 100 is capable of machine-type communication (MTC), machine-to-machine (M2M) communication, or device-to-device (D2D) communication.

At least portions of the system 100 may be examples of a host device. Such a host device may be an example of a device that uses memory to execute a process, such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a computer, a notebook computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an Internet-connected device, some other stationary or portable electronic device, and so on. In some cases, the host device may refer to the hardware, firmware, software, or any combination thereof that implements the functions of the external memory controller 105. In some cases, the external memory controller 105 may be referred to as a host or a host device.
In some examples, the system 100 is a graphics card.

In some cases, the memory device 110 may be an independent device or component that is configured to be in communication with other components of the system 100 and provide a physical memory address/space that may be used or referenced by the system 100. In some examples, the memory device 110 may be configurable to work with at least one or a plurality of different types of systems 100. Signaling between the components of the system 100 and the memory device 110 may be operable to support a modulation scheme used to modulate the signals, different pin designs for communicating the signals, distinct packaging of the system 100 and the memory device 110, clock signaling and synchronization between the system 100 and the memory device 110, timing conventions, and/or other factors.

The memory device 110 may be configured to store data for the components of the system 100. In some cases, the memory device 110 may act as a slave-type device to the system 100 (e.g., responding to and executing commands provided by the system 100 through the external memory controller 105). Such commands may include an access command for an access operation, such as a write command for a write operation, a read command for a read operation, a refresh command for a refresh operation, or other commands. The memory device 110 may include two or more memory dies 160 (e.g., memory chips) to support a desired or specified capacity for data storage. A memory device 110 including two or more memory dies may be referred to as a multi-die memory or package (also referred to as a multi-chip memory or package).

The system 100 may further include a processor 120, a basic input/output system (BIOS) component 125, one or more peripheral components 130, and an input/output (I/O) controller 135. The components of the system 100 may be in electronic communication with one another using a bus 140.

The processor 120 may be configured to control at least portions of the system 100.
The processor 120 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination of these types of components. In such cases, the processor 120 may be an example of a central processing unit (CPU), a graphics processing unit (GPU), a general-purpose GPU (GPGPU), or a system on a chip (SoC), among other examples.

The BIOS component 125 may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of the system 100. The BIOS component 125 may also manage data flow between the processor 120 and the various components of the system 100, e.g., the peripheral components 130, the I/O controller 135, and so on. The BIOS component 125 may include a program or software stored in read-only memory (ROM), flash memory, or any other non-volatile memory.

The peripheral components 130 may be any input device or output device, or an interface for such devices, that may be integrated into or with the system 100. Examples may include disk controllers, sound controllers, graphics controllers, Ethernet controllers, modems, universal serial bus (USB) controllers, serial or parallel ports, or peripheral card slots such as peripheral component interconnect (PCI) or accelerated graphics port (AGP) slots. The peripheral components 130 may be other components understood by those skilled in the art as peripherals.

The I/O controller 135 may manage data communication between the processor 120 and the peripheral components 130, input devices 145, or output devices 150. The I/O controller 135 may manage peripherals that are not integrated into or with the system 100.
In some cases, the I/O controller 135 may represent a physical connection or port to external peripheral components.

The input 145 may represent a device or signal external to the system 100 that provides information, signals, or data to the system 100 or its components. This may include a user interface, or an interface with or between other devices. In some cases, the input 145 may be a peripheral that interfaces with the system 100 via one or more peripheral components 130, or may be managed by the I/O controller 135.

The output 150 may represent a device or signal external to the system 100 that is configured to receive an output from the system 100 or any of its components. Examples of the output 150 may include a display, audio speakers, a printing device, another processor on a printed circuit board, and so on. In some cases, the output 150 may be a peripheral that interfaces with the system 100 via one or more peripheral components 130, or may be managed by the I/O controller 135.

The components of the system 100 may be made up of general-purpose or special-purpose circuitry designed to carry out their functions. This may include various circuit elements configured to carry out the functions described herein, such as conductive lines, transistors, capacitors, inductors, resistors, amplifiers, or other active or passive elements.

The memory device 110 may include a device memory controller 155 and one or more memory dies 160. Each memory die 160 may include a local memory controller 165 (e.g., local memory controller 165-a, local memory controller 165-b, and/or local memory controller 165-N) and a memory array 170 (e.g., memory array 170-a, memory array 170-b, and/or memory array 170-N). A memory array 170 may be a collection (e.g., a grid) of memory cells, with each memory cell being configured to store at least one bit of digital data. Features of the memory arrays 170 and/or memory cells are described in more detail with reference to FIG. 2.

The memory device 110 may be an example of a two-dimensional (2D) array of memory cells or may be an example of a three-dimensional (3D) array of memory cells. For example, a 2D memory device may include a single memory die 160. A 3D memory device may include two or more memory dies 160 (e.g., memory die 160-a, memory die 160-b, and/or any number of memory dies 160-N). In a 3D memory device, multiple memory dies 160-N may be stacked on top of one another. In some cases, the memory dies 160-N in a 3D memory device may be referred to as decks, levels, layers, or dies. A 3D memory device may include any quantity of stacked memory dies 160-N (e.g., two high, three high, four high, five high, six high, seven high, or eight high stacked memory dies). Compared with a single 2D memory device, this may increase the number of memory cells that may be positioned on a substrate, which in turn may reduce production costs or increase the performance of the memory array, or both. In some 3D memory devices, different decks may share at least one common access line, such that some decks may share at least one of a word line, a digit line, and/or a plate line.

The device memory controller 155 may include circuits or components configured to control operation of the memory device 110. As such, the device memory controller 155 may include the hardware, firmware, and software that enable the memory device 110 to perform commands, and may be configured to receive, transmit, or execute commands, data, or control information related to the memory device 110. The device memory controller 155 may be configured to communicate with the external memory controller 105, the one or more memory dies 160, or the processor 120. In some cases, the memory device 110 may receive data and/or commands from the external memory controller 105.
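The command handling just described (write, read, refresh) can be illustrated with a minimal dispatch sketch. The dict-backed die, the command names, and the function signature are illustrative assumptions for exposition, not the controller's actual implementation.

```python
def execute(die, command, address=None, data=None):
    """Toy dispatch of the access commands a memory device may receive."""
    if command == "write":
        die[address] = data   # store data on behalf of a host component
        return None
    if command == "read":
        return die[address]   # provide stored data back to the host
    if command == "refresh":
        return None           # restore cell charge (modeled here as a no-op)
    raise ValueError(f"unknown command: {command}")


die = {}
execute(die, "write", address=0x10, data=0b1)
assert execute(die, "read", address=0x10) == 0b1
```

The sketch only shows the request/response shape; in hardware each branch would involve the decoders, sense amplifiers, and timing described with reference to FIG. 2.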
For example, the memory device 110 may receive a write command indicating that the memory device 110 is to store certain data on behalf of a component of the system 100 (e.g., the processor 120), or a read command indicating that the memory device 110 is to provide certain data stored in a memory die 160 to a component of the system 100 (e.g., the processor 120). In some cases, the device memory controller 155 may control the operation of the memory device 110 described herein in conjunction with the local memory controllers 165 of the memory dies 160. Examples of components that may be included in the device memory controller 155 and/or the local memory controllers 165 may include receivers for demodulating signals received from the external memory controller 105, drivers for modulating and transmitting signals to the external memory controller 105, logic, decoders, amplifiers, filters, and the like.

The local memory controller 165 (e.g., local to a memory die 160) may be configured to control the operation of the memory die 160. Also, the local memory controller 165 may be configured to communicate (e.g., receive and transmit data and/or commands) with the device memory controller 155. The local memory controller 165 may support the device memory controller 155 in controlling the operation of the memory device 110 as described herein. In some cases, the memory device 110 does not include the device memory controller 155, and the local memory controller 165 or the external memory controller 105 may perform the various functions described herein.
As such, the local memory controller 165 may be configured to communicate with the device memory controller 155, with other local memory controllers 165, or directly with the external memory controller 105 or the processor 120.

The external memory controller 105 may be configured to enable communication of information, data, and/or commands between components of the system 100 (e.g., the processor 120) and the memory device 110. The external memory controller 105 may act as a liaison between the components of the system 100 and the memory device 110 so that the components of the system 100 do not need to know the details of the memory device's operation. The components of the system 100 may present to the external memory controller 105 requests (e.g., read commands or write commands) that the external memory controller 105 satisfies. The external memory controller 105 may convert or translate communications exchanged between the components of the system 100 and the memory device 110. In some cases, the external memory controller 105 may include a system clock that generates a common (source) system clock signal. In some cases, the external memory controller 105 may include a common data clock that generates a common (source) data clock signal.

In some cases, the external memory controller 105, or other components of the system 100, or the functions described herein, may be implemented by the processor 120. For example, the external memory controller 105 may be hardware, firmware, or software implemented by the processor 120 or other components of the system 100, or some combination thereof. Although the external memory controller 105 is depicted as being external to the memory device 110, in some cases the external memory controller 105, or the functions described herein, may be implemented by the memory device 110.
For example, the external memory controller 105 may be hardware, firmware, or software implemented by the device memory controller 155 or one or more local memory controllers 165, or some combination thereof. In some cases, the external memory controller 105 may be distributed across the processor 120 and the memory device 110 such that portions of the external memory controller 105 are implemented by the processor 120 and other portions are implemented by the device memory controller 155 or a local memory controller 165. Likewise, in some cases, one or more functions attributed herein to the device memory controller 155 or a local memory controller 165 may be performed by the external memory controller 105 (either separate from or as included in the processor 120).

The components of the system 100 may exchange information with the memory device 110 using the multiple channels 115. In some examples, the channels 115 may enable communication between the external memory controller 105 and the memory device 110. Each channel 115 may include one or more signal paths or transmission media (e.g., conductors) between terminals associated with the components of the system 100. For example, a channel 115 may include a first terminal including one or more pins or pads at the external memory controller 105 and one or more pins or pads at the memory device 110. A pin may be an example of a conductive input or output point of a device of the system 100, and a pin may be configured to act as part of a channel.

In some cases, the pins or pads of a terminal may be part of the signal path of a channel 115. Additional signal paths may be coupled with a terminal of a channel for routing signals within a component of the system 100.
For example, the memory device 110 may include signal paths (e.g., signal paths internal to the memory device 110 or its components, such as internal to a memory die 160) that route a signal from a terminal of a channel 115 to the various components of the memory device 110 (e.g., the device memory controller 155, the memory dies 160, the local memory controllers 165, the memory arrays 170).

Channels 115 (and associated signal paths and terminals) may be dedicated to communicating specific types of information. In some cases, a channel 115 may be an aggregated channel and thus may include multiple individual channels. For example, a data channel 190 may be x4 (e.g., including four signal paths), x8 (e.g., including eight signal paths), x16 (e.g., including sixteen signal paths), and so on.

In some cases, the channels 115 may include one or more command and address (CA) channels 186. The CA channels 186 may be configured to communicate commands between the external memory controller 105 and the memory device 110, including control information associated with the commands (e.g., address information). For example, a CA channel 186 may communicate a read command with an address of the desired data. In some cases, the CA channels 186 may be registered on a rising clock signal edge and/or a falling clock signal edge. In some cases, a CA channel 186 may include eight or nine signal paths.

In some cases, the channels 115 may include one or more clock signal (CK) channels 188. The CK channels 188 may be configured to communicate one or more common clock signals between the external memory controller 105 and the memory device 110. Each clock signal may be configured to oscillate between a high state and a low state and coordinate the actions of the external memory controller 105 and the memory device 110. In some cases, the clock signal may be a differential output (e.g., a CK_t signal and a CK_c signal), and the signal paths of the CK channels 188 may be configured accordingly.
In some cases, the clock signal may be single-ended. A CK channel 188 may include any number of signal paths. In some cases, the clock signal CK (e.g., a CK_t signal and a CK_c signal) may provide a timing reference for command and addressing operations of the memory device 110, or for other system-wide operations of the memory device 110. The clock signal CK may therefore variously be referred to as a control clock signal CK, a command clock signal CK, or a system clock signal CK. The system clock signal CK may be generated by a system clock, which may include one or more hardware components (e.g., oscillators, crystals, logic gates, transistors, etc.).

In some cases, the channels 115 may include one or more data (DQ) channels 190. The data channels 190 may be configured to communicate data and/or control information between the external memory controller 105 and the memory device 110. For example, the data channels 190 may communicate information to be written to the memory device 110 (e.g., bi-directionally) or information read from the memory device 110. The data channels 190 may communicate signals that are modulated using a variety of different modulation schemes (e.g., NRZ, PAM4).

In some cases, the channels 115 may include one or more other channels 192 that may be dedicated to other purposes. These other channels 192 may include any number of signal paths.

In some cases, the other channels 192 may include one or more write clock signal (WCK) channels. Although the 'W' in WCK may nominally stand for "write," a write clock signal WCK (e.g., a WCK_t signal and a WCK_c signal) may provide a timing reference used generally for access operations of the memory device 110 (e.g., a timing reference for both read and write operations). The write clock signal WCK may therefore also be referred to as a data clock signal WCK. The WCK channels may be configured to communicate a common data clock signal between the external memory controller 105 and the memory device 110.
The data clock signal may be configured to coordinate access operations (e.g., write operations or read operations) of the external memory controller 105 and the memory device 110. In some cases, the write clock signal may be a differential output (e.g., a WCK_t signal and a WCK_c signal), and the signal paths of the WCK channels may be configured accordingly. The WCK channels may include any number of signal paths. The data clock signal WCK may be generated by a data clock, which may include one or more hardware components (e.g., oscillators, crystals, logic gates, transistors, etc.).

In some cases, the other channels 192 may include one or more error detection code (EDC) channels. The EDC channels may be configured to communicate error detection signals, such as checksums, to improve system reliability. An EDC channel may include any number of signal paths.

The channels 115 may couple the external memory controller 105 and the memory device 110 using a variety of different architectures. Examples of the various architectures may include a bus, a point-to-point connection, a crossbar, a high-density interposer such as a silicon interposer, or channels formed in an organic substrate, or some combination thereof. For example, in some cases the signal paths may at least partially include a high-density interposer, such as a silicon interposer or a glass interposer.

Signals communicated over the channels 115 may be modulated using a variety of different modulation schemes. In some cases, a binary-symbol (or binary-level) modulation scheme may be used to modulate signals communicated between the external memory controller 105 and the memory device 110. A binary-symbol modulation scheme may be an example of an M-ary modulation scheme, where M is equal to two. Each symbol of a binary-symbol modulation scheme may be configured to represent one bit of digital data (e.g., a symbol may represent a logic 1 or a logic 0).
Examples of binary-symbol modulation schemes include, but are not limited to, non-return-to-zero (NRZ), unipolar encoding, bipolar encoding, Manchester encoding, pulse amplitude modulation (PAM) having two symbols (e.g., PAM2), and so on.

In some cases, a multi-symbol (or multi-level) modulation scheme may be used to modulate signals communicated between the external memory controller 105 and the memory device 110. A multi-symbol modulation scheme may be an example of an M-ary modulation scheme, where M is greater than or equal to three. Each symbol of a multi-symbol modulation scheme may be configured to represent more than one bit of digital data (e.g., a symbol may represent a logic 00, a logic 01, a logic 10, or a logic 11). Examples of multi-symbol modulation schemes include, but are not limited to, PAM4, PAM8, quadrature amplitude modulation (QAM), quadrature phase shift keying (QPSK), and the like. A multi-symbol signal or a PAM4 signal may be a signal modulated using a modulation scheme that includes at least three levels to encode more than one bit of information. Multi-symbol modulation schemes and symbols may alternatively be referred to as non-binary, multi-bit, or higher-order modulation schemes and symbols.

In some cases, the memory device 110 may be configured to use one or more loopback pins of the memory device 110 to send an activation signal to a PMIC. A loopback pin may be coupled with the PMIC using a conductive path. In some cases, the conductive path may be gated by a transistor. In some cases, the conductive path may be inductively coupled with a second conductive path.

FIG. 2 illustrates an example of a memory die 200 in accordance with various examples of the present disclosure. The memory die 200 may be an example of the memory dies 160 described with reference to FIG. 1. In some cases, the memory die 200 may be referred to as a memory chip, a memory device, or an electronic memory apparatus.
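The contrast between binary-symbol and multi-symbol modulation described above can be illustrated with a small encoder: with PAM4, each symbol carries two bits, so the same data needs half as many symbols as NRZ. The amplitude levels chosen here are hypothetical placeholders, not values from any signaling standard.

```python
NRZ_LEVELS = {0: 0.0, 1: 1.0}  # binary: one bit per symbol (M = 2)
PAM4_LEVELS = {0b00: 0.0, 0b01: 1 / 3, 0b10: 2 / 3, 0b11: 1.0}  # two bits per symbol (M = 4)


def pam4_encode(bits):
    """Map pairs of bits onto one of four illustrative amplitude levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[(bits[i] << 1) | bits[i + 1]] for i in range(0, len(bits), 2)]


# Four bits become two PAM4 symbols, versus four NRZ symbols.
assert pam4_encode([1, 0, 1, 1]) == [2 / 3, 1.0]
assert [NRZ_LEVELS[b] for b in [1, 0, 1, 1]] == [1.0, 0.0, 1.0, 1.0]
```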
The memory die 200 may include one or more memory cells 205 that are programmable to store different logic states. Each memory cell 205 may be programmable to store two or more states. For example, a memory cell 205 may be configured to store one bit of digital logic at a time (e.g., a logic 0 and a logic 1). In some cases, a single memory cell 205 (e.g., a multi-level memory cell) may be configured to store more than one bit of digital logic at a time (e.g., a logic 00, a logic 01, a logic 10, or a logic 11).

A memory cell 205 may store a charge representative of the programmable states in a capacitor. DRAM architectures may include a capacitor that includes a dielectric material to store a charge representative of the programmable state. In other memory architectures, other storage devices and components are possible. For example, nonlinear dielectric materials may be employed.

Operations such as reading and writing may be performed on the memory cells 205 by activating or selecting access lines such as a word line 210 and/or a digit line 215. In some cases, digit lines 215 may also be referred to as bit lines. References to access lines, word lines, digit lines, and the like are interchangeable without loss of understanding or operation. Activating or selecting a word line 210 or a digit line 215 may include applying a voltage to the respective line.

The memory die 200 may include access lines (e.g., the word lines 210 and the digit lines 215) arranged in a grid-like pattern. A memory cell 205 may be positioned at the intersection of a word line 210 and a digit line 215. By biasing the word line 210 and the digit line 215 (e.g., applying a voltage to the word line 210 or the digit line 215), a single memory cell 205 may be accessed at their intersection.

Accessing the memory cells 205 may be controlled through a row decoder 220 or a column decoder 225.
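The decoder-controlled access just described can be modeled as a toy lookup: a row address selects one word line, a column address selects one digit line, and the cell at their intersection is accessed. The array sizes and line labels below are illustrative.

```python
M, N = 4, 8  # illustrative sizes: word lines WL_1..WL_M, digit lines DL_1..DL_N


def access_cell(row_address, column_address):
    """Return the (word line, digit line) pair whose intersection holds the cell."""
    assert 1 <= row_address <= M and 1 <= column_address <= N
    word_line = f"WL_{row_address}"      # row decoder activates one word line
    digit_line = f"DL_{column_address}"  # column decoder activates one digit line
    return (word_line, digit_line)


assert access_cell(1, 3) == ("WL_1", "DL_3")  # the cell at the WL_1/DL_3 intersection
```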
For example, the row decoder 220 may receive a row address from the local memory controller 260 and activate a word line 210 based on the received row address. The column decoder 225 may receive a column address from the local memory controller 260 and may activate a digit line 215 based on the received column address. For example, the memory die 200 may include multiple word lines 210, labeled WL_1 through WL_M, and multiple digit lines 215, labeled DL_1 through DL_N, where M and N depend on the size of the memory array. Thus, by activating a word line 210 and a digit line 215, e.g., WL_1 and DL_3, the memory cell 205 at their intersection may be accessed. The intersection of a word line 210 and a digit line 215, in either a two-dimensional or three-dimensional configuration, may be referred to as the address of a memory cell 205.

A memory cell 205 may include a logic storage component, such as a capacitor 230, and a switching component 235. The capacitor 230 may be an example of a dielectric capacitor or a ferroelectric capacitor. A first node of the capacitor 230 may be coupled with the switching component 235, and a second node of the capacitor 230 may be coupled with a voltage source 240. In some cases, the voltage source 240 may be a cell plate reference voltage, such as Vpl, or may be ground, such as Vss. In some cases, the voltage source 240 may be an example of a plate line coupled with a plate line driver. The switching component 235 may be an example of a transistor or any other type of switch device that selectively establishes or de-establishes electronic communication between two components.

Selecting or deselecting a memory cell 205 may be accomplished by activating or deactivating the switching component 235. The capacitor 230 may be in electronic communication with the digit line 215 using the switching component 235.
For example, the capacitor 230 may be isolated from the digit line 215 when the switching component 235 is deactivated, and the capacitor 230 may be coupled with the digit line 215 when the switching component 235 is activated. In some cases, the switching component 235 is a transistor, and its operation may be controlled by applying a voltage to the transistor gate, where the voltage differential between the transistor gate and the transistor source may be greater or less than the threshold voltage of the transistor. In some cases, the switching component 235 may be a p-type transistor or an n-type transistor. The word line 210 may be in electronic communication with the gate of the switching component 235, and the switching component 235 may be activated/deactivated based on the voltage applied to the word line 210.

The word line 210 may be a conductive line, in electronic communication with a memory cell 205, that is used to perform access operations on the memory cell 205. In some architectures, the word line 210 may be in electronic communication with the gate of the switching component 235 of a memory cell 205 and may be configured to control the switching component 235 of the memory cell. In some architectures, the word line 210 may be in electronic communication with a node of the capacitor of the memory cell 205, and the memory cell 205 may not include a switching component.

The digit line 215 may be a conductive line that connects the memory cell 205 with the sensing component 245. In some architectures, the memory cell 205 may be selectively coupled with the digit line 215 during portions of an access operation. For example, the word line 210 and the switching component 235 of the memory cell 205 may be configured to couple and/or isolate the capacitor 230 of the memory cell 205 from the digit line 215.
In some architectures, the memory cell 205 may be in electronic communication (e.g., constant communication) with the digit line 215.

The sensing component 245 may be configured to detect the state (e.g., a charge) stored on the capacitor 230 of the memory cell 205 and determine the logic state of the memory cell 205 based on the stored state. The charge stored by a memory cell 205 may, in some cases, be extremely small. As such, the sensing component 245 may include one or more sense amplifiers to amplify the signal output by the memory cell 205. The sense amplifiers may detect small changes in the charge of a digit line 215 during a read operation and may produce signals corresponding to a logic state 0 or a logic state 1 based on the detected charge. During a read operation, the capacitor 230 of the memory cell 205 may output a signal (e.g., discharge a charge) to its corresponding digit line 215. The signal may change the voltage of the digit line 215. The sensing component 245 may be configured to compare the signal received from the memory cell 205 across the digit line 215 with a reference signal 250 (e.g., a reference voltage). The sensing component 245 may determine the stored state of the memory cell 205 based on the comparison. For example, in binary signaling, if the digit line 215 has a higher voltage than the reference signal 250, the sensing component 245 may determine that the stored state of the memory cell 205 is a logic 1, and if the digit line 215 has a lower voltage than the reference signal 250, the sensing component 245 may determine that the stored state of the memory cell 205 is a logic 0. The sensing component 245 may include various transistors or amplifiers to detect and amplify a difference in the signals. The detected logic state of the memory cell 205 may be output through the column decoder 225 as an output 255. In some cases, the sensing component 245 may be part of another component (e.g., the column decoder 225, the row decoder 220).
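The sensing comparison described above reduces to a threshold test against the reference signal, sketched below. The voltage values are arbitrary illustrative numbers, not device parameters from the disclosure.

```python
V_REF = 0.5  # stand-in for the reference signal 250 (arbitrary illustrative voltage)


def sense(digit_line_voltage, reference_voltage=V_REF):
    """Compare the digit-line voltage with the reference: higher reads as logic 1."""
    return 1 if digit_line_voltage > reference_voltage else 0


assert sense(0.8) == 1  # stored charge pulled the digit line above the reference
assert sense(0.2) == 0  # digit line stayed below the reference
```

In a real sense amplifier, the comparison and the amplification happen together; the sketch separates out only the decision rule.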
In some cases, the sensing component 245 may be in electronic communication with the row decoder 220 or the column decoder 225.

The local memory controller 260 may control the operation of the memory cells 205 through the various components (e.g., the row decoder 220, the column decoder 225, and the sensing component 245). The local memory controller 260 may be an example of the local memory controller 165 described with reference to FIG. 1. In some cases, one or more of the row decoder 220, the column decoder 225, and the sensing component 245 may be co-located with the local memory controller 260. The local memory controller 260 may be configured to receive commands and/or data from the external memory controller 105 (or the device memory controller 155 described with reference to FIG. 1), translate the commands and/or data into information usable by the memory die 200, perform one or more operations on the memory die 200, and communicate data from the memory die 200 to the external memory controller 105 (or the device memory controller 155) in response to performing the one or more operations. The local memory controller 260 may generate row and column address signals to activate the target word line 210 and the target digit line 215. The local memory controller 260 may also generate and control various voltages or currents used during the operation of the memory die 200. In general, the amplitude, shape, or duration of an applied voltage or current discussed herein may be adjusted or varied and may be different for the various operations discussed in operating the memory die 200.

In some cases, the memory die 200 may be configured to use one or more loopback pins of the memory die 200 to send an activation signal to a PMIC. A loopback pin may be coupled with the PMIC using a conductive path. In some cases, the conductive path may be gated by a transistor.
In some cases, the conductive path may be inductively coupled with a second conductive path. In some cases, the local memory controller 260 may be configured to perform write operations (e.g., programming operations) on one or more memory cells 205 of the memory die 200. During a write operation, a memory cell 205 of the memory die 200 can be programmed to store a desired logic state. In some cases, multiple memory cells 205 can be programmed during a single write operation. The local memory controller 260 may identify the target memory cell 205 on which the write operation is to be performed. The local memory controller 260 can identify the target word line 210 and the target digital line 215 that are in electronic communication with the target memory cell 205 (e.g., the address of the target memory cell 205). The local memory controller 260 may activate the target word line 210 and the target digital line 215 (for example, apply a voltage to the word line 210 or the digital line 215) to access the target memory cell 205. The local memory controller 260 may apply a specific signal (e.g., voltage) to the digital line 215 during the write operation to store a specific state (e.g., charge) in the capacitor 230 of the memory cell 205, which can indicate the desired logic state. In some cases, the local memory controller 260 may be configured to perform read operations (e.g., sensing operations) on one or more memory cells 205 of the memory die 200. During a read operation, the logic state stored in a memory cell 205 of the memory die 200 can be determined. In some cases, multiple memory cells 205 may be sensed during a single read operation. The local memory controller 260 can identify the target memory cell 205 on which the read operation is to be performed.
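The write and read sequences performed by the local memory controller 260 reduce to addressing a cell (word line plus digital line) and then storing or sensing a state. A behavioral sketch, with hypothetical class and method names:

```python
class LocalMemoryController:
    """Behavioral model only: a word line index and a digital line index
    together address one memory cell."""

    def __init__(self):
        self.cells = {}  # (word_line, digital_line) -> stored logic state

    def write(self, word_line: int, digital_line: int, logic_state: int):
        # Activate the target word line and digital line (address the cell),
        # then apply a signal that stores the desired state in the capacitor.
        self.cells[(word_line, digital_line)] = logic_state

    def read(self, word_line: int, digital_line: int) -> int:
        # Activate the target lines, let the cell drive the digital line,
        # and sense the stored logic state.
        return self.cells[(word_line, digital_line)]

ctrl = LocalMemoryController()
ctrl.write(word_line=3, digital_line=7, logic_state=1)
print(ctrl.read(3, 7))  # 1
```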
The local memory controller 260 can identify the target word line 210 and the target digital line 215 that are in electronic communication with the target memory cell 205 (e.g., the address of the target memory cell 205). The local memory controller 260 may activate the target word line 210 and the target digital line 215 (for example, apply a voltage to the word line 210 or the digital line 215) to access the target memory cell 205. The target memory cell 205 may transmit a signal to the sensing component 245 in response to biasing the access lines. The sensing component 245 can amplify the signal. The local memory controller 260 can activate the sensing component 245 (e.g., latch the sensing component) and thereby compare the signal received from the memory cell 205 with the reference signal 250. Based on the comparison, the sensing component 245 can determine the logic state stored on the memory cell 205. As part of the read operation, the local memory controller 260 may transfer the logic state stored on the memory cell 205 to the external memory controller 105 (or device memory controller 155). FIG. 3 illustrates an example of a memory system 300 that supports power management using loopback. The memory system 300 may include a power management integrated circuit (PMIC) 305, a first memory device group 310, and a second memory device group 315. The memory system 300 may also include an edge connector 320 and a hub 325. In some computing devices, memory can be packaged into a memory component or module, such as a single in-line memory module (SIMM), a dual in-line memory module (DIMM), or a small-outline dual in-line memory module (SO-DIMM). The memory system 300 may be an example of one of these memory components or modules. The memory system 300 may include one or more memory devices (e.g., memory device 160) arranged in a variety of configurations (e.g., a number of different memory device groups 310, 315).
In some examples, the memory system 300 can be configured as a package that can be integrated into a larger device using one or more ports or connectors. The PMIC 305 of the memory system 300 may be used to manage the power constraints of the memory devices and/or various components of the memory device groups 310 and 315 of the memory system 300. The PMIC 305 may perform one or more of the following functions: current conversion, power supply selection, voltage scaling, power sequencing, or deactivated-state power control, or any combination thereof. In some cases, the PMIC 305 may enter a deactivated state, in which one or more components of the PMIC 305 are deactivated so that the memory system 300 or a larger host device can save power. The memory system 300 may include memory devices arranged in different configurations. For example, the memory system 300 may include a first memory device group 310 and a second memory device group 315. A memory device group may include one or more memory devices, where each memory device group may communicate with the host device using a data channel, which may be an independent data channel. In some examples, the memory system 300 may include memory devices organized into a single group, or the memory system 300 may include two or more groups of memory devices (e.g., three groups, four groups, five groups). The memory system 300 may include an edge connector 320 for interfacing with a host device. The edge connector 320 may include multiple pins for exchanging messages between the memory module and the host device. The edge connector 320 may be configured based on a single data rate (SDR) interface, a double data rate (DDR) interface (for example, DDR1, DDR2, DDR3, DDR4, DDR5), or a graphics double data rate (GDDR) interface (for example, GDDR1, GDDR2, GDDR3, GDDR4, GDDR5, GDDR6, GDDR6x, GDDR7/next). The hub 325 may be configured to route one or more messages within the memory system 300.
In the memory system 300, not every individual component of the memory system 300 may have a dedicated connection (for example, a dedicated pin) to the host device via the edge connector 320. The hub 325 may be configured to receive one or more messages for a number of different components from the host device (via the edge connector 320) and then route those messages to the appropriate components. This may allow the host device to control multiple components without increasing the number of pins in the edge connector 320. In some cases, the hub 325 may be configured to route messages between components of the memory system internally (e.g., without interfacing with a host device). Some host devices can operate in different states to save power. For example, a mobile device (such as a smartphone, a tablet computer, or a portable computer) can enter a sleep state, a low-power state, or a deactivated state to save power. Power savings in these devices can be desirable because the devices can be battery powered. As part of entering the deactivated state, the memory system 300 may also enter the deactivated state, in which one or more components of the memory system 300 may be deactivated or may be in a low-power state. In some cases, when entering the deactivated state, the PMIC 305 may deactivate one or more components of the memory devices and/or one or more components of the PMIC 305. Upon exiting the deactivated state (for example, after receiving a message from the host device), these components can be configured to be activated or reactivated. In some cases, deactivating some components of the PMIC 305 can disrupt communications between certain components of the memory system 300. For example, if the PMIC 305 deactivates certain voltage rails or voltage sources, some components may not be able to communicate with the hub 325, the edge connector 320, other components, or any combination thereof. One such example may be the PMIC 305 itself.
Thus, a technique for activating (or reactivating) the PMIC 305 while it is in the deactivated state may be desirable. A technique for managing the power consumption of the memory system 300 using loopback is described herein. When the memory system 300 is in a deactivated state, the host device may send a signal to activate or reactivate one or more components of the memory system 300. One or more memory device groups 310, 315 can receive the signal, and the one or more memory device groups 310, 315 can activate one or more components in response to receiving the signal. The one or more memory device groups 310, 315 can use one or more loopback pins to send a second signal to the PMIC 305, which can be used to activate or reactivate one or more components of the PMIC 305. In some examples, the PMIC 305 may receive the second signal using one or more direct connections between the memory device groups 310, 315 and the PMIC 305. In some examples, the PMIC 305 may receive the second signal using one or more inductive connections between the memory device groups 310, 315 and the PMIC 305. After receiving the second signal, or a third signal based on the second signal, the PMIC 305 can be activated by activating one or more components of the PMIC 305. FIG. 4 illustrates an example of a circuit 400 of a memory system supporting a technique for power management using loopback. The circuit 400 may be an example of one or more components of the memory system 300 described with reference to FIG. 3. The circuit 400 may include a PMIC 405, a memory device 410, and a conductive path 415 between the PMIC 405 and the memory device 410. In the example of the circuit 400, the conductive path 415 may be an example of a conductive line that directly couples the PMIC 405 with the memory device 410. The PMIC 405 may be an example of the PMIC 305 described with reference to FIG. 3.
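Assuming a direct conductive path as in circuit 400, the message sequence above (host wakes the memory device, which then drives its loopback pin to wake the PMIC) could be modeled as follows. All class, method, and message names are hypothetical sketch choices, not terms from the disclosure.

```python
class Pmic:
    def __init__(self):
        self.active = True

    def receive_loopback(self, signal: str):
        # An activation signal arriving on the loopback conductive path
        # reactivates the deactivated components of the PMIC.
        if signal == "ACTIVATE":
            self.active = True

    def deactivate(self):
        self.active = False


class MemoryDevice:
    def __init__(self, pmic: Pmic):
        self.pmic = pmic
        self.active = True

    def receive_from_host(self, message: str):
        if message == "DEACTIVATE":
            self.active = False
            self.pmic.deactivate()
        elif message == "ACTIVATE":
            self.active = True
            # Drive the loopback pin: the PMIC may be unable to receive
            # ordinary traffic while deactivated, so this path wakes it.
            self.pmic.receive_loopback("ACTIVATE")


pmic = Pmic()
device = MemoryDevice(pmic)
device.receive_from_host("DEACTIVATE")  # both enter the deactivated state
device.receive_from_host("ACTIVATE")    # loopback signal reactivates the PMIC
print(pmic.active)  # True
```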
The memory device 410 may be an example of the memory device 160 or the memory device groups 310 and 315 described with reference to FIGS. 1 to 3. The PMIC 405 may include a supply interface 420, an inter-integrated circuit bus 425, logic 430, low-dropout regulators 435, 440, power supplies 445, 450, and, in some cases, a multi-time programmable memory 455. The supply interface 420 may be configured to receive power to operate the PMIC 405 and to distribute it through the PMIC 405 to other components of the memory system. The inter-integrated circuit bus 425 may be an example of a bus configured to couple the PMIC 405 with other components. In some cases, the inter-integrated circuit bus 425 may include pins configured to receive a serial clock from another component. The logic 430 may include an analog-to-digital converter, a digital-to-analog converter, an oscillator, or other components, or a combination thereof. In some instances, the logic 430 may be used to provide feedback to other components in the memory system. The low-dropout regulators 435, 440 may be used to output power (e.g., DC power) to the memory devices of the memory system, including the memory device 410. In some cases, the low-dropout regulators 435, 440 can be used to regulate the output voltage when the output voltage is close to the supply voltage input to the PMIC 405. The power supplies 445, 450 may be used to output power to the memory devices of the memory system, including the memory device 410. In some cases, the power supplies 445, 450 may be examples of switching regulators.
The PMIC 405 may include any number of low-dropout regulators (e.g., one, two, three, four, five, six, seven, eight), any number of power supplies (e.g., one, two, three, four, five, six, seven, eight), or any number of both low-dropout regulators and power supplies. The multi-time programmable memory 455 may optionally be included in the PMIC 405 and may be any type of memory used by the PMIC 405 to perform the functions described herein. In some cases, the multi-time programmable memory 455 may be an example of an electrically erasable programmable read-only memory (EEPROM) or another type of memory technology. The multi-time programmable memory 455 can be used to protect the circuit, improve the reliability of the power-on sequence or the power-off sequence, set the output voltage, set the output pull-down resistance, or perform other functions, or any combination thereof. The memory device 410 may include at least one loopback pin 460. The conductive path 415 can couple the loopback pin 460 of the memory device 410 with the PMIC 405. In some cases, the conductive path 415 may be coupled to the serial clock pin of the inter-integrated circuit bus 425 of the PMIC 405. The conductive path 415 may include any set of one or more lines that establish a communication link between the memory device 410 and the PMIC 405. The conductive path 415 can directly couple the memory device 410 and the PMIC 405, meaning that the conductive path 415 can establish a connection between the two components that allows direct routing of signals between them. Some memory devices may include loopback pins for use during testing, manufacturing, and/or operation of the memory device. For example, during the test phase of a memory device, multiple read commands or write commands, or both, may be applied to the memory device. The loopback pins can be used to transmit feedback data directly to the test bench. This type of direct feedback loop can increase the test speed.
In some applications, after the memory device has been tested, the loopback pins may not be used to communicate with the host device. In some cases, edge connectors built to specifications such as the DDR specification may not use loopback pins. In such cases, the loopback pins of a memory device may go unused or unconnected to other components. Provided herein is a technique for using at least one loopback pin of a memory device to drive a signal to the PMIC 405 for activating at least a part of the PMIC 405. The signal may be an example of an activation signal. The memory device 410 may use the loopback pin to transmit the signal because, when the PMIC 405 is in a first state (e.g., a deactivated state), the PMIC 405 may not be able to receive certain types of communication. In some examples, the conductive path 415 may be a direct communication path between the memory device 410 and the PMIC 405. In such instances, the signal emitted by the loopback pin 460 may be carried by one or more conductive wires, through one or more devices (e.g., transistors or other components), to the PMIC 405. In other examples, the conductive path 415 may be a gated conductive path including a transistor 465 controlled by a loopback pin of the memory device 410 (e.g., the second loopback pin 470). In such examples, the transistor 465 may be positioned along the conductive path 415 between the memory device 410 and the PMIC 405. A second conductive path 475 can couple the second loopback pin 470 with the gate of the transistor 465. The memory device 410 may be configured to send a signal (for example, an activation signal) to the PMIC 405 based on transmitting the signal using the first loopback pin 460 and activating the transistor 465 using the second loopback pin 470. In some cases, the memory device 410 may use any pin of the memory device 410 to transmit an activation signal.
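The gated variant can be sketched as a signal that reaches the PMIC only while the gate of transistor 465 is driven by the second loopback pin 470. The function below is a toy model with illustrative names:

```python
def gated_path(activation_signal, gate_signal):
    """Toy model of transistor 465 on conductive path 415: the activation
    signal driven by loopback pin 460 propagates to the PMIC only while
    the second loopback pin 470 drives the transistor's gate; otherwise
    nothing is delivered."""
    return activation_signal if gate_signal else None

print(gated_path(1, gate_signal=0))  # gate off: nothing reaches the PMIC
print(gated_path(1, gate_signal=1))  # gate on: the signal is delivered
```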
In such cases, the loopback pin 460 may be an example of a pin that the memory device 410 may use in some instances and should not be considered limiting. In some cases, the memory device 410 may use any pin of the memory device to transmit the gate signal to the transistor 465. In such cases, the second loopback pin 470 is an example of a pin that the memory device 410 may use in some instances and should not be considered limiting. FIG. 5 illustrates an example of a circuit 500 of a memory system supporting a technique for power management using loopback. The circuit 500 may be an example of one or more components of the memory system 300 described with reference to FIG. 3. The circuit 500 may include a PMIC 505, a memory device 510, and a conductive path 515 between the PMIC 505 and the memory device 510. In the example of the circuit 500, the PMIC 505 and the memory device 510 may be coupled using a first conductive path 515 that is inductively coupled with a second conductive path 580. The PMIC 505 may be an example of the PMICs 305, 405 described with reference to FIGS. 3 and 4. The memory device 510 may be an example of the memory devices 160, 410 or the memory device groups 310, 315 described with reference to FIGS. 1 to 4. The PMIC 505 may include a supply interface 520, an inter-integrated circuit bus 525, logic 530, low-dropout regulators 535, 540, power supplies 545, 550, and, in some cases, a multi-time programmable memory 555. The supply interface 520 may be configured to receive power to operate the PMIC 505 and to distribute it through the PMIC 505 to other components of the memory system. The inter-integrated circuit bus 525 may be an example of a bus configured to couple the PMIC 505 with other components. In some cases, the inter-integrated circuit bus 525 may include pins configured to receive information (e.g., a serial clock) from another component.
The logic 530 may include an analog-to-digital converter, a digital-to-analog converter, an oscillator, or other components, or a combination thereof. In some examples, the logic 530 may be used to provide feedback to other components in the memory system. The low-dropout regulators 535, 540 may be used to output power (e.g., DC power) to the memory devices of the memory system, including the memory device 510. In some cases, the low-dropout regulators 535, 540 can be used to regulate the output voltage when the output voltage is close to the supply voltage input to the PMIC 505. The power supplies 545, 550 may be used to output power to the memory devices of the memory system, including the memory device 510. In some cases, the power supplies 545, 550 may be examples of switching regulators. The PMIC 505 can contain any number of low-dropout regulators (e.g., one, two, three, four, five, six, seven, eight), any number of power supplies (e.g., one, two, three, four, five, six, seven, eight), or any number of both low-dropout regulators and power supplies. The multi-time programmable memory 555 may optionally be included in the PMIC 505 and may be any type of memory used by the PMIC 505 to perform the functions described herein. In some cases, the multi-time programmable memory 555 may be an example of an electrically erasable programmable read-only memory (EEPROM) or another type of memory technology. The multi-time programmable memory 555 can be used to protect the circuit, improve the reliability of the power-on sequence or power-off sequence, set the output voltage, set the output pull-down resistance, or perform other functions. The memory device 510 may include at least one loopback pin 560. The first conductive path 515 can be directly coupled with the loopback pin 560 of the memory device 510. The second conductive path 580 may be directly coupled to a pin of the PMIC 505 (for example, the serial clock pin of the inter-integrated circuit bus 525 of the PMIC 505).
The first conductive path 515 may be inductively coupled with the second conductive path 580. To establish the inductive coupling, the first conductive path 515 may be routed to extend parallel to the second conductive path 580 for a length of the conductive path. To send the activation signal to the PMIC 505, the memory device 510 may send a signal on the first conductive path 515, which may induce a signal on the second conductive path 580 that the PMIC 505 may receive. In some cases, the first conductive path 515 may be coupled with the clock pin of the memory device 510. In some cases, the conductive path 515 may be coupled with the clock pin of the edge connector. In some examples, the first conductive path 515 and the second conductive path 580 may form a conductive path between the memory device 510 and the PMIC 505. The first conductive path 515 may be inductively coupled with the second conductive path 580 so that a signal transmitted on the first conductive path 515 can induce a signal on the second conductive path 580, and vice versa. In such instances, a first signal transmitted by the loopback pin 560 on the first conductive path 515 may induce a second signal received by the PMIC 505 on the second conductive path 580. In other examples, the conductive path 515 may be a gated conductive path including a transistor 565 controlled by a second loopback pin 570 of the memory device 510. In such an example, the transistor 565 may be positioned along the first conductive path 515 between a first portion of the first conductive path 515 that is coupled to the memory device 510 and a second portion of the first conductive path 515 that may be inductively coupled to the second conductive path 580. A third conductive path 575 may couple the second loopback pin 570 with the gate of the transistor 565.
The memory device 510 may be configured to, based on transmitting a signal using the first loopback pin 560 and activating the transistor 565 using the second loopback pin 570, transmit the activation signal on the first conductive path 515 and induce a second signal on the second conductive path 580. In some cases, the memory device 510 may use any pin of the memory device 510 to transmit an activation signal. In such cases, the loopback pin 560 may be an example of a pin that can be used by the memory device 510 and should not be considered limiting. In some cases, the memory device 510 may use any one or more pins of the memory device to transmit the gate signal to the transistor 565. In such cases, the second loopback pin 570 may be an example of a pin that can be used by the memory device 510 and should not be considered limiting. FIG. 6 illustrates an example of a flowchart 600 that supports a technique for power management using loopback. The flowchart 600 illustrates the techniques that a host device 605, a memory device 610, or a PMIC 615, or any combination thereof, may use to exchange deactivation signals and activation signals. In some cases, the memory device 610 may be configured to use at least one loopback pin of the memory device 610 to send an activation signal to the PMIC 615. The memory device 610 may be an example of the memory devices 160, 410, 510 or the memory device groups 310, 315 described with reference to FIGS. 1 to 5. The PMIC 615 may be an example of the PMICs 305, 405, and 505 described with reference to FIGS.
3 to 5. The flowchart 600 is broken into two sections for illustrative and descriptive purposes: a first section, which describes the procedures, operations, and messages used to transmit a deactivation signal and deactivate one or more components (for example, 620-635); and a second section, which describes the procedures, operations, and messages used to transmit activation signals and activate one or more components (for example, 650-670). The host device 605 may determine that one or more memory devices or one or more memory device groups of the memory system will enter a different state, such as a deactivated state (e.g., an S3 state). The host device 605 may transmit the deactivation signal 620 to the memory device 610. The deactivation signal 620 may instruct the memory device 610 to transition from a first state to a second state (e.g., transition from an activated state to a deactivated state). In some examples, the deactivation signal 620 may pass through the edge connector of the memory device 610. At block 625, in response to receiving the deactivation signal 620, the memory device 610 may enter the deactivated state. Entering the deactivated state may include deactivating one or more components of the memory device 610. The memory device 610 may transmit the deactivation signal 630 to the PMIC 615 to cause the PMIC 615 to transition from the activated state to the deactivated state. The deactivation signal 630 may be similar to or different from the deactivation signal 620. Deactivating at least part of the memory device 610 and the PMIC 615 may enable the memory system to save power. At block 635, the PMIC 615 may enter the deactivated state based on receiving at least one deactivation signal. There may be a number of different conditions under which the PMIC 615 enters the deactivated state. In some cases, the PMIC 615 may enter the deactivated state based on receiving the deactivation signal 630 from the memory device 610.
In such cases, the memory system may include a single group of memory devices coupled with the host device 605 using a single data channel. In some cases, the PMIC 615 may enter the deactivated state based on receiving a deactivation signal 620-a from the host device 605. In such cases, a sideband channel can couple the PMIC 615 to the host device 605. The deactivation signal 620-a may be similar to the deactivation signal 620, except that the deactivation signal 620-a may be received by the PMIC 615 instead of the memory device 610. In such cases, the memory device 610 may optionally not send the deactivation signal 630. In some cases, the PMIC 615 may enter the deactivated state based on receiving the deactivation signal 630 from each memory device group in the memory system. When the memory system includes multiple memory device groups, the PMIC 615 can be configured to manage the power operations of at least some, if not all, of the memory device groups. Even when one memory device group enters the deactivated state, the PMIC 615 can still manage the operations of a second memory device group that is still in the activated state. In such cases, the PMIC 615 may enter the deactivated state based on receiving a signal indicating that all memory device groups served by the PMIC 615 are also in the deactivated state or transitioning to the deactivated state. In some examples, each memory device group may be configured to send one or more individual deactivation signals 630 to the PMIC 615. In some examples, the memory device groups can transmit deactivation signals 630 to each other, and the PMIC 615 can receive a single deactivation signal indicating that all groups may be in or may enter the deactivated state. In such instances, a first loopback pin of a first memory device of the first group may be coupled with a second loopback pin of a second memory device of the second group.
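The condition that the PMIC 615 deactivates only after every memory device group it serves has reported can be sketched as a small bookkeeping routine. The class and identifiers below are hypothetical illustrations:

```python
class PmicDeactivation:
    """Tracks deactivation signals 630 from each served group; the PMIC
    enters the deactivated state only once all groups have reported."""

    def __init__(self, group_ids):
        self.pending = set(group_ids)  # groups still in the activated state
        self.deactivated = False

    def receive_deactivation(self, group_id):
        self.pending.discard(group_id)
        if not self.pending:
            self.deactivated = True

pmic = PmicDeactivation(["group_310", "group_315"])
pmic.receive_deactivation("group_310")
print(pmic.deactivated)  # False: still managing the active group
pmic.receive_deactivation("group_315")
print(pmic.deactivated)  # True: all groups reported, so deactivate
```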
The first loopback pin can transmit a deactivation signal between the first group and the second group. The second memory device of the second group may also include a third loopback pin coupled with the PMIC 615. The third loopback pin may transmit the deactivation signal 630 to the PMIC 615 to indicate that the first memory device group and the second memory device group may be in or may enter the deactivated state. The PMIC 615 may enter the deactivated state based on receiving that third signal. The host device 605 may determine that one or more memory devices or one or more memory device groups of the memory system will transition from the deactivated state to an activated state. The host device 605 may transmit the activation signal 650 to the memory device 610. The activation signal 650 may indicate that the memory device 610 is to transition from the deactivated state to the activated state. The activation signal 650 may pass through the edge connector of the memory device 610. In some cases, when the PMIC 615 is in the deactivated state, the PMIC 615 and/or other components of the memory system may not be able to receive certain signals. This can be attributed to some components being in a deactivated state (e.g., powered off). Provided herein are techniques for transmitting the activation signal to the PMIC 615 in a way that overcomes such signaling difficulties. For example, the memory device 610 may use a loopback pin to drive a signal sent to the PMIC 615 that is configured to cause the PMIC 615 to transition from a first state to a second state (e.g., transition from the deactivated state to the activated state). The communication path between the memory device 610 and the PMIC 615 may be coupled with the clock pin of the PMIC 615. At block 655, in response to receiving the activation signal 650, the memory device 610 may enter the activated state.
Entering the activated state may include activating one or more components of the memory device 610. The memory device 610 may transmit the activation signal 665 to the PMIC 615 to cause the PMIC 615 to transition from the deactivated state to the activated state. The activation signal 665 may be similar to the activation signal 650. Activating at least part of the memory device 610 and the PMIC 615 can restore full functionality to the memory system. One or more different methods may be used to send the activation signal 665 from the memory device 610 to the PMIC 615. In some cases, as described in more detail with reference to FIG. 4, the memory device 610 may be directly coupled with the PMIC 615. In such cases, the memory device 610 can use the loopback pin (or other pin) to drive a signal (e.g., the activation signal 665) on the conductive path, and the PMIC 615 can use the conductive line to receive the signal (e.g., the activation signal 665). In some cases, as described in more detail with reference to FIG. 4, the memory device 610 may be directly coupled with the PMIC 615 using a gated conductive path. In such cases, the memory device 610 may use the loopback pin (or other pin) to drive a first signal (for example, the activation signal 665) on a first conductive line. The memory device 610 can also use a second loopback pin (or other pin) to drive a second signal on a second conductive line to a transistor located on the first conductive path. The transistor may be configured to selectively couple the memory device 610 with the PMIC 615 based on receiving the second signal. For example, as shown at block 660, upon receiving the second signal, the transistor may be activated and thereby establish a communication link between the memory device and the PMIC 615.
The PMIC 615 may receive the first signal using the conductive line based at least in part on the memory device 610 sending the first signal on the first conductive line and activating the transistor using the second conductive line. In some cases, as described in more detail with reference to FIG. 5, the memory device 610 may be inductively coupled with the PMIC 615 using a first conductive path and a second conductive path. In such cases, the memory device 610 may use the loopback pin (or other pin) to drive a first signal on the first conductive path (for example, the activation signal 665 sent by the memory device). The first signal may induce a second signal (for example, the activation signal 665 received by the PMIC 615) on the second conductive path based on the inductive coupling between the two paths. The PMIC 615 can receive the second signal induced on the second conductive line. In some examples, the memory device 610 may modify the first signal to improve the inductive coupling and thereby improve the strength of the second signal induced on the second conductive path. In some examples, the memory device 610 may toggle the first signal between at least two different voltage levels to modify the first signal and induce the second signal on the second conductive path. In some cases, as described in more detail with reference to FIG. 5, the memory device 610 may be inductively coupled with the PMIC 615 using a first gated conductive path and a second conductive path. In such cases, the memory device 610 may use the loopback pin (or other pin) to drive a first signal on the first conductive line (for example, the activation signal 665 sent by the memory device 610). The memory device 610 can also use a second loopback pin (or other pin) to drive a second signal on a third conductive line to a transistor positioned on the first conductive path.
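Why toggling helps: inductive coupling responds to changes in the driven signal, so a static level induces little on the second path while each transition induces a pulse. A toy model, purely illustrative and not a physical simulation:

```python
def induced_pulses(primary_levels):
    """Count level transitions on the first conductive path; each one is
    modeled as one induced pulse on the inductively coupled second path."""
    return sum(1 for a, b in zip(primary_levels, primary_levels[1:]) if a != b)

steady = [1, 1, 1, 1]           # a constant drive induces nothing
toggled = [0, 1, 0, 1, 0, 1]    # toggling between two voltage levels

print(induced_pulses(steady))   # 0
print(induced_pulses(toggled))  # 5
```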
The transistor may be configured to selectively couple, based on receiving the second signal, the first portion of the first conductive path coupled with the memory device 610 and the second portion of the first conductive path inductively coupled with the second conductive path. For example, as shown at block 660, upon receiving the second signal, the transistor can be activated, which in turn can establish the communication link between the portions of the first conductive path. The first signal sent on the first conductive path may induce a third signal on the second conductive path (e.g., the activation signal 665 received by the PMIC 615). The PMIC 615 may receive the third signal using the second conductive path based at least in part on the memory device 610 sending the first signal on the first conductive line and activating the transistor using the third conductive line. In some examples, the memory device 610 may modify the first signal to improve the inductive coupling and thereby improve the strength of the third signal induced on the second conductive path. In some examples, the memory device 610 may toggle the first signal between at least two different voltage levels to modify the first signal and induce the third signal on the second conductive path. At block 670, the PMIC 615 may enter the activated state based on receiving at least one activation signal 665. To enter the activated state, the PMIC 615 can activate one or more components that are currently deactivated. In some cases, the PMIC 615 may transition from the deactivated state to the activated state based on receiving an activation signal from any one of the memory device groups served by the PMIC 615. FIG. 7 shows a block diagram 700 of a controller 705 of a memory device supporting a technique for power management using loopback according to an aspect of the present disclosure. The controller 705 may be an example of aspects of the controllers 155, 165, 260 described herein.
The controller 705 may include a memory interface manager 710, a state manager 715, a host interface manager 720, and a deactivation manager 725. Each of these modules can communicate directly or indirectly with the others (e.g., via one or more buses).

The memory interface manager 710 may receive, at the PMIC, a signal from a memory device of the memory system via a conductive path coupled with a loopback pin of the memory device when one or more components of the PMIC are in a deactivated state. In some examples, the memory interface manager 710 may receive a second signal from the memory device to deactivate one or more components of the PMIC, where receiving the signal is based on receiving the second signal. In some cases, the conductive path is inductively coupled with a second conductive path that is directly coupled with the loopback pin of the memory device. In some cases, the signal is induced by a second signal sent on the second conductive path. In some cases, the signal is received via an inter-integrated circuit bus of the PMIC.

The state manager 715 may activate one or more components of the PMIC based on receiving the signal from the memory device via the conductive path. In some examples, the state manager 715 may enter, at the PMIC, the deactivated state based on the memory device entering the deactivated state, where receiving the signal is based on the PMIC being in the deactivated state.

The host interface manager 720 may receive, from the host device via a sideband channel, a second signal to deactivate one or more components of the PMIC, where receiving the signal from the memory device is based on receiving the second signal from the host device.

The deactivation manager 725 may receive a second signal from the memory device to deactivate one or more components of the PMIC. In some examples, the deactivation manager 725 may receive a third signal from a second memory device to deactivate one or more components of the PMIC.
In some examples, the deactivation manager 725 may deactivate the one or more components of the PMIC based on receiving the second signal from the memory device and the third signal from the second memory device.

FIG. 8 shows a block diagram 800 of a controller 805 of a PMIC supporting a technique for power management using loopback according to aspects of the present disclosure. The controller 805 may be an example of aspects of the logic 430 or 530 described herein. The controller 805 may include a host interface manager 810, a state manager 815, a PMIC interface manager 820, a toggle manager 825, and a gate manager 830. Each of these modules can communicate directly or indirectly with the others (e.g., via one or more buses).

The host interface manager 810 may receive, at the memory device of the memory system, a first signal from the host device to activate one or more components of the memory device.

The state manager 815 may activate one or more components of the memory device based on receiving the first signal from the host device. In some examples, the state manager 815 may receive a third signal from the host device to deactivate one or more components of the memory device, where receiving the first signal is based on receiving the third signal. In some examples, the state manager 815 may send a fourth signal to the PMIC to deactivate one or more components of the PMIC based on receiving the third signal.

The PMIC interface manager 820 may send, to the PMIC on the conductive path coupled with the loopback pin of the memory device, a second signal for activating one or more components of the PMIC based on activating the one or more components of the memory device. In some examples, the PMIC interface manager 820 may induce a third signal on a second conductive path coupled with the PMIC based on using the conductive path to send the second signal, the third signal being for activating the one or more components of the PMIC.
In some instances, when the PMIC is in the deactivated state, the PMIC interface manager 820 may send the second signal to the PMIC.

The toggle manager 825 can toggle the second signal sent on the conductive path between different voltage levels, where the third signal is induced on the second conductive path based on toggling the second signal.

The gate manager 830 may send the third signal from the memory device to the gate of the transistor via the second conductive path, and the transistor selectively couples the memory device with the PMIC based on the third signal. In some examples, the gate manager 830 may use the transistor to couple the first portion of the conductive path with the second portion of the conductive path based on sending the third signal to the transistor. In some cases, the second conductive path couples the second loopback pin of the memory device and the gate of the transistor.

FIG. 9 shows a block diagram 900 of a controller 905 of a memory system (e.g., a DIMM) supporting a technique for power management using loopback according to aspects of the present disclosure. The controller 905 may be an example of aspects of the controllers 105, 155, 165, 260 and/or the logic 430, 530 described herein. The controller 905 may include a memory device manager 910, a PMIC manager 915, a state manager 920, and a gate manager 925. Each of these modules can communicate directly or indirectly with the others (e.g., via one or more buses).

The memory device manager 910 may send a wake-up signal from the memory device of the memory system to the PMIC of the memory system via a conductive path coupling the loopback pin of the memory device and the PMIC.
In some examples, the memory device manager 910 may send a first signal from the memory device of the memory system to the PMIC of the memory system via a first conductive path. In some examples, the memory device manager 910 may induce a second signal on a second conductive path coupled with the PMIC based on sending the first signal using the first conductive path. In some examples, the memory device manager 910 may modify the level of the wake-up signal sent on the conductive path, where activating the component of the PMIC is based on modifying the level of the wake-up signal.

In some examples, the memory device manager 910 may toggle the first signal sent on the first conductive path between different voltage levels, where inducing the second signal on the second conductive path is based on toggling the first signal. In some cases, the second conductive path is inductively coupled with the first conductive path and the second signal is configured to wake up the PMIC.

The PMIC manager 915 may receive, at the PMIC, the wake-up signal sent on the conductive path. In some examples, the PMIC manager 915 may receive, at the PMIC, a sleep command from the host device using a sideband channel, where entering the deactivated state is based on receiving the sleep command using the sideband channel.

In some examples, the PMIC manager 915 may receive, at the PMIC, a sleep command from the memory device, where entering the deactivated state is based on receiving the sleep command from the memory device. In some examples, the PMIC manager 915 may receive, at the PMIC, a second sleep command from a second memory device of the memory system, where entering the deactivated state is based on receiving the sleep command from the memory device and receiving the second sleep command from the second memory device.
In some examples, the PMIC manager 915 may receive, through the inter-integrated circuit bus of the PMIC, the second signal induced on the second conductive path, where activating the component of the PMIC is based on receiving the second signal. In some examples, the PMIC manager 915 may receive, at the PMIC, a sleep command from the host device on a sideband channel, where entering the deactivated state is based on receiving the sleep command on the sideband channel.

In some examples, the PMIC manager 915 may receive, at the PMIC, a sleep command from the memory device associated with a first channel of the memory system, where entering the deactivated state is based on receiving the sleep command from the memory device. In some examples, the PMIC manager 915 may receive, at the PMIC, a second sleep command from a second memory device associated with a second channel of the memory system, where entering the deactivated state is based on receiving the sleep command from the memory device associated with the first channel and receiving the second sleep command from the second memory device associated with the second channel.

The state manager 920 may activate the components of the PMIC based on receiving the wake-up signal on the conductive path. In some examples, the state manager 920 may activate the components of the PMIC based on inducing the second signal on the second conductive path. In some examples, the state manager 920 may enter, at the PMIC, the deactivated state based on the memory device entering the deactivated state, where sending the wake-up signal from the memory device is based on the PMIC being in the deactivated state.
In some examples, the state manager 920 may enter, at the PMIC, the deactivated state, where sending the first signal is based on the PMIC being in the deactivated state.

The gate manager 925 may send a gate signal from the memory device to the gate of a transistor via a second conductive path, the transistor selectively coupling the memory device with the PMIC based on the gate signal, where receiving the wake-up signal is based on the gate signal. In some examples, the gate manager 925 may send a third signal from the memory device to the gate of the transistor via a third conductive path, the transistor selectively coupling the first portion of the first conductive path with the second portion of the first conductive path based on the third signal, where inducing the second signal is based on sending the third signal. In some cases, the second conductive path couples the second loopback pin of the memory device and the gate of the transistor. In some cases, the first conductive path is coupled with the first loopback pin of the memory device. In some cases, the third conductive path is coupled with the second loopback pin of the memory device and the gate of the transistor.

FIG. 10 shows a flowchart illustrating a method 1000 of supporting a technique for power management using loopback according to aspects of the present disclosure. The operations of the method 1000 may be implemented by the memory device described herein or a component thereof (for example, a controller of the memory device). For example, the operations of the method 1000 may be performed by a memory device as described with reference to FIGS. 1-7. In some examples, the memory device can execute a set of instructions to control the functional elements of the memory device to perform the functions described below.
Additionally or alternatively, the memory device may use dedicated hardware to perform aspects of the functions described below.

At 1005, the memory device may receive, at the memory device of the memory system, a first signal from the host device to activate one or more components of the memory device. In some instances, aspects of the operation of 1005 may be performed by the host interface manager described with reference to FIG. 7.

At 1010, the memory device can activate one or more components of the memory device based on receiving the first signal from the host device. In some instances, aspects of the operation of 1010 may be performed by the state manager described with reference to FIG. 7.

At 1015, the memory device may send, to the PMIC via a conductive path coupled with the loopback pin of the memory device, a second signal for activating one or more components of the PMIC based on activating the one or more components of the memory device. In some examples, aspects of the operation of 1015 may be performed by the PMIC interface manager described with reference to FIG. 7.

In some instances, an apparatus as described herein may perform one or more methods such as the method 1000.
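The three steps of the method 1000 can be sketched in software as a toy model. This is an illustrative sketch only, not the disclosed hardware: the class names, signal values, and the in-process call standing in for the conductive path are all invented for illustration.

```python
# Hypothetical software model of method 1000 (memory-device side of the
# loopback wake-up). A direct method call stands in for the conductive path
# coupled with the loopback pin.

class PmicModel:
    def __init__(self):
        self.active = False  # one or more components start deactivated

    def receive_loopback(self, second_signal):
        # PMIC activates its components upon the loopback wake signal,
        # but only while it is in the deactivated state
        if not self.active and second_signal == "wake":
            self.active = True

class MemoryDeviceModel:
    def __init__(self, pmic):
        self.pmic = pmic     # PMIC model on the conductive path
        self.active = False

    def receive_host_signal(self, first_signal):
        # 1005: receive the first signal from the host device
        if first_signal == "activate":
            # 1010: activate one or more components of the memory device
            self.active = True
            # 1015: drive a second signal on the conductive path coupled
            # with the loopback pin to activate components of the PMIC
            self.pmic.receive_loopback("wake")

pmic = PmicModel()
dev = MemoryDeviceModel(pmic)
dev.receive_host_signal("activate")
print(dev.active, pmic.active)  # True True
```

The ordering mirrors the flowchart: the memory device only drives the loopback path after its own components are activated, so the PMIC wake is conditioned on the host-initiated activation.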
The apparatus may include features, means, or instructions (for example, instructions stored in a non-transitory computer-readable medium executable by a processor) for: receiving, at a memory device of a memory system, a first signal from a host device to activate one or more components of the memory device; activating the one or more components of the memory device based on receiving the first signal from the host device; and sending, to the PMIC via a conductive path coupled with a loopback pin of the memory device, a second signal for activating one or more components of the PMIC based on activating the one or more components of the memory device.

Some examples of the method 1000 and apparatus described herein may additionally include operations, features, means, or instructions for: toggling the second signal sent on the conductive path between different voltage levels, where a third signal induced on a second conductive path may be based on toggling the second signal.

Some examples of the method 1000 and apparatus described herein may additionally include operations, features, means, or instructions for: toggling the first signal sent on the first conductive path between different voltage levels, where inducing the second signal on the second conductive path may be based on toggling the first signal.

Some examples of the method 1000 and apparatus described herein may additionally include operations, features, means, or instructions for: sending a third signal from the memory device to the gate of a transistor via a second conductive path, the transistor selectively coupling the memory device and the PMIC based on the third signal.

In some examples of the method 1000 and apparatus described herein, the second conductive path couples the second loopback pin of the memory device and the gate of the transistor.

Some examples of the method 1000 and apparatus described herein
may additionally include operations, features, means, or instructions for: using the transistor to couple the first portion of the conductive path with the second portion of the conductive path based on sending the third signal to the transistor.

Some examples of the method 1000 and apparatus described herein may additionally include operations, features, means, or instructions for: receiving a third signal from the host device to deactivate one or more components of the memory device, where receiving the first signal may be based on receiving the third signal.

Some examples of the method 1000 and apparatus described herein may additionally include operations, features, means, or instructions for: sending, to the PMIC, a fourth signal to deactivate one or more components of the PMIC based on receiving the third signal.

Some examples of the method 1000 and apparatus described herein may additionally include operations, features, means, or instructions in which sending the second signal to the PMIC occurs when the PMIC may be in a deactivated state.

FIG. 11 shows a flowchart illustrating a method 1100 of supporting a technique for power management using loopback according to aspects of the present disclosure. The operations of the method 1100 may be implemented by the memory device described herein or a component thereof (for example, a controller of the memory device). For example, the operations of the method 1100 may be performed by a memory device as described with reference to FIGS. 1-7. In some examples, the memory device can execute a set of instructions to control the functional elements of the memory device to perform the functions described below.
Additionally or alternatively, the memory device may use dedicated hardware to perform aspects of the functions described below.

At 1105, the memory device may receive, at the memory device of the memory system, a first signal from the host device to activate one or more components of the memory device. The operation of 1105 can be performed according to the methods described herein. In some instances, aspects of the operation of 1105 may be performed by the host interface manager described with reference to FIG. 7.

At 1110, the memory device can activate one or more components of the memory device based on receiving the first signal from the host device. The operation of 1110 can be performed according to the methods described herein. In some instances, aspects of the operation of 1110 may be performed by the state manager described with reference to FIG. 7.

At 1115, the memory device may send, to the PMIC via a conductive path coupled with the loopback pin of the memory device, a second signal for activating one or more components of the PMIC based on activating the one or more components of the memory device. The operation of 1115 can be performed according to the methods described herein. In some examples, aspects of the operation of 1115 may be performed by the PMIC interface manager described with reference to FIG. 7.

At 1120, the memory device may induce a third signal on a second conductive path coupled with the PMIC based on sending the second signal using the conductive path, the third signal being used to activate the one or more components of the PMIC. The operation of 1120 may be performed according to the methods described herein. In some instances, aspects of the operation of 1120 may be performed by the PMIC interface manager described with reference to FIG. 7.

FIG. 12 shows a flowchart illustrating a method 1200 of supporting a technique for power management using loopback according to aspects of the present disclosure.
The operations of the method 1200 may be implemented by the PMIC described herein or its components (e.g., logic). For example, the operations of the method 1200 may be performed by a PMIC as described with reference to FIGS. 3-6 and 8. In some instances, the PMIC can execute a set of instructions to control the functional elements of the PMIC to perform the functions described below. Additionally or alternatively, the PMIC may use dedicated hardware to perform aspects of the functions described below.

At 1205, the PMIC may receive, at the PMIC, a signal from a memory device of the memory system via a conductive path coupled with a loopback pin of the memory device when one or more components of the PMIC are in a deactivated state. In some examples, aspects of the operation of 1205 may be performed by the memory interface manager described with reference to FIG. 8.

At 1210, the PMIC can activate one or more components of the PMIC based on receiving the signal from the memory device via the conductive path. In some instances, aspects of the operation of 1210 may be performed by the state manager described with reference to FIG. 8.

In some instances, an apparatus as described herein may perform one or more methods such as the method 1200.
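The PMIC-side behavior of the method 1200 can be sketched as a toy state machine, including the variant in which the PMIC deactivates only after every memory device it serves has requested deactivation. This is an illustrative sketch under invented names and identifiers, not the disclosed circuit.

```python
# Hypothetical model of the PMIC side of method 1200: any one served device
# can wake the PMIC over its loopback path, but deactivation requires a sleep
# request from every served device. Device identifiers are invented.

class PmicController:
    def __init__(self, device_ids):
        self.active = True
        self.device_ids = set(device_ids)   # memory devices served by the PMIC
        self.sleep_requests = set()

    def receive_sleep(self, device_id):
        # deactivate components only once all served devices have requested it
        self.sleep_requests.add(device_id)
        if self.sleep_requests == self.device_ids:
            self.active = False

    def receive_loopback_wake(self, device_id):
        # 1205/1210: a signal on the loopback-coupled conductive path,
        # received while deactivated, reactivates the PMIC
        if not self.active:
            self.active = True
            self.sleep_requests.discard(device_id)

pmic = PmicController(["dev_a", "dev_b"])
pmic.receive_sleep("dev_a")
print(pmic.active)  # True: the second device has not requested sleep yet
pmic.receive_sleep("dev_b")
print(pmic.active)  # False: all served devices requested sleep
pmic.receive_loopback_wake("dev_a")
print(pmic.active)  # True: a single loopback wake signal reactivates the PMIC
```

The all-devices condition models the per-channel sleep-command aggregation described for the PMIC manager 915, while the single-device wake models the receive-and-activate steps 1205 and 1210.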
The apparatus may include features, means, or instructions (for example, instructions stored on a non-transitory computer-readable medium executable by a processor) for: receiving, at the PMIC, a signal from a memory device of the memory system via a conductive path coupled with a loopback pin of the memory device when one or more components of the PMIC are in a deactivated state; and activating one or more components of the PMIC based on receiving the signal from the memory device via the conductive path.

Some examples of the method 1200 and apparatus described herein may additionally include operations, features, means, or instructions for: receiving, from the host device via a sideband channel, a second signal to deactivate one or more components of the PMIC, where receiving the signal from the memory device may be based on receiving the second signal from the host device.

Some examples of the method 1200 and apparatus described herein may additionally include operations, features, means, or instructions for: receiving a second signal from the memory device to deactivate one or more components of the PMIC, where receiving the signal may be based on receiving the second signal.

Some examples of the method 1200 and apparatus described herein may additionally include operations, features, means, or instructions for: entering, at the PMIC, the deactivated state based on the memory device entering the deactivated state, where receiving the signal may be based on the PMIC being in the deactivated state.

Some examples of the method 1200 and apparatus described herein may additionally include operations, features, means, or instructions for: receiving a second signal from the memory device to deactivate one or more components of the PMIC; receiving a third signal from a second memory device to deactivate one or more components of the PMIC; and, based on receiving the second signal from
the memory device and the third signal from the second memory device, deactivating the one or more components of the PMIC.

In some examples of the method 1200 and apparatus described herein, the conductive path may be inductively coupled with a second conductive path, the second conductive path being directly coupled with the loopback pin of the memory device, and the signal may be induced by a second signal sent on the second conductive path.

In some examples of the method 1200 and apparatus described herein, the signal may be received via an inter-integrated circuit bus of the PMIC.

FIG. 13 shows a flowchart illustrating a method 1300 of supporting a technique for power management using loopback according to aspects of the present disclosure. The operations of the method 1300 may be implemented by the memory system described herein or its components (e.g., a memory device, a PMIC, or a controller or its logic). For example, the operations of the method 1300 may be performed by a memory system, a memory device, a PMIC, or a combination thereof as described with reference to FIGS. 3-9. In some instances, the memory system can execute a set of instructions to control the functional elements of the memory system to perform the functions described below. Additionally or alternatively, the memory system may use dedicated hardware to perform aspects of the functions described below.

At 1305, the memory system may send a wake-up signal from the memory device of the memory system to the PMIC of the memory system via the conductive path coupling the loopback pin of the memory device and the PMIC. In some examples, aspects of the operation of 1305 may be performed by the memory device manager described with reference to FIG. 9.

At 1310, the memory system can receive, at the PMIC, the wake-up signal sent on the conductive path. In some examples, aspects of the operation of 1310 may be performed by the PMIC manager described with reference to FIG.
9.

At 1315, the memory system may activate a component of the PMIC based on receiving the wake-up signal via the conductive path. In some instances, aspects of the operation of 1315 may be performed by the state manager described with reference to FIG. 9.

In some instances, an apparatus as described herein may perform one or more methods such as the method 1300. The apparatus may include features, means, or instructions (for example, instructions stored on a non-transitory computer-readable medium executable by a processor) for: sending a wake-up signal from the memory device of the memory system to the PMIC of the memory system via a conductive path coupling the loopback pin of the memory device and the PMIC; receiving, at the PMIC, the wake-up signal sent on the conductive path; and activating a component of the PMIC based on receiving the wake-up signal via the conductive path.

Some examples of the method 1300 and apparatus described herein may additionally include operations, features, means, or instructions for: modifying the level of the wake-up signal sent on the conductive path, where activating the component of the PMIC may be based on modifying the level of the wake-up signal.

Some examples of the method 1300 and apparatus described herein may additionally include operations, features, means, or instructions for: sending a gate signal from the memory device to the gate of a transistor via a second conductive path, the transistor selectively coupling the memory device with the PMIC based on the gate signal, where receiving the wake-up signal may be based on the gate signal.

In some examples of the method 1300 and apparatus described herein, the second conductive path couples the second loopback pin of the memory device and the gate of the transistor.

Some examples of the method 1300 and apparatus described herein may additionally include operations, features, means, or instructions for: entering the
deactivated state, at the PMIC, based on the memory device entering the deactivated state, where sending the wake-up signal from the memory device may be based on the PMIC being in the deactivated state.

Some examples of the method 1300 and apparatus described herein may additionally include operations, features, means, or instructions for: receiving, at the PMIC, a sleep command from a host device using a sideband channel, where entering the deactivated state may be based on receiving the sleep command using the sideband channel.

Some examples of the method 1300 and apparatus described herein may additionally include operations, features, means, or instructions for: receiving, at the PMIC, a sleep command from the memory device, where entering the deactivated state may be based on receiving the sleep command from the memory device.

Some examples of the method 1300 and apparatus described herein may additionally include operations, features, means, or instructions for: receiving, at the PMIC, a second sleep command from a second memory device of the memory system, where entering the deactivated state may be based on receiving the sleep command from the memory device and receiving the second sleep command from the second memory device.

FIG. 14 shows a flowchart illustrating a method 1400 of supporting a technique for power management using loopback according to aspects of the present disclosure. The operations of the method 1400 may be implemented by the memory system described herein or its components (e.g., a memory device, a PMIC, or a controller or its logic). For example, the operations of the method 1400 may be performed by a memory system, a memory device, a PMIC, or a combination thereof as described with reference to FIGS. 3-9.
In some instances, the memory system can execute a set of instructions to control the functional elements of the memory system to perform the functions described below. Additionally or alternatively, the memory system may use dedicated hardware to perform aspects of the functions described below.

At 1405, the memory system may send a wake-up signal from the memory device of the memory system to the PMIC of the memory system via the conductive path coupling the loopback pin of the memory device and the PMIC. In some examples, aspects of the operation of 1405 may be performed by the memory device manager described with reference to FIG. 9.

At 1410, the memory system may send a gate signal from the memory device to the gate of a transistor via the second conductive path, the transistor selectively coupling the memory device with the PMIC based on the gate signal, where receiving the wake-up signal is based on the gate signal. In some examples, aspects of the operation of 1410 may be performed by the gate manager described with reference to FIG. 9.

At 1415, the memory system can receive, at the PMIC, the wake-up signal sent on the conductive path. In some examples, aspects of the operation of 1415 may be performed by the PMIC manager described with reference to FIG. 9.

At 1420, the memory system may activate a component of the PMIC based on receiving the wake-up signal via the conductive path. In some instances, aspects of the operation of 1420 may be performed by the state manager described with reference to FIG. 9.

FIG. 15 shows a flowchart illustrating a method 1500 of supporting a technique for power management using loopback according to aspects of the present disclosure. The operations of the method 1500 may be implemented by the memory system described herein or its components (e.g., a memory device, a PMIC, or a controller or its logic).
For example, the operations of the method 1500 may be performed by a memory system, a memory device, a PMIC, or a combination thereof as described with reference to FIGS. 3-9. In some instances, the memory system can execute a set of instructions to control the functional elements of the memory system to perform the functions described below. Additionally or alternatively, the memory system may use dedicated hardware to perform aspects of the functions described below.

At 1505, the memory system may send a first signal from the memory device of the memory system to the PMIC of the memory system via a first conductive path. In some examples, aspects of the operation of 1505 may be performed by the memory device manager described with reference to FIG. 9.

At 1510, the memory system may induce a second signal on a second conductive path coupled with the PMIC based on sending the first signal using the first conductive path. In some examples, aspects of the operation of 1510 may be performed by the memory device manager described with reference to FIG. 9.

At 1515, the memory system may activate a component of the PMIC based on inducing the second signal on the second conductive path. In some instances, aspects of the operation of 1515 may be performed by the state manager described with reference to FIG. 9.

In some instances, an apparatus as described herein may perform one or more methods such as the method 1500.
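The role of toggling in the method 1500 can be sketched numerically: inductive coupling responds to changes in current, so a toggled first signal induces a usable second signal while a constant first signal induces nothing. This is a toy model, not circuit analysis; the coupling factor, threshold, and sample values below are all invented for illustration.

```python
# Toy model of method 1500: the induced second signal is approximated as the
# sample-to-sample change of the first signal scaled by an assumed coupling
# factor. Only a changing (toggled) first signal produces a detectable wake.

K = 0.8          # assumed coupling factor between the two conductive paths
THRESHOLD = 0.5  # assumed level at which the PMIC detects a wake event

def induced_signal(first_signal_samples, k=K):
    # difference between consecutive samples, scaled by the coupling factor:
    # a constant first signal yields an all-zero induced signal
    return [k * (b - a) for a, b in zip(first_signal_samples, first_signal_samples[1:])]

toggled = [0.0, 1.2, 0.0, 1.2, 0.0]   # first signal toggled between two voltage levels
steady  = [1.2, 1.2, 1.2, 1.2, 1.2]   # first signal held at one voltage level

wake_from_toggled = any(abs(v) > THRESHOLD for v in induced_signal(toggled))
wake_from_steady  = any(abs(v) > THRESHOLD for v in induced_signal(steady))
print(wake_from_toggled, wake_from_steady)  # True False
```

This is why the disclosure repeatedly ties inducing the second signal to toggling the first signal between at least two different voltage levels: the coupled path carries the signal's transitions, not its DC level.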
The apparatus may include features, means, or instructions (for example, instructions stored on a non-transitory computer-readable medium executable by a processor) for: sending a first signal from the memory device of the memory system to the PMIC of the memory system via a first conductive path; inducing a second signal on a second conductive path coupled with the PMIC based on sending the first signal using the first conductive path; and activating a component of the PMIC based on inducing the second signal on the second conductive path.

Some examples of the method 1500 and apparatus described herein may additionally include operations, features, means, or instructions for: toggling the first signal sent on the first conductive path between different voltage levels, where inducing the second signal on the second conductive path may be based on toggling the first signal.

Some examples of the method 1500 and apparatus described herein may additionally include operations, features, means, or instructions for: sending a third signal from the memory device to the gate of a transistor via a third conductive path, the transistor selectively coupling the first portion of the first conductive path with the second portion of the first conductive path based on the third signal, where inducing the second signal may be based on sending the third signal.

In some examples of the method 1500 and apparatus described herein, the first conductive path may be coupled with the first loopback pin of the memory device and the third conductive path may be coupled with the second loopback pin of the memory device and the gate of the transistor.

Some examples of the method 1500 and apparatus described herein may additionally include operations, features, means, or instructions for: receiving, through the inter-integrated circuit bus of the PMIC, the second signal induced on the second conductive path, where activating the component of the
PMIC may be based on receiving the second signal.Some examples of the method 1500 and apparatus described herein may additionally include operations, features, devices, or instructions for the following operations: entering a deactivated state through the PMIC, where sending the first signal may be based on the PMIC being in the deactivated state.Some examples of the method 1500 and apparatus described herein may additionally include operations, features, devices, or instructions for receiving a sleep command from a host device via a sideband channel through a PMIC, where entering a deactivated state may be based on The band channel received a sleep command.Some examples of the method 1500 and the apparatus described herein may additionally include operations, features, devices, or instructions for the following operations: receiving a sleep command from a memory device associated with the first channel of the memory system through the PMIC, where the entry release The activation state may be based on receiving a hibernation command from the memory device.Some examples of the method 1500 and apparatus described herein may additionally include operations, features, devices, or instructions for: receiving a second hibernation through a PMIC from a second memory device associated with the second channel of the memory system The command, where entering the deactivated state may be based on receiving a sleep command from a memory device associated with the first channel and receiving a second sleep command from a second memory device associated with the second channel.In some examples of the method 1500 and the device described herein, the second conductive path may be inductively coupled with the first conductive path and the second signal may be configured to wake up the PMIC.It should be noted that the methods described herein describe possible implementations, and operations and steps can be rearranged or modified in other ways, and other implementations are 
possible. In addition, two or more aspects from the method can be combined.Any of a variety of different techniques and techniques can be used to represent the information and signals described herein. For example, the data, instructions, commands, information, signals, bits, symbols that may be referred to throughout the above description can be represented by voltage, current, electromagnetic waves, magnetic fields or magnetic particles, light fields or light particles, or any combination thereof. And chips. Some figures may illustrate the signal as a single signal; however, those of ordinary skill in the art will understand that the signal may represent a signal bus, where the bus may have multiple bit widths.As used herein, the term "virtual ground" refers to a circuit node that is maintained at a voltage of approximately zero volts (0V) without being directly coupled to ground. Therefore, the voltage of the virtual ground may temporarily fluctuate and return to approximately 0V in a steady state. Various electronic circuit elements such as a voltage divider composed of operational amplifiers and resistors can be used to implement virtual grounding. Other embodiments are also possible. "Virtual ground" or "virtual ground to ground" means to connect to about 0V.The terms "electronic communication", "conductive contact", "connection" and "coupling" may refer to the relationship between components that support the flow of signals between components. If there are any conductive paths between the components that can support the flow of signals between the components at any time, the components are considered to be in electronic communication with each other (or in conductive contact with each other, or connected to each other, or coupled with each other). 
At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with each other, or connected with each other, or coupled with each other) may be open or closed, based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components, or the conductive path between connected components may be an indirect conductive path that includes intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.

The term "coupling" refers to the condition of moving from an open-circuit relationship between components, in which signals are not presently capable of being communicated between the components over a conductive path, to a closed-circuit relationship between components, in which signals are capable of being communicated between the components over the conductive path. If a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.

The term "isolated" refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other if the switch is open.
If a controller isolates two components, the controller affects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.

The term "layer" used herein refers to a stratum or sheet of a geometrical structure. Each layer may have three dimensions (e.g., height, width, and depth) and may cover at least a portion of a surface. For example, a layer may be a three-dimensional structure in which two dimensions are greater than a third, e.g., a thin film. Layers may include different elements, components, and/or materials. In some cases, one layer may be composed of two or more sublayers. In some of the appended figures, two dimensions of a three-dimensional layer are depicted for purposes of illustration. Those skilled in the art will, however, recognize that the layers are three-dimensional in nature.

The devices discussed herein, including memory arrays, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, and the like. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or of sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.

A switching component or transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, a drain, and a gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals.
The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or a negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be "on" or "activated" if a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be "off" or "deactivated" if a voltage less than the transistor's threshold voltage is applied to the transistor gate.

The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term "exemplary" used herein means "serving as an example, instance, or illustration," and not "preferred" or "advantageous over" other examples. The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.

In the appended figures, similar components or features may have the same reference label.
Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium.
Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as "based on condition A" may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on."

Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.

The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
Methods, systems, and devices for authenticating software images are described. A system may include one or more control units that use software images for managing different functions of the system. The system may also include a secure storage device configured to validate or authenticate the software images used by the different control units of the system. A software image of a control unit may be authenticated by generating a first hash associated with a portion of its underlying source code and generating a second hash associated with a corresponding portion of the source code of the copy of the software image stored to the secure storage device. Different patterns of the source code of the software images may be used to generate the hashes. The first hash and second hash may be compared, and the software image may be authenticated based on the hashes matching.
CLAIMS

What is claimed is:

1. A method, comprising: identifying, by a secure storage device, a portion of a software image associated with an electronic control unit to authenticate using the secure storage device; receiving, by the secure storage device, data associated with the portion of the software image from the electronic control unit based at least in part on identifying the portion to authenticate; calculating, by the secure storage device, a first hash associated with the data received from the electronic control unit; identifying, by the secure storage device, a second hash stored by the secure storage device and associated with the portion of the software image of the electronic control unit based at least in part on calculating the first hash; authenticating, by the secure storage device, the software image associated with the electronic control unit based at least in part on the first hash matching the second hash; and transmitting, to the electronic control unit, an indication of the authentication.

2. The method of claim 1, wherein identifying the second hash comprises: identifying the second hash stored by the secure storage device based at least in part on the identified portion of the software image.

3. The method of claim 2, further comprising: generating a plurality of second hashes associated with a plurality of portions of the software image of the electronic control unit, each second hash of the plurality of second hashes corresponding to one portion of the plurality of portions of the software image; and storing, by the secure storage device, the plurality of second hashes based at least in part on generating the plurality of second hashes, wherein identifying the second hash is based at least in part on storing the plurality of second hashes.

4.
The method of claim 1, wherein identifying the second hash comprises: calculating, by the secure storage device, the second hash using the portion of a second software image associated with the electronic control unit.

5. The method of claim 4, further comprising: receiving the portion of the second software image from the electronic control unit.

6. The method of claim 1, further comprising: comparing the first hash with the second hash based at least in part on identifying the second hash; and determining whether the first hash is different than the second hash based at least on comparing the first hash with the second hash, wherein authenticating the software image is based at least in part on the first hash matching the second hash.

7. The method of claim 6, further comprising: refraining from authenticating the software image based at least in part on determining that the first hash is different than the second hash; and transmitting, to the electronic control unit, a second indication that the software image associated with the electronic control unit is not authenticated based at least in part on refraining from authenticating the software image received from the electronic control unit.

8. The method of claim 7, further comprising: transmitting, to the electronic control unit, a copy of a second software image stored by the secure storage device based at least in part on refraining from authenticating the software image received from the electronic control unit.

9. The method of claim 1, further comprising: initiating a boot sequence of the electronic control unit, wherein identifying the portion of the software image of the electronic control unit to authenticate is based at least in part on initiating the boot sequence.

10.
The method of claim 1, further comprising: receiving, by the secure storage device, diagnostic information associated with the electronic control unit before identifying the portion of the software image; identifying a quantity of portions of the software image associated with the electronic control unit based at least in part on receiving the diagnostic information, wherein each of the portions of the software image is associated with a different pattern identifier; and assigning one or more address ranges of the software image to each of the portions of the software image based at least in part on identifying the quantity of portions of the software image.

11. The method of claim 10, further comprising: assigning the quantity of portions of the software image, the pattern identifier associated with each of the quantity of portions of the software image, and the one or more assigned address ranges to respective entries in a first table stored at the secure storage device based at least in part on assigning the one or more address ranges of the software image to each of the portions of the software image.

12. The method of claim 10, wherein a respective second hash is associated with each of the one or more assigned address ranges, the method further comprising: assigning an integer value to each of the respective second hashes based at least in part on assigning the respective entries in the first table.

13. The method of claim 12, further comprising: assigning the generated second hashes and associated pattern identifiers to respective entries in a second table stored at the secure storage device based at least in part on assigning the integer value to each of the generated second hashes.

14. The method of claim 13, further comprising: generating a random value corresponding to an entry in the second table, wherein identifying the portion of the software image to authenticate is based at least in part on the random value.

15.
The method of claim 13, wherein the second table comprises a plurality of pattern identifiers for portions of a plurality of software images of a plurality of electronic control units and a plurality of hashes associated with each of the portions of the software image.

16. The method of claim 13, further comprising: identifying a flag indicating that a third hash did not match before calculating the first hash; and selecting a first entry in the second table stored at the secure storage device based at least in part on identifying the flag, wherein the entire software image is to be authenticated based on selecting the first entry in the second table.

17. The method of claim 1, further comprising: initiating a boot sequence of one or more of a plurality of additional electronic control units; identifying, by the secure storage device, a portion of one or more software images that are each associated with additional electronic control units configured to boot on a same system to authenticate using the secure storage device; receiving, by the secure storage device, data associated with the portions of the software images from the plurality of additional electronic control units based at least in part on identifying the portions to authenticate; generating, by the secure storage device, a respective third hash associated with the data received from each of the plurality of additional electronic control units; identifying, by the secure storage device, respective second hashes stored by the secure storage device and associated with the respective portions of the software images of the plurality of additional electronic control units based at least in part on generating the respective third hashes; authenticating, by the secure storage device, the software images associated with the plurality of additional electronic control units based at least in part on the third hashes matching the second hashes; and transmitting, to the plurality of additional
electronic control units, an indication of the respective authentication.

18. The method of claim 1, wherein the secure storage device is configured to authenticate software images associated with a plurality of electronic control units within a system.

19. A system, comprising: a plurality of electronic control units for storing data associated with one or more software images; a secure storage device coupled with the plurality of electronic control units and configured to: identify a portion of a software image associated with an electronic control unit to authenticate; receive data associated with the portion of the software image from the electronic control unit based at least in part on identifying the portion to authenticate; generate a first hash associated with the data received from the electronic control unit; identify a second hash associated with the portion of the software image of the electronic control unit based at least in part on generating the first hash; authenticate the software image associated with the electronic control unit based at least in part on the first hash matching the second hash; and transmit an indication of the authentication.

20. The system of claim 19, wherein the secure storage device is configured to: identify the second hash based at least in part on the identified portion of the software image; generate a plurality of second hashes associated with a plurality of portions of the software image of the electronic control unit, each second hash of the plurality of second hashes corresponding to one portion of the plurality of portions of the software image; and store the plurality of second hashes based at least in part on generating the plurality of second hashes, wherein identifying the second hash is based at least in part on storing the plurality of second hashes.

21.
The system of claim 19, wherein the secure storage device is configured to: generate the second hash using the portion of a second software image associated with the electronic control unit; receive the second software image from the electronic control unit; and store a copy of the second software image.

22. The system of claim 19, wherein the secure storage device is configured to: initiate a boot sequence of the electronic control unit, wherein identifying the portion of the software image of the electronic control unit to authenticate is based at least in part on initiating the boot sequence.

23. A non-transitory computer-readable medium storing computer-executable code, the code executable by a processor to: identify a portion of a software image associated with an electronic control unit to authenticate using a secure storage device; receive data associated with the portion of the software image from the electronic control unit based at least in part on identifying the portion to authenticate; generate a first hash associated with the data received from the electronic control unit; identify a second hash stored by the secure storage device and associated with the portion of the software image of the electronic control unit based at least in part on generating the first hash; authenticate the software image associated with the electronic control unit based at least in part on the first hash matching the second hash; and transmit an indication of the authentication.

24. The non-transitory computer-readable medium of claim 23, wherein the code is executable by the processor to: generate a plurality of second hashes associated with a plurality of portions of the software image of the electronic control unit, each second hash of the plurality of second hashes corresponding to one portion of the plurality of portions of the software image; and store the plurality of second hashes based at least in part on generating the plurality of second hashes.

25.
The non-transitory computer-readable medium of claim 23, wherein the code is executable by the processor to: compare the first hash with the second hash based at least in part on identifying the second hash; and determine whether the first hash is different than the second hash based at least on comparing the first hash with the second hash, wherein authenticating the software image is based at least in part on the first hash matching the second hash.
AUTHENTICATING SOFTWARE IMAGES

CROSS REFERENCES

[0001] The present Application for Patent claims priority to U.S. Patent Application No. 17/020,293 by Duval et al., entitled “AUTHENTICATING SOFTWARE IMAGES,” filed September 14, 2020; assigned to the assignee hereof and expressly incorporated by reference herein.

BACKGROUND

[0002] The following relates generally to one or more systems for memory and more specifically to authenticating software images.

[0003] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming memory cells within a memory device to various states. For example, binary memory cells may be programmed to one of two supported states, often denoted by a logic 1 or a logic 0. In some examples, a single memory cell may support more than two states, any one of which may be stored. To access the stored information, a component may read, or sense, at least one stored state in the memory device. To store information, a component may write, or program, the state in the memory device.

[0004] Various types of memory devices and memory cells exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), self-selecting memory, chalcogenide memory technologies, and others. Memory cells may be volatile or non-volatile. Non-volatile memory, e.g., FeRAM, may maintain its stored logic state for extended periods of time even in the absence of an external power source. Volatile memory devices, e.g., DRAM, may lose their stored state when disconnected from an external power source.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG.
1 illustrates an example of a system that supports authenticating software images in accordance with examples as disclosed herein.

[0006] FIG. 2 illustrates an example of a system that supports authenticating software images in accordance with examples as disclosed herein.

[0007] FIG. 3 illustrates an example of a memory device that supports authenticating software images in accordance with examples as disclosed herein.

[0008] FIG. 4 illustrates an example of a process flow diagram that supports authenticating software images in accordance with examples as disclosed herein.

[0009] FIG. 5 shows a block diagram of a secure storage device that supports authenticating software images in accordance with examples as disclosed herein.

[0010] FIG. 6 shows a flowchart illustrating a method or methods that support authenticating software images in accordance with examples as disclosed herein.

DETAILED DESCRIPTION

[0011] Some computing systems may include one or more control units for managing various aspects of the system. For example, an automotive system may include one or more control units associated with various components and operations of a vehicle. A control unit may be an embedded system that controls one or more electrical systems or subsystems of the system. Examples of control units of a vehicle may include a power train control unit, a human-machine interface control unit, a door control unit, different types of engine control units, a seat control unit, a speed control unit, a telematics control unit, a transmission control unit, a brake control unit, a battery management control unit, or others, or a combination thereof. The control units may store respective software images that, when executed, cause the system (and its associated components) to operate in an intended manner. In some systems, different control units may implement different authentication procedures.
Because some authentication procedures may be more secure than others, weaknesses in some control units may allow hackers to gain unpermitted access to the respective control unit. Moreover, relatively unsecure authentication procedures for any one control unit may place other control units and/or the entire computing system at risk of malicious attacks. Accordingly, it may be beneficial to authenticate the software images of control units to prevent malicious actors from altering or gaining control of one or more aspects of a computing system.

[0012] Techniques are described herein for authenticating software images of control units. The control units of a system may be accessed by a computing unit (e.g., a central computing unit) that may include a secure storage device. The secure storage device may be configured to validate or authenticate the software images of the other control units of the system. The secure storage device may include storage (e.g., a portion of its storage) that is inaccessible to an outside device (e.g., a host device). The secure storage device may store copies of the software images stored to each control unit of the system, or hashes of the software images. The stored information may represent the intended state (e.g., an unaltered state) of the software images of the control units and may be used to authenticate the software images of the control units. By using information associated with the intended state (e.g., the unaltered state) of the software images during an authentication process, the secure storage device may be able to determine whether any software image stored to the various control units was altered or subject to a malicious attack.

[0013] To authenticate a software image of a control unit, the secure storage device may read (e.g., measure) a portion of the source code of the software image saved to the control unit.
In some examples, the portion of the source code of the software image may be read during a boot sequence of the associated computing system or a boot sequence associated with the control unit or both. The portion of the source code may be less than the entire source code of the software image, and may be used to generate a first hash (e.g., a first cryptographic hash).[0014] The first hash may be compared with a second hash that is generated using the source code of the corresponding copy of the software image stored to the secure storage device. For example, the secure storage device may read a portion of the source code of the software image stored to the control unit and generate a first hash, and may read a corresponding portion of the source code of the copy of the software image stored to the secure storage device and generate a second hash. The hashes may be compared and, based on the hashes matching, the software image of the control unit may be authenticated. When the software image of the control unit is authenticated, the software image may be executed and the control unit may boot. However, if the software image of the control unit is not authenticated, the control unit may be prevented from booting and other security measures may be implemented. Authenticating software images of control units as described herein may increase the overall security and reliability of the associated computing system. For example, centralizing authentication of the different control units with the secure storage device may improve the security of the control units and decrease a probability that any given control unit or the system more generally is compromised by an unauthorized user. [0015] Features of the disclosure are initially described in the context of systems as described with reference to FIGs. 1 and 2. Features of the disclosure are described in the context of block diagrams and process flow diagrams with reference to FIGs. 3 and 4.
These and other features of the disclosure are further illustrated by and described with reference to an apparatus diagram and a flowchart that relate to authenticating software images as described with reference to FIGs. 5 and 6.[0016] FIG. 1 illustrates an example of a system 100 that supports authenticating software images in accordance with examples as disclosed herein. The system 100 may include a central device 105 and one or more control units 185. The control unit(s) 185 may be an example of an infotainment system of a vehicle, a telematics system of a vehicle, a powertrain system of a vehicle, a speed control system of a vehicle, or the like.[0017] The central device 105 may include a processor 117, a storage device 112, and one or more additional components 145. The storage device 112 may include user space 114 that is configured to store data (e.g., general purpose data). The storage device 112 may also include a secure component 110 that is configured to securely store at least software image data 140 and software hash data 125. In some cases, the secure component 110 may store additional software 130.[0018] In some examples, the storage device 112 may store one or more keys, such as a management key and/or an identity key. The one or more keys may be an example of a symmetric server root key (SRK) or a set of two keys such as a management public key and a device-side identity private key. The server root key or management public key may allow an entity in possession of a copy of the SRK or the management private key to manage the secure component 110 by turning on or configuring security features of the secure component 110. The storage device 112 may include one or more components associated with a memory device that are operable to perform one or more authentication procedures as discussed herein.[0019] The storage device 112 can be integrated into the central device 105, which may include a processor 117 interacting with the storage device 112.
The processor 117 may load one or more software images on the control unit(s) 185 and may communicate with the control unit(s) 185 to receive software images (e.g., portions of software images) during an authentication process. Prior to running a software image on a control unit 185, the storage device 112 may authenticate the software image to ensure that it has not been tampered with, and elect not to run the software image if its code has been modified by a malicious actor.[0020] In some examples, the storage device 112 may include copies of one or more software images that are stored at the control unit(s) 185. The copies of the software images may be stored at the software image data 140 portion of the secure component 110. Before the software images may run (e.g., before booting the software images), the image(s) may be authenticated to ensure that the underlying code has not been altered.[0021] Techniques for authenticating software images of the control unit(s) 185 are described herein. During a first portion of authenticating the image(s), the control unit(s) 185 or any entity in possession of the software image to be authenticated can transmit a portion of the software image to the secure component 110. The secure component 110 may calculate a hash of the portion of the software image and may store the hash to the hash 115 portion of the secure component 110. In some examples, as described herein, the calculated hash may not be stored to the secure component 110 during the authentication process.[0022] As described herein, the secure component 110 may store copies of software images run by the control unit(s) 185 or may store hashes of authenticated versions of the software images run by the control unit(s) 185 (e.g., golden hashes). To authenticate a software image of a control unit 185, the secure component 110 may generate a hash of the software image to be used by the control unit.
The secure component 110 may identify a hash stored in its memory or hash a same portion of the copy of the software image stored by the secure component 110 (e.g., identify or generate golden hashes). To authenticate the software image, the calculated and golden hashes may be compared and, if the hashes match, the control unit 185 may run the program associated with the software image. Authenticating software images of control units as described herein may increase the overall security and reliability of the associated computing system.[0023] FIG. 2 illustrates an example of a system 200 that supports authenticating software images in accordance with examples as disclosed herein. In some examples, the system 200 may be an example of an automotive system. The system 200 may include a computing unit 205 that includes a processor 210 and a secure storage device 215. The computing unit 205 may be coupled with one or more control units 220 via a bus 275 and/or a bus 280. Each of the control units 220 may run different programs for controlling different portions of the system 200 (e.g., different portions of an automobile). The control units 220 may each include a respective processor and memory device. The respective memory devices of the control units 220 may store code (e.g., programs, software images) associated with the respective programs. The stored code may be authenticated using the methods described herein, which may increase the overall security and reliability of the system 200.[0024] The system 200 may include multiple control units 220, which may be examples of or may be referred to as electronic control units (ECUs) 225. Each control unit 220 may be associated with a different function of the system 200. For example, in an automotive application, each control unit 220 may be associated with a different vehicle system.
For example, the control unit 220-a may be associated with vehicle telematics, the control unit 220-c may be associated with vehicle powertrain, the control unit 220-d may be associated with vehicle powertrain, and the control unit 220-e may be associated with vehicle speed control. In some examples, the control unit 220-b may be an example of a gateway control unit, which may allow the control unit 220-c and the control unit 220-e to communicate with the computing unit 205. The control unit 220-a and the control unit 220-d may communicate with the computing unit 205 via the bus 275. A control unit 220 may be an embedded system that controls one or more electrical systems or subsystems of the system 200. Examples of control units 220 of a vehicle may include a power train control unit, a human-machine interface control unit, a door control unit, different types of engine control units, seat control unit, speed control unit, telematic control unit, transmission control unit, brake control unit, battery management control unit, or others, or a combination thereof.[0025] Each control unit 220 may be configured to run one or more programs (e.g., software programs) associated with its respective function. The source code of the respective programs may be stored in a software image saved to the memory of each control unit 220 and may be executed by a respective processor. For example, a software image associated with the telematics of a vehicle may be stored to the memory 230 of the control unit 220-a, and a processor of the control unit 220-a may execute the software image to run the program. The memory 230 may include non-volatile memory such as flash memory (e.g., not-and (NAND) memory). In some examples, each of the control units 220 may include non-volatile memory and may include a respective processor configured to execute a software image stored to the respective memory.
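For illustration only, the generate-and-compare authentication flow described above might be sketched as follows; this is a minimal sketch, and the function names, the use of SHA-256, and the byte-offset form of the sampled portion are assumptions rather than details taken from the disclosure.

```python
import hashlib

def hash_portion(image: bytes, start: int, size: int) -> str:
    # Hash only the sampled portion of the software image, not the whole image
    return hashlib.sha256(image[start:start + size]).hexdigest()

def authenticate(unit_image: bytes, golden_copy: bytes,
                 start: int, size: int) -> bool:
    # Compare the hash of the control unit's code portion against the hash of
    # the same portion of the securely stored copy (the "golden" hash)
    return hash_portion(unit_image, start, size) == \
           hash_portion(golden_copy, start, size)
```

If the two hashes match, the control unit may run the program; if they differ, the image stored to the control unit may have been altered.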
[0026] The computing unit 205 (e.g., the central computing unit 205) may be configured to manage each of the control units 220. For example, the computing unit 205 may communicate with each of the control units 220 (e.g., via the bus 275, the bus 280, and/or the control unit 220-b) to execute and/or authenticate respective software images. The computing unit 205 may include a secure storage device 215, which may include non-volatile memory such as flash memory (e.g., NAND memory). The secure storage device 215 may be configured to store a copy of the software image of each control unit 220 (e.g., a copy of the code stored to each control unit 220). The copies of the software images stored to the secure storage device 215 may be used for authenticating the software images stored to each control unit 220, and may be stored to the secure storage device 215 during an initialization of the system 200. For example, in an automotive context, the copies of the software images may be stored to the secure storage device 215 during a manufacturing process of a vehicle.[0027] By storing a copy of each software image to the secure storage device 215 or by storing hashes of authenticated versions of each software image, the computing unit 205 may authenticate the software images stored to each control unit 220. Because a system 200 may include multiple control units 220 storing respective software images, each control unit may be susceptible to an independent malicious attack. By storing copies of each of the software images to the secure storage device 215 or copies of the hashes of authenticated versions of each software image, the computing unit 205 may be able to authenticate each of the control units 220 thus increasing the overall security of the system 200. For example, the secure storage device 215 may be configured to authenticate the software images of each control unit 220 upon a boot sequence of the computing system. 
Thus, before the control units 220 execute a respective software image, the software image may be authenticated to ensure that it was not subject to a malicious attack. Additional details of the authentication process may be described below with reference to FIGs. 3 and 4.[0028] FIG. 3 illustrates an example of a memory device 300 in accordance with examples as disclosed herein. In some examples, the memory device 300 may be or may include a secure storage device 305, which may be an example of a secure storage device 215 as described with reference to FIG. 2. The secure storage device 305 may include a controller 310 that is coupled with memory 315. The memory 315 may include non-volatile memory, such as flash memory. The secure storage device 305 may be located within a computing unit, such as the computing unit 205 as described with reference to FIG. 2. In some examples, the secure storage device 305 may store copies of software images of various control units, such as the control units 220 as described with reference to FIG. 2. The stored copies of the code may be used for authenticating software images stored to the control units using the methods described herein, which may increase the overall security and reliability of the associated computing system.[0029] The secure storage device 305 may include a controller 310, which may be configured to authenticate software images associated with control units (e.g., control units 220 as described with reference to FIG. 2) of a computing system. In some examples, the controller 310 may be configured to communicate with various control units to obtain (e.g., measure, sample) portions of respective software images for authentication. For example, the controller 310 may obtain portions of the source code of respective control units and generate hashes associated with the respective portions of the source code.
The generated hashes may be compared with hashes (e.g., golden hashes) stored to the memory 315 during an authentication process.[0030] The memory 315 may be coupled with the controller 310 and may include one or more partitions. The memory 315, which may include non-volatile memory, may include a first partition 320, a second partition 325, and a third partition 330. In other examples (not shown), the memory 315 may include N partitions, where N is an integer (e.g., a positive integer). As shown in FIG. 3, the first partition 320 may be configured to store one or more tables for use in authenticating the software images of the control units. The first partition 320 may be secure such that it is accessible only by the controller 310. The second partition 325 may be configured to store copies of the software images of each control unit or to store the hashes of authenticated versions of the software images of each control unit, and may be secure such that it is accessible only by the controller 310. The stored copies may be used for calculating hashes (e.g., golden hashes) and/or restoring a software image of a control unit (e.g., a control unit 220 as described with reference to FIG. 2). In some examples, the first partition 320 and the second partition 325 may be a single partition.[0031] Additionally or alternatively, the third partition 330 may represent user space for general storage, and may be accessible to components other than the controller 310. That is, the third partition 330 may not be secured in the same manner as the first partition 320 and the second partition 325 are secured. In some examples, it may be desirable to maximize the size of the third partition 330.
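The partition access rules just described might be modeled as in the following sketch; the access-check function and the requester labels are hypothetical, introduced only to illustrate that the first and second partitions are controller-only while the third partition is general user space.

```python
# Hypothetical model of the memory 315 layout: partitions 1 (tables) and
# 2 (image copies / golden hashes) are secure; partition 3 is user space.
SECURE_PARTITIONS = {1, 2}
USER_PARTITION = 3

def may_access(partition: int, requester: str) -> bool:
    # Only the controller may touch the secure partitions; any component
    # may use the general-purpose user-space partition
    if partition in SECURE_PARTITIONS:
        return requester == "controller"
    return True
```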
Thus, the size of the memory may be a matter of design choice, though it may be large enough to store a copy of the software image of each control unit while maximizing the amount of available user space.[0032] To authenticate software images of the control units of an associated computing system, copies of each software image may be stored to a secure portion of the memory 315 (e.g., to the second partition 325). In some examples, the copies of the software images may be stored during manufacturing of an associated computing system, and may be stored by the controller 310 or other device that has permission to access a secure portion of the memory 315 (e.g., the second partition 325). The copies of the software images may represent the intended version of the source code of a software image (e.g., an unaltered version of the source code).[0033] In some examples, the controller 310 may be configured to generate one or more hashes (e.g., golden hashes) associated with each copy of a software image stored to the second partition 325. The hashes may be calculated by sampling (e.g., measuring) patterns of the source code of each software image. The patterns may represent subsets of the complete source code, and may be determined based on various operating characteristics of the associated computing system and/or prior assessments of known malicious attacks. For example, the patterns may be selected based on read speeds, read sizes, bus speeds, a maximum quantity of patterns for any one control unit, or other characteristics or known information. The patterns may be stored to a table (e.g., a first table) stored to the first partition 320.[0034] The first table, which may be referred to as a control unit patterns table, may store various pattern identifications (IDs) and associated sampling patterns. For example, each pattern ID may be associated with a respective portion of the source code of a software image to measure during an authentication operation.
The respective portion may identify a starting address and a size of code to sample. Table 1, as reproduced below, may illustrate an example control unit patterns table.Table 1[0035] In other examples (not shown), the control unit patterns table may include any quantity of pattern IDs, and each pattern ID may include any quantity of patterns to sample. Additionally or alternatively, the control unit patterns table may store information relating to an estimated size of the source code that a respective pattern ID may sample (e.g., a percentage of the full source code), an estimated time to measure the pattern of code, and/or a score indicating the effectiveness of using the respective pattern ID in an authentication operation.[0036] A sampling pattern may be used to generate hashes (whether hashes of the software image being run by the control unit or hashes of the authenticated versions of the software image). A sampling pattern may enable the secure storage device to load less information than the entire software image, which can decrease the time spent to authenticate the various control units. During an authentication operation, both hashes used should be associated with the same sampling pattern. If different sampling patterns of the software image are used to generate the hashes, the hashes will likely not match. Additionally, using multiple sampling patterns may allow the secure storage device to store hashes or multiple versions of code of the same software image, which may improve the security performance of the authentication operation. 
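Because the body of Table 1 is not reproduced above, the entries in the following sketch are invented purely for illustration; it shows one way a control unit patterns table mapping pattern IDs to (starting address, size) regions, together with a sampler that reads only those regions, might be represented.

```python
# Invented example entries: each pattern ID names one or more
# (starting address, size) regions of the source code to sample.
CONTROL_UNIT_PATTERNS = {
    "1.1": [(0x0000, 64), (0x0400, 128)],
    "1.2": [(0x0100, 64)],
    "2.3": [(0x0200, 256)],
    "3.4": [(0x0040, 32)],
    "3.6": [(0x0300, 128)],
}

def sample(image: bytes, pattern_id: str) -> bytes:
    # Concatenate the regions named by the pattern ID; this reads far less
    # data than the entire software image
    return b"".join(image[start:start + size]
                    for start, size in CONTROL_UNIT_PATTERNS[pattern_id])
```

Hashing the output of `sample` rather than the whole image is what lets the secure storage device load less information than the entire software image during authentication.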
For example, if a hash associated with a first sampling pattern fails to identify a modification introduced by a malicious actor, other hashes associated with other sampling patterns may be used to detect modifications and correctly result in an authentication failure.[0037] In some examples, a second table (e.g., a pattern sequences table) may be used in conjunction with the control unit patterns table to authenticate software images of control units. For example, the second table may be used when generating hashes (e.g., golden hashes) associated with the copies of the software images stored to the second partition 325, and to determine which patterns of the software images of each control unit to measure. Each entry in the second table may be associated with a sequence number and one or more pattern IDs. A sequence may correlate to a software image of a control unit, and the pattern IDs may correspond to indexes of sampling patterns used by the secure storage device to generate golden hashes or portions of software images stored by the secure storage device. In case the golden hashes are statically generated, the table may include the hash for the entire sampling sequence or one hash per element of the list of pattern IDs. Table 2, as reproduced below, may illustrate an example pattern sequences table.Table 2[0038] When generating hashes (e.g., golden hashes) associated with the copies of the software images stored to the second partition 325 (e.g., authenticated versions of the sampled software images), the controller 310 may measure the code of each software image using the pattern IDs of the pattern sequences table and associated sampling patterns of the control unit patterns table. For example, a first hash for each software image may be generated using the sampling patterns (from Table 1) associated with pattern IDs 1.1, 2.3, and 3.6 that are associated with sequence 1 in Table 2.
To further the example, a second hash for each software image may be generated using the sampling patterns (from Table 1) associated with pattern IDs 1.2, 2.3, and 3.4 that are associated with sequence 2 in Table 2. The measured data for each software image may be hashed and may be stored to the memory 315. In some examples, the generated hashes (e.g., the golden hashes) may be stored as entries in the pattern sequences table (not shown). In other examples, the hashes may be stored to a secure portion of the memory 315 (e.g., to the first partition 320 or the second partition 325). As described herein, the second partition 325 may store copies of software images that may be used for calculating hashes (e.g., golden hashes) and/or restoring a software image of a control unit (e.g., a control unit 220 as described with reference to FIG. 2).[0039] In some examples, the golden hashes may be generated each time the associated computing system is booted, which may be referred to as dynamically generating the hashes. In other examples, the golden hashes may be generated a single time, which may be referred to as statically generating the hashes. For example, the hashes may be generated upon manufacturing the secure storage device 305 and/or upon installing a software update on one or more control units. While dynamically generating the hashes may provide some extra security measures relative to statically generating the hashes, statically generating the hashes may reduce the overall boot time of the associated computing system which may be desirable.[0040] The hashes stored in the memory 315 (e.g., the golden hashes) can be used to authenticate the software images of various control units of the computing system. To authenticate the software images of the control units, the controller 310 may select a random integer corresponding to the sequence column in the pattern sequences table. 
For example, if the controller 310 randomly selects two (2), then the entry associated with sequence 2 may be used to measure the source code of the software image of each control unit. Using the same example, the source code of each control unit that is associated with the sampling patterns of pattern IDs 1.2, 2.3, and 3.4 may be measured and hashed. The hashes generated by measuring the source code of each control unit may be compared with the hashes stored to the memory 315 (e.g., the golden hashes) to authenticate the software images of the control units.[0041] If a respective software image of a control unit is authenticated (e.g., if the compared hashes match), the software image may be executed and the program may boot as intended. However, if a software image is not authenticated (e.g., if the compared hashes do not match), then the software image may have been subject to a malicious attack. In situations where a software image is not authenticated, the controller 310 may delay booting of the associated computing system and identify the faulty control unit by enabling a flag associated with the faulty control unit (e.g., raising a flag associated with the faulty control unit). For example, in an automotive application, the controller 310 may delay starting the automobile and may identify the faulty control unit as needing repair. In some examples, the state of the flag may remain across power cycles, forcing a full authentication sequence rather than a sampled sequence until the source code is known to be restored to its intended state in every ECU.[0042] In some examples, a flag for a control unit may be enabled during the boot sequence. If the flag is enabled, the controller 310 may measure and hash the entire source code of the software image of the control unit, as opposed to measuring and hashing only a portion.
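The selection logic described above — a random row of the pattern sequences table ordinarily, and a hash of the entire source code when an attack detection flag has been raised — might be sketched as follows; the table entries (Table 2's body is not reproduced in the text), the full-image marker value, and the use of SHA-256 are assumptions made only for illustration.

```python
import hashlib
import secrets

# Invented example rows of a pattern sequences table:
# sequence number -> list of pattern IDs.
PATTERN_SEQUENCES = {1: ["1.1", "2.3", "3.6"], 2: ["1.2", "2.3", "3.4"]}

def choose_sequence(flag_enabled: bool):
    # A raised flag forces hashing of the entire source code; otherwise a
    # random sequence keeps the sampled portion unpredictable to an attacker
    if flag_enabled:
        return "FULL_IMAGE"  # hypothetical marker for a full-image hash
    return secrets.choice(sorted(PATTERN_SEQUENCES))

def hash_for_sequence(image: bytes, sequence, sample) -> str:
    # 'sample(image, pattern_id)' is assumed to return the bytes of the
    # regions named by a pattern ID in the control unit patterns table
    if sequence == "FULL_IMAGE":
        return hashlib.sha256(image).hexdigest()
    digest = hashlib.sha256()
    for pattern_id in PATTERN_SEQUENCES[sequence]:
        digest.update(sample(image, pattern_id))
    return digest.hexdigest()
```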
The complete hash (e.g., the complete first hash) may be compared with a golden hash stored to the memory 315 and associated with the entire source code of the correct software image. Additionally or alternatively, if a single flag of a control unit is enabled during the boot sequence, the controller 310 may measure and hash the entire source code of the software image of each control unit of the system. The complete first hashes may be compared with complete hashes generated by hashing the source code of each of the copies of the software images stored to the memory 315.[0043] Upon identifying the faulty control unit, the controller 310 may be configured to download the valid source code to the faulty control unit from the memory 315 (e.g., from the second partition 325), if the copy of the source code is available in the second partition 325. Additionally or alternatively, upon detecting a faulty control unit, the controller 310 may be configured to prevent the computing system (e.g., the automobile) from starting, display an error on one or more displays associated with the computing system, and/or boot the computing system in a safe mode by refraining from booting the faulty control unit(s). Authenticating software images of control units as described herein may increase the overall security and reliability of the associated computing system.[0044] FIG. 4 illustrates an example process flow diagram 400 that supports authenticating software images in accordance with examples as disclosed herein. The process flow diagram 400 may illustrate the operation of one or more components of the system 200 as described with reference to FIG. 2 and the secure storage device 305 as described with reference to FIG. 3. For example, the process flow diagram 400 may illustrate a secure storage device 305 authenticating a software image of a control unit 220 as described with reference to FIGs. 2 and 3, respectively. 
The process flow diagram 400 may illustrate the operations of the control units 405 and a secure storage device 410.[0045] A secure storage device may be configured to authenticate software images of one or more control units of a computing system. The secure storage device may store golden hashes generated from intended versions of the software images of the control units of the computing system. The stored golden hashes may enable the secure storage device to determine whether the software images of the control units have been altered.[0046] To authenticate the software images, the secure storage device may be configured to measure a portion of the source code of a software image saved to a control unit. As described with reference to FIG. 3, the secure storage device may determine a portion of the source code to measure based on entries in a control unit patterns table and/or a pattern sequences table. The secure storage device may measure the portion of the source code of each control unit and generate a corresponding hash (e.g., a corresponding cryptographic hash). The generated hash may be compared with a hash stored to the secure storage device and associated with a stored copy of the source code. Based on a comparison of the hashes, the secure storage device may determine whether the software image of any control unit has been modified.[0047] In some examples, the control units 405 may include a software image to be authenticated (e.g., during a boot sequence). The software image may be or may include an operating system, a software program, or the like. Because control units may be susceptible to hacking, where one or more aspects of code of the software image are altered, it may be beneficial to authenticate the software image before the control units 405 complete an associated boot sequence.
If the code has been altered, the control units 405 may be prevented from booting and/or a correct version of the software image may be loaded to the control units 405 (e.g., from the secure storage device 410). In other examples, the program associated with the software image may be prevented from running so that additional damage done by the hacking does not occur. Authenticating software images of control units as described herein may increase the overall security and reliability of the associated computing system.[0048] At 415, one or more tables for use in authenticating a software image of the control units 405 may be generated (e.g., by a remote computing device) and stored to the secure storage device 410. For example, the tables may be generated by a remote computer that knows the intended versions of the software images for each of the control units 405. The remote computer may store the tables to the secure storage device 410 during a system installation phase and/or during further maintenance or software update operations. For example, as described with reference to FIG. 3, the remote computing device may generate a control unit patterns table and/or a pattern sequences table. The pattern sequences table may be used for identifying the pattern IDs to use in measuring the source code of the software image of the control units 405. The control unit patterns table may be used for identifying sampling patterns of the code based on the pattern IDs. In some examples (not shown), the tables may be constructed at a different portion of the authentication process. For example, when statically generating hashes (e.g., golden hashes), the tables and associated hashes may be generated prior to initiating the boot sequence (e.g., prior to 420).
In other examples, when dynamically generating hashes (e.g., golden hashes), one or more portions of the tables may be generated after receiving a portion of a software image from the control units 405 (e.g., during or after 435).[0049] At 420, a boot sequence may optionally be initiated at the secure storage device 410. A boot sequence may be initiated when the secure storage device 410 is powered on or when a control unit 405 associated with the secure storage device 410 is activated, for example when an automobile is started (e.g., powered on). In some examples, the control units 405 may be booted after a software image associated with the control units 405 is authenticated.[0050] At 425, the secure storage device 410 may optionally select a sequence ID corresponding to a row in the pattern sequences table. The selected sequence ID may identify pattern IDs for measuring the source code of the software image of the control units 405. Because the pattern IDs may indicate which portions of the code to measure (e.g., using the control unit patterns table), the selected sequence ID may indirectly determine which portion of the source code of the control units 405 is measured. In case an attack detection flag is active, a sequence ID enforcing the hashing of the entire source code for all ECUs may be selected when the device boots. In case the flag is inactive, a random row in the table may be selected.[0051] At 430, the secure storage device 410 may identify a portion of the source code of a software image of the control units 405 to measure. In some examples, at 430, the secure storage device 410 may communicate with the control units 405 to measure the portion of the software image (e.g., the portion of the source code of the software image).
The portion of the software image identified (measured) may be based on the selected sequence ID (e.g., at 425) and based on entries of the pattern sequences table and/or control unit patterns table.[0052] At 440, the secure storage device 410 may receive the portion of the software image 435 from the control units 405. As described herein, the portion of the software image 435 may include portions of the source code of the software image stored to the control units 405. The portion of the source code received by the secure storage device 410 may include a subset of the entire source code of the software image and may be received (e.g., at 440) based on selecting the sequence ID (e.g., at 425) and/or identifying the portion of the software image (e.g., at 430).[0053] At 445, the secure storage device 410 may optionally generate a second hash (e.g., a golden hash) using a copy of the software image stored to the secure storage device 410. In some examples, this may be referred to as dynamically generating a second hash of a copy of the software image stored to the secure storage device 410. The second hash may be generated by measuring a portion of the copy of the software image stored to the secure storage device 410 that is the same as the measured portion of the software image stored to the control units 405. For example, the portion of the copy of the software image stored to the secure storage device 410 may be measured based on the selected sequence ID (e.g., at 425) and based on entries of the pattern sequences table and/or control unit patterns table. The second hash may be stored (e.g., temporarily stored) as an entry in the pattern sequences table.[0054] At 450, the secure storage device 410 may generate a first hash associated with the received portion of the software images for the control units 405 (e.g., at 440).
The generated first hash may be a cryptographic hash and may be used to authenticate (or not authenticate) the software image of the control units 405. In some examples, the generated hash may be temporarily stored at the secure storage device 410 until the authentication process is complete.[0055] At 455, the secure storage device 410 may identify a second hash for use in authenticating the software image of the control units 405. In some examples, one or more second hashes may be stored in the pattern sequences table. For example, the second hashes may be associated with a respective sequence ID. Accordingly, when a sequence ID is selected (e.g., at 425), the corresponding sequence ID may be associated with a second hash. Thus, the secure storage device 410 may identify the second hash based on the sequence ID selected (e.g., at 425).[0056] At 460, the secure storage device 410 may authenticate (or not authenticate) the software image of the control units 405. The secure storage device may compare the first hash (e.g., the hash generated based on receiving the portion of the software image from the control units 405) to the second hash (e.g., the golden hash generated using a portion of the copy of the software image stored to the secure storage device 410). The software image of the control units 405 may be authenticated if the first hash and second hash match, and may not be authenticated if the first hash and second hash do not match.[0057] At 465, the secure storage device 410 may generate a message 470 indicating whether the software image of the control units 405 was authenticated. If authenticated, the message 470 may indicate to the control units 405 that it is safe to boot (e.g., to execute the software image). In case an attack detection flag was raised, the secure storage device 410 may remove the flag so that subsequent boot sequences may be conducted normally. 
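As an illustration of the measurement and comparison at 445 through 460, a minimal sketch might look like the following (the use of SHA-256 is an assumption; the disclosure does not fix a particular hash algorithm, and the function name is hypothetical):

```python
import hashlib
import hmac

def authenticate_portion(received_portion: bytes, stored_copy_portion: bytes) -> bool:
    # 450: first hash over the data received from the control unit
    first_hash = hashlib.sha256(received_portion).digest()
    # 445/455: second ("golden") hash over the same portion of the stored copy
    second_hash = hashlib.sha256(stored_copy_portion).digest()
    # 460: authenticate only if the two hashes match (constant-time compare)
    return hmac.compare_digest(first_hash, second_hash)
```

The constant-time comparison is a defensive choice in this sketch; the disclosure only requires that the first and second hashes match.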
If not authenticated, the same attack detection flag may be raised and the message may instruct the control units 405 to refrain from booting (e.g., to refrain from executing the software image), may prompt an error message to be displayed on one or more components of the associated computing system, and/or may include a copy of the software image stored at the secure storage device 410 to be loaded at the control units 405. The message 470 may be transmitted to the control units 405 (or other component of the associated computing system (not shown)).[0058] At 475, the control units 405 may receive the message. At 480, the control units 405 may optionally execute the stored software image. For example, when the software image of the control units 405 is authenticated, the software image may be executed. In other examples (not shown), at 480 the control units 405 may refrain from executing the software image and/or may download a copy of the source code of the software image stored to the secure storage device 410 that was transmitted in the message 470.[0059] FIG. 5 shows a block diagram 500 of a secure storage device 505 that supports authenticating software images in accordance with examples as disclosed herein. The secure storage device 505 may be an example of aspects of a secure storage device as described with reference to FIGs. 1 through 4. The secure storage device 505 may include an identification component 510, a reception component 515, a calculation component 520, an authentication component 525, a transmission component 530, a generation component 535, a storing component 540, a comparison component 545, a determination component 550, an initiation component 555, an assigning component 560, and a selecting component 565. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). 
In some examples, each of the components may be implemented on a secure device’s controller, such as the controller 310 as described with reference to FIG. 3. In other examples, some or all of the components may be implemented in software running on an external processor, such as a processor 117 as described with reference to FIG. 1.[0060] The identification component 510 may identify, by a secure storage device, a portion of a software image associated with an electronic control unit to authenticate using the secure storage device. In some examples, the identification component 510 may identify, by the secure storage device, a second hash stored by the secure storage device and associated with the portion of the software image of the electronic control unit based on calculating the first hash. In some examples, the identification component 510 may identify the second hash stored by the secure storage device based on the identified portion of the software image.[0061] In some examples, the identification component 510 may identify a quantity of portions of the software image associated with the electronic control unit based on receiving the diagnostic information, where each of the portions of the software image are associated with a different pattern identifier. In some examples, the identification component 510 may identify, by the secure storage device, a portion of one or more software images that are each associated with additional electronic control units configured to boot on a same system to authenticate using the secure storage device.[0062] In some examples, the identification component 510 may identify, by the secure storage device, respective second hashes stored by the secure storage device and associated with the respective portions of the software images of the set of additional electronic control units based on generating the respective third hashes. 
In some examples, the identification component 510 may identify a flag indicating that a third hash did not match before calculating the first hash.[0063] The reception component 515 may receive, by the secure storage device, data associated with the portion of the software image from the electronic control unit based on identifying the portion to authenticate. In some examples, the reception component 515 may receive the portion of the second software image from the electronic control unit.[0064] In some examples, the reception component 515 may receive, by the secure storage device, diagnostic information associated with the electronic control unit before identifying the portion of the software image. In some examples, the reception component 515 may receive, by the secure storage device, data associated with the portions of the software images from a set of additional embedded systems electronic control units based on identifying the portions to authenticate.[0065] The calculation component 520 may calculate, by the secure storage device, a first hash associated with the data received from the electronic control unit. In some examples, the calculation component 520 may calculate, by the secure storage device, the second hash using the portion of a second software image associated with the electronic control unit.[0066] The authentication component 525 may authenticate, by the secure storage device, the software image associated with the electronic control unit based on the first hash matching the second hash. In some examples, the authentication component 525 may refrain from authenticating the software image based on determining that the first hash is different than the second hash. 
In some examples, the authentication component 525 may authenticate, by the secure storage device, the software images associated with the set of additional electronic control units based on the third hashes matching the second hashes.[0067] The transmission component 530 may transmit, to the electronic control unit, an indication of the authentication. In some examples, the transmission component 530 may transmit, to the electronic control unit, a second indication that the software image associated with the electronic control unit is not authenticated based on refraining from authenticating the software image received from the electronic control unit.[0068] In some examples, the transmission component 530 may transmit, to the electronic control unit, a copy of a second software image stored by the secure storage device based on refraining from authenticating the software image received from the electronic control unit. In some examples, the transmission component 530 may transmit, to the set of additional electronic control units, an indication of the respective authentication.[0069] The generation component 535 may generate a set of second hashes associated with a set of portions of the software image of the electronic control unit, each second hash of the set of second hashes corresponding to one portion of the set of portions of the software image. In some examples, the generation component 535 may generate a random value corresponding to an entry in the second table, where identifying the portion of the software image to authenticate is based on the random value. In some examples, the generation component 535 may generate, by the secure storage device, a respective third hash associated with the data received from each of the set of additional electronic control units. 
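The generation of the set of second hashes described above might, purely as an illustration, be sketched as follows (the use of SHA-256, and the convention that the pattern identifier is the portion's index, are assumptions):

```python
import hashlib

def generate_second_hashes(stored_image: bytes, address_ranges):
    # One second ("golden") hash per portion of the software image; here the
    # pattern identifier is simply the portion's index (a hypothetical choice).
    return {
        pattern_id: hashlib.sha256(stored_image[start:end]).digest()
        for pattern_id, (start, end) in enumerate(address_ranges)
    }
```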
[0070] The storing component 540 may store, by the secure storage device, the set of second hashes based on generating the set of second hashes, where identifying the second hash is based on storing the set of second hashes.[0071] The comparison component 545 may compare the first hash with the second hash based on identifying the second hash.[0072] The determination component 550 may determine whether the first hash is different than the second hash based at least on comparing the first hash with the second hash, where authenticating the software image is based on the first hash matching the second hash.[0073] The initiation component 555 may initiate a boot sequence of the electronic control unit, where identifying the portion of the software image of the electronic control unit to authenticate is based on initiating the boot sequence. In some examples, the initiation component 555 may initiate a boot sequence of one or more of a set of additional electronic control units.[0074] The assigning component 560 may assign one or more address ranges of the software image to each of the portions of the software image based on identifying the quantity of portions of the software image. In some examples, the assigning component 560 may assign the quantity of portions of the software image, the pattern identifier associated with each of the quantity of portions of the software image, and the one or more assigned address ranges to respective entries in a first table stored at the secure storage device based on assigning the one or more address ranges of the software image to each of the portions of the software image.[0075] In some examples, the assigning component 560 may assign an integer value to each of the respective second hashes based on assigning the respective entries in the first table. 
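The table assignments described in [0074] and [0075] might be sketched as follows (a hypothetical illustration; the actual table layouts are not specified by the disclosure):

```python
def build_tables(address_ranges, second_hashes):
    # First table ([0074]): one entry per portion, holding the pattern
    # identifier and its assigned address range(s).
    first_table = [
        {"pattern_id": pid, "address_range": rng}
        for pid, rng in enumerate(address_ranges)
    ]
    # Second table ([0075]): an integer value (sequence ID) assigned to each
    # generated second hash and its associated pattern identifier.
    second_table = [
        {"sequence_id": sid, "pattern_id": pid, "second_hash": second_hashes[pid]}
        for sid, pid in enumerate(sorted(second_hashes))
    ]
    return first_table, second_table
```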
In some examples, the assigning component 560 may assign the generated second hashes and associated pattern identifiers to respective entries in a second table stored at the secure storage device based on assigning the integer value to each of the generated second hashes.[0076] In some examples, the selecting component 565 may select a first entry in the second table stored at the secure storage device based at least in part on identifying the flag, where the entire software image is to be authenticated based on selecting the first entry in the second table.[0077] FIG. 6 shows a flowchart illustrating a method or methods 600 that supports authenticating software images in accordance with examples as disclosed herein. The operations of method 600 may be implemented by a secure storage device or its components as described herein. For example, the operations of method 600 may be performed by a secure storage device as described with reference to FIG. 5. In some examples, a secure storage device may execute a set of instructions to control the functional elements of the secure storage device to perform the described functions. Additionally or alternatively, a secure storage device may perform aspects of the described functions using special-purpose hardware.[0078] At 605, the secure storage device may identify, by a secure storage device, a portion of a software image associated with an electronic control unit to authenticate using the secure storage device. The operations of 605 may be performed according to the methods described herein. In some examples, aspects of the operations of 605 may be performed by an identification component as described with reference to FIG. 5.[0079] At 610, the secure storage device may receive, by the secure storage device, data associated with the portion of the software image from the electronic control unit based on identifying the portion to authenticate. 
The operations of 610 may be performed according to the methods described herein. In some examples, aspects of the operations of 610 may be performed by a reception component as described with reference to FIG. 5.[0080] At 615, the secure storage device may calculate, by the secure storage device, a first hash associated with the data received from the electronic control unit. The operations of 615 may be performed according to the methods described herein. In some examples, aspects of the operations of 615 may be performed by a calculation component as described with reference to FIG. 5.[0081] At 620, the secure storage device may identify, by the secure storage device, a second hash stored by the secure storage device and associated with the portion of the software image of the electronic control unit based on calculating the first hash. The operations of 620 may be performed according to the methods described herein. In some examples, aspects of the operations of 620 may be performed by an identification component as described with reference to FIG. 5.[0082] At 625, the secure storage device may authenticate, by the secure storage device, the software image associated with the electronic control unit based on the first hash matching the second hash. The operations of 625 may be performed according to the methods described herein. In some examples, aspects of the operations of 625 may be performed by an authentication component as described with reference to FIG. 5.[0083] At 630, the secure storage device may transmit, to the electronic control unit, an indication of the authentication. The operations of 630 may be performed according to the methods described herein. In some examples, aspects of the operations of 630 may be performed by a transmission component as described with reference to FIG. 5.[0084] In some examples, an apparatus as described herein may perform a method or methods, such as the method 600. 
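A minimal end-to-end sketch of the method 600 flow (605 through 625), under the assumptions of SHA-256 and one address range per pattern identifier, might look like the following; the class and all names are hypothetical, not part of the disclosure:

```python
import hashlib

class SecureStorageDeviceSketch:
    """Hypothetical stand-in for the secure storage device; not part of the disclosure."""

    def __init__(self, stored_copy: bytes, address_ranges):
        self.stored_copy = stored_copy
        self.address_ranges = address_ranges  # one (start, end) per pattern identifier

    def method_600(self, ecu_image: bytes, pattern_id: int) -> bool:
        start, end = self.address_ranges[pattern_id]   # 605: identify the portion
        data = ecu_image[start:end]                    # 610: receive the portion's data
        first_hash = hashlib.sha256(data).digest()     # 615: calculate the first hash
        second_hash = hashlib.sha256(
            self.stored_copy[start:end]).digest()      # 620: identify the second hash
        return first_hash == second_hash               # 625: authenticate on a match
```

A returned value of True would correspond to transmitting an indication of authentication at 630.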
The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for identifying, by a secure storage device, a portion of a software image associated with an electronic control unit to authenticate using the secure storage device, receiving, by the secure storage device, data associated with the portion of the software image from the electronic control unit based on identifying the portion to authenticate, calculating, by the secure storage device, a first hash associated with the data received from the electronic control unit, identifying, by the secure storage device, a second hash stored by the secure storage device and associated with the portion of the software image of the electronic control unit based on calculating the first hash, authenticating, by the secure storage device, the software image associated with the electronic control unit based on the first hash matching the second hash, and transmitting, to the electronic control unit, an indication of the authentication.[0085] In some examples of the method 600 and the apparatus described herein, identifying the second hash may include operations, features, means, or instructions for identifying the second hash stored by the secure storage device based on the identified portion of the software image.[0086] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for generating a set of second hashes associated with a set of portions of the software image of the electronic control unit, each second hash of the set of second hashes corresponding to one portion of the set of portions of the software image, and storing, by the secure storage device, the set of second hashes based on generating the set of second hashes, where identifying the second hash may be based on storing the set of second hashes.[0087] In some examples of the method 600 and 
the apparatus described herein, identifying the second hash may include operations, features, means, or instructions for calculating, by the secure storage device, the second hash using the portion of a second software image associated with the electronic control unit.[0088] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for receiving the portion of the second software image from the electronic control unit.[0089] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for comparing the first hash with the second hash based on identifying the second hash, and determining whether the first hash may be different than the second hash based at least on comparing the first hash with the second hash, where authenticating the software image may be based on the first hash matching the second hash.[0090] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for refraining from authenticating the software image based on determining that the first hash may be different than the second hash, and transmitting, to the electronic control unit, a second indication that the software image associated with the electronic control unit may not be authenticated based on refraining from authenticating the software image received from the electronic control unit.[0091] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for transmitting, to the electronic control unit, a copy of a second software image stored by the secure storage device based on refraining from authenticating the software image received from the electronic control unit.[0092] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions 
for initiating a boot sequence of the electronic control unit, where identifying the portion of the software image of the electronic control unit to authenticate may be based on initiating the boot sequence.[0093] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for receiving, by the secure storage device, diagnostic information associated with the electronic control unit before identifying the portion of the software image, identifying a quantity of portions of the software image associated with the electronic control unit based on receiving the diagnostic information, where each of the portions of the software image may be associated with a different pattern identifier, and assigning one or more address ranges of the software image to each of the portions of the software image based on identifying the quantity of portions of the software image.[0094] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for assigning the quantity of portions of the software image, the pattern identifier associated with each of the quantity of portions of the software image, and the one or more assigned address ranges to respective entries in a first table stored at the secure storage device based on assigning the one or more address ranges of the software image to each of the portions of the software image.[0095] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for assigning an integer value to each of the respective second hashes based on assigning the respective entries in the first table.[0096] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for assigning the generated second hashes and associated pattern identifiers to respective entries in a second 
table stored at the secure storage device based on assigning the integer value to each of the generated second hashes.[0097] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for generating a random value corresponding to an entry in the second table, where identifying the portion of the software image to authenticate may be based on the random value.[0098] In some examples of the method 600 and the apparatus described herein, the second table includes a set of pattern identifiers for portions of a set of software images of a set of electronic control units and a set of hashes associated with each of the portions of the software image.[0099] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for identifying a flag indicating that a third hash did not match before calculating the first hash and selecting a first entry in the second table stored at the secure storage device based at least in part on identifying the flag, where the entire software image is to be authenticated based on selecting the first entry in the second table.[0100] In some examples of the method 600 and the apparatus described herein, the secure storage device includes a circuit that may be inaccessible to the electronic control unit.[0101] Some examples of the method 600 and the apparatus described herein may further include operations, features, means, or instructions for initiating a boot sequence of one or more of a set of additional electronic control units, identifying, by the secure storage device, a portion of one or more software images that may be each associated with additional electronic control units configured to boot on a same system to authenticate using the secure storage device, receiving, by the secure storage device, data associated with the portions of the software images from a set of additional embedded systems 
electronic control units based on identifying the portions to authenticate, generating, by the secure storage device, a respective third hash associated with the data received from each of the set of additional electronic control units, identifying, by the secure storage device, respective second hashes stored by the secure storage device and associated with the respective portions of the software images of the set of additional electronic control units based on generating the respective third hashes, authenticating, by the secure storage device, the software images associated with the set of additional electronic control units based on the third hashes matching the second hashes, and transmitting, to the set of additional electronic control units, an indication of the respective authentication.[0102] In some examples of the method 600 and the apparatus described herein, the secure storage device may be configured to authenticate software images associated with a set of electronic control units within a system.[0103] It should be noted that the methods described herein are possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, portions from two or more of the methods may be combined.[0104] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. 
Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths.[0105] The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components from one another, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.[0106] The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOS), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.[0107] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. 
The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor’s threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” when a voltage less than the transistor’s threshold voltage is applied to the transistor gate.[0108] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.[0109] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. 
If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0110] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0111] The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0112] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. 
For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”[0113] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. 
By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general- purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of these are also included within the scope of computer-readable media.[0114] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
A manufacturing method for an integrated circuit is provided having a semiconductor substrate with a semiconductor device. A device dielectric layer is formed on the semiconductor substrate. A channel dielectric layer on the device dielectric layer has an opening formed therein. A barrier layer lines the channel opening. A seed layer is deposited over the barrier layer. The seed and barrier layers are then removed above the channel dielectric layer. A second seed layer is deposited over the semiconductor substrate. A conductor layer is electroplated over the second seed layer to fill the opening. The electroplated conductor layer and the second seed layer are removed above the channel dielectric layer.
What is claimed is: 1. A method of manufacturing an integrated circuit comprising: providing a semiconductor substrate having a semiconductor device provided thereon; forming a dielectric layer over the semiconductor substrate; forming an opening in the dielectric layer; depositing a barrier layer to line the opening; depositing a seed layer over the barrier layer; removing the seed layer and barrier layer above the dielectric layer; depositing a second seed layer over the seed layer and the dielectric layer; depositing a conductor core over the second seed layer by electroplating to fill the opening and connect to the semiconductor device wherein removing the seed layer and barrier layer are performed before depositing the conductor core; and removing the conductor core and the second seed layer above the dielectric layer. 2. The method of manufacturing an integrated circuit as claimed in claim 1 wherein removing the seed layer and barrier layer uses an abrasiveless removal process. 3. The method of manufacturing an integrated circuit as claimed in claim 1 wherein removing the conductor core and the second seed layer uses a chemical-mechanical polishing process. 4. The method of manufacturing an integrated circuit as claimed in claim 1 wherein forming the dielectric layer deposits a material having a dielectric constant under 3.9. 5. The method of manufacturing an integrated circuit as claimed in claim 1 wherein depositing the barrier layer deposits a material selected from a group consisting of tantalum, titanium, tungsten, an alloy thereof, and a compound thereof. 6. The method of manufacturing an integrated circuit as claimed in claim 1 wherein depositing the second seed layer deposits a material selected from a group consisting of copper, gold, silver, an alloy thereof, and a compound thereof. 7.
The method of manufacturing an integrated circuit as claimed in claim 1 wherein depositing the seed layer deposits a material selected from a group consisting of copper, gold, silver, an alloy thereof, and a compound thereof. 8. The method of manufacturing an integrated circuit as claimed in claim 1 wherein depositing the conductor core deposits material selected from a group consisting of copper, gold, silver, an alloy thereof, and a compound thereof. 9. A method of manufacturing an integrated circuit comprising: providing a semiconductor substrate having a semiconductor device provided thereon; providing a device dielectric layer over the semiconductor substrate; forming a channel dielectric layer over the device dielectric layer; forming an opening in the channel dielectric layer; depositing a barrier layer to line the opening; depositing a seed layer over the barrier layer; removing the seed layer and barrier layer above the channel dielectric layer by chemical-mechanical polishing; depositing a second seed layer over the seed layer and the channel dielectric layer; depositing a conductor core over the second seed layer by electroplating to fill the opening and connect to the semiconductor device wherein removing the seed layer and barrier layer are performed before depositing the conductor core; and removing the conductor core and the second seed layer above the channel dielectric layer by chemical-mechanical polishing. 10. The method of manufacturing an integrated circuit as claimed in claim 9 wherein removing the seed layer and the barrier layer by chemical-mechanical polishing uses an abrasiveless chemical-mechanical polishing solution. 11. The method of manufacturing an integrated circuit as claimed in claim 9 wherein forming the channel dielectric layer deposits a material having a dielectric constant under 3.9. 12.
The method of manufacturing an integrated circuit as claimed in claim 9 wherein depositing the barrier layer deposits a material selected from a group consisting of tantalum, titanium, tungsten, an alloy thereof, and a compound thereof. 13. The method of manufacturing an integrated circuit as claimed in claim 9 wherein depositing the seed layer deposits a material selected from a group consisting of copper, gold, silver, an alloy thereof, and a compound thereof. 14. The method of manufacturing an integrated circuit as claimed in claim 9 wherein depositing the second seed layer deposits a material selected from a group consisting of copper, gold, silver, an alloy thereof, and a compound thereof. 15. The method of manufacturing an integrated circuit as claimed in claim 9 wherein depositing the conductor core deposits material selected from a group consisting of copper, gold, silver, an alloy thereof, and a compound thereof. 16. A method of manufacturing an integrated circuit comprising: providing a semiconductor substrate having a semiconductor device provided thereon; providing a device dielectric layer over the semiconductor substrate; forming a channel dielectric layer over the device dielectric layer; forming an opening in the channel dielectric layer; depositing a barrier layer to line the opening; depositing a seed layer over the barrier layer; removing the seed layer above the channel dielectric layer by chemical-mechanical polishing; depositing a second seed layer over the seed layer and the channel dielectric layer; depositing a conductor core over the second seed layer by electroplating to fill the opening and connect to the semiconductor device wherein removing the seed layer is performed before depositing the conductor core; and removing the conductor core and the second seed layer above the channel dielectric layer by chemical-mechanical polishing.
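The claimed fabrication sequence lends itself to a simple ordering check. The sketch below is illustrative only: the step names are informal labels of our own choosing, not claim language, and the check merely encodes the "wherein" clause of claim 1 (removal of the first seed and barrier layers must precede electroplating of the conductor core).

```python
# Illustrative sketch of the fabrication sequence of claim 1.
# Step labels are informal paraphrases, not claim language.
STEPS = [
    "provide substrate with semiconductor device",
    "form dielectric layer over substrate",
    "form opening in dielectric layer",
    "deposit barrier layer lining the opening",
    "deposit first seed layer over barrier layer",
    "remove first seed and barrier layers above dielectric",
    "deposit second seed layer",
    "electroplate conductor core over second seed layer",
    "remove conductor core and second seed layer above dielectric",
]

def removal_precedes_fill(steps):
    """True if the first seed/barrier removal occurs before electroplating,
    as required by the 'wherein' clause of claim 1."""
    remove = next(i for i, s in enumerate(steps) if "remove first seed" in s)
    plate = next(i for i, s in enumerate(steps) if "electroplate" in s)
    return remove < plate

print(removal_precedes_fill(STEPS))  # -> True
```

A sequence that filled the opening before stripping the first seed layer would fail this check, which is exactly the ordering the claims exclude.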
CROSS-REFERENCE TO RELATED APPLICATION(S)

The present application contains subject matter related to a concurrently filed U.S. Patent Application by Minh Quoc Tran and Christy Mei-Chu Woo entitled "PRE-FILL CMP AND ELECTROLESS PLATING METHOD FOR INTEGRATED CIRCUITS", identified by U.S. patent application Ser. No. 09/894,170, filed on Jun. 27, 2001, and commonly assigned to Advanced Micro Devices, Inc.

TECHNICAL FIELD

The present invention relates generally to integrated circuits and more particularly to controlling interconnect channel thickness therein.

BACKGROUND ART

In the manufacture of integrated circuits, after the individual devices such as the transistors have been fabricated in and on the semiconductor substrate, they must be connected together to perform the desired circuit functions. This interconnection process is generally called "metallization" and is performed using a number of different photolithographic, deposition, and removal techniques.

In one interconnection process, which is called a "dual damascene" technique, two interconnect channels of conductor materials are separated by interlayer dielectric layers in vertically separated planes perpendicular to each other and interconnected by a vertical connection, or "via", at their closest point. The dual damascene technique is performed over the individual devices which are in a device dielectric layer, with the gate and source/drain contacts extending up through the device dielectric layer to contact one or more channels in a first channel dielectric layer.

The first channel formation of the dual damascene process starts with the deposition of a thin first channel stop layer. The first channel stop layer is an etch stop layer which is subject to a photolithographic processing step which involves deposition, patterning, exposure, and development of a photoresist, and an anisotropic etching step through the patterned photoresist to provide openings to the device contacts. The photoresist is then stripped.
A first channel dielectric layer is formed on the first channel stop layer. Where the first channel dielectric layer is of an oxide material, such as silicon oxide (SiO2), the first channel stop layer is a nitride, such as silicon nitride (SiN), so the two layers can be selectively etched.

The first channel dielectric layer is then subject to further photolithographic process and etching steps to form first channel openings in the pattern of the first channels. The photoresist is then stripped.

An optional thin adhesion layer is deposited on the first channel dielectric layer and lines the first channel openings to ensure good adhesion of subsequently deposited material to the first channel dielectric layer. Adhesion layers for copper (Cu) conductor materials are composed of compounds such as tantalum nitride (TaN), titanium nitride (TiN), or tungsten nitride (WN).

These nitride compounds have good adhesion to the dielectric materials and provide good barrier resistance to the diffusion of copper from the copper conductor materials to the dielectric material. High barrier resistance is necessary with conductor materials such as copper to prevent diffusion of subsequently deposited copper into the dielectric layer, which can cause short circuits in the integrated circuit.

However, these nitride compounds also have relatively poor adhesion to copper and relatively high electrical resistance.

Because of these drawbacks, pure refractory metals such as tantalum (Ta), titanium (Ti), or tungsten (W) are deposited on the adhesion layer to line the adhesion layer in the first channel openings. The refractory metals are good barrier materials, have lower electrical resistance than their nitrides, and have good adhesion to copper.

In some cases, the barrier material has sufficient adhesion to the dielectric material that the adhesion layer is not required, and in other cases, the adhesion and barrier material become integral.
The adhesion and barrier layers are often collectively referred to as a "barrier" layer herein.

For conductor materials such as copper, which are deposited by electroplating, a seed layer is deposited on the barrier layer and lines the barrier layer in the first channel openings to act as an electrode for the electroplating process. Processes such as electroless, physical vapor, and chemical vapor deposition are used to deposit the seed layer.

A first conductor material is deposited on the seed layer and fills the first channel opening. The first conductor material and the seed layer generally become integral, and are often collectively referred to as the conductor core when discussing the main current-carrying portion of the channels.

A chemical-mechanical polishing (CMP) process is then used to remove the first conductor material, the seed layer, and the barrier layer above the first channel dielectric layer to form the first channels. When a layer is placed over the first channels as a final layer, it is called a "capping" layer and a "single" damascene process is completed. When the layer is processed further for placement of additional channels over it, the layer is a via stop layer.

The via formation of the dual damascene process starts with the deposition of a thin via stop layer over the first channels and the first channel dielectric layer. The via stop layer is an etch stop layer which is subject to photolithographic processing and anisotropic etching steps to provide openings to the first channels. The photoresist is then stripped.

A via dielectric layer is formed on the via stop layer. Again, where the via dielectric layer is of an oxide material, such as silicon oxide, the via stop layer is a nitride, such as silicon nitride, so the two layers can be selectively etched. The via dielectric layer is then subject to further photolithographic process and etching steps to form the pattern of the vias.
The photoresist is then stripped.

A second channel dielectric layer is formed on the via dielectric layer. Again, where the second channel dielectric layer is of an oxide material, such as silicon oxide, the via stop layer is a nitride, such as silicon nitride, so the two layers can be selectively etched. The second channel dielectric layer is then subject to further photolithographic process and etching steps to simultaneously form second channel and via openings in the pattern of the second channels and the vias. The photoresist is then stripped.

An optional thin adhesion layer is deposited on the second channel dielectric layer and lines the second channel and the via openings.

A barrier layer is then deposited on the adhesion layer and lines the adhesion layer in the second channel openings and the vias.

Again, for conductor materials such as copper and copper alloys, a seed layer is deposited by electroless deposition on the barrier layer and lines the barrier layer in the second channel openings and the vias.

A second conductor material is deposited on the seed layer and fills the second channel openings and the vias.

A CMP process is then used to remove the second conductor material, the seed layer, and the barrier layer above the second channel dielectric layer to form the second channels. When a layer is placed over the second channels as a final layer, it is called a "capping" layer and the "dual" damascene process is completed. The layer may be processed further for placement of additional levels of channels and vias over it. Individual and multiple levels of single and dual damascene structures can be formed for single and multiple levels of channels and vias, which are collectively referred to as "interconnects".

The use of the single and dual damascene techniques eliminates metal etch and dielectric gap fill steps typically used in the metallization process.
The elimination of metal etch steps is important as the semiconductor industry moves from aluminum (Al) to other metallization materials, such as copper, which are very difficult to etch.

One of the major problems encountered during the CMP process is that, when the thick conductor material and the barrier layer are polished away, both the channels and dielectric layers are subject to "erosion", or undesirable CMP of the channel and dielectric materials, which makes it difficult to control the channel thickness.

Another major problem is that, during the same process, wide channels are subject to "dishing", or undesirable CMP of the conductor material, which also makes it difficult to control the channel thickness. Variable thickness channels are subject to increased resistance and shorter time to failure.

Solutions to these problems have been long sought but have long eluded those skilled in the art.

DISCLOSURE OF THE INVENTION

The present invention provides a method for manufacturing an integrated circuit having a semiconductor substrate with a semiconductor device. A device dielectric layer is formed on the semiconductor substrate and a channel dielectric layer with an opening is formed on the device dielectric layer. A barrier layer is deposited to line the channel opening and a seed layer is deposited over the barrier layer. The seed and barrier layers are removed above the channel dielectric layer and a second seed layer is deposited over the semiconductor substrate. A conductor layer is electroplated over the second seed layer to fill the opening. The electroplated conductor layer and the second seed layer are removed above the dielectric layer.
This results in erosion and dishing being eliminated, and uniform channels being produced without the drawbacks of increased resistance and shorter time to failure.

The above and additional advantages of the present invention will become apparent to those skilled in the art from a reading of the following detailed description when taken in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 (PRIOR ART) is a plan view of aligned channels with a connecting via;
FIG. 2 (PRIOR ART) is a cross-section of FIG. 1 (PRIOR ART) along line 2-2;
FIG. 3 (PRIOR ART) shows a step in the chemical-mechanical polishing process and depicts the channel erosion and dishing;
FIG. 4 shows a cross-section of a semiconductor wafer in a step in the chemical-mechanical polishing process in accordance with the present invention;
FIG. 5 is the structure of FIG. 4 after deposition of a second seed layer in accordance with the present invention;
FIG. 6 is the structure of FIG. 5 with an electroplated conductor layer in accordance with the present invention; and
FIG. 7 is the structure of FIG. 6 with a uniform channel thickness in accordance with the present invention.

BEST MODE FOR CARRYING OUT THE INVENTION

Referring now to FIG. 1 (PRIOR ART), therein is shown a plan view of a semiconductor wafer 100 with a silicon semiconductor substrate (not shown) having as interconnects first and second channels 102 and 104 connected by a via 106. The first and second channels 102 and 104 are respectively disposed in first and second channel dielectric layers 108 and 110. The via 106 is an integral part of the second channel 104 and is disposed in a via dielectric layer 112.

The term "horizontal" as used herein is defined as a plane parallel to the conventional plane or surface of a wafer, such as the semiconductor wafer 100, regardless of the orientation of the wafer. The term "vertical" refers to a direction perpendicular to the horizontal as just defined.
Terms, such as "on", "above", "below", "side" (as in "sidewall"), "higher", "lower", "over", and "under", are defined with respect to the horizontal plane.

Referring now to FIG. 2 (PRIOR ART), therein is shown a cross-section of FIG. 1 (PRIOR ART) along line 2-2. A portion of the first channel 102 is disposed in a first channel stop layer 114 and is on a device dielectric layer 116, which is on the silicon semiconductor substrate. Generally, metal contacts are formed in the device dielectric layer 116 to connect to an operative semiconductor device (not shown). This is represented by the contact of the first channel 102 with a semiconductor contact 118 embedded in the device dielectric layer 116. The various layers above the device dielectric layer 116 are sequentially: the first channel stop layer 114, the first channel dielectric layer 108, a via stop layer 120, the via dielectric layer 112, a second channel stop layer 122, the second channel dielectric layer 110, and a capping or next channel stop layer 124 (not shown in FIG. 1).

The first channel 102 includes a barrier layer 126, which could optionally be a combined adhesion and barrier layer, and a seed layer 128 around a conductor core 130. The second channel 104 and the via 106 include a barrier layer 132, which could also optionally be a combined adhesion and barrier layer, and a seed layer 134 around a conductor core 136. The barrier layers 126 and 132 are used to prevent diffusion of the conductor materials into the adjacent areas of the semiconductor device. The seed layers 128 and 134 form electrodes on which the conductor material of the conductor cores 130 and 136 is deposited.
The seed layers 128 and 134 are of substantially the same conductor material as the conductor cores 130 and 136 and become part of the respective conductor cores 130 and 136 after the deposition.

In the past, for copper conductor material and seed layers, highly resistive diffusion barrier materials such as tantalum nitride (TaN), titanium nitride (TiN), or tungsten nitride (WN) are used as barrier materials to prevent diffusion.

Referring now to FIG. 3 (PRIOR ART), therein is shown a step in the CMP process in which a first channel surface of the semiconductor wafer 100 is planarized. Therein is thus shown the planarization of the first channel 102, other channels 140 through 143, and the first channel dielectric layer 108 with a conventional CMP slurry containing abrasive particles. There are a number of different slurries known which consist of sized abrasive particles carried by a CMP solution.

Without tight process controls, the CMP will remove the conductor material, such as copper, and the barrier material, such as tantalum nitride, as well as the dielectric material, such as silicon oxide, and cause erosion "E". The erosion "E" is the formation of a concave depression in the other channels 140 through 142 and the first channel dielectric layer 108. Dishing "D" is the formation of concave depressions in the wider or longer channel 143 and the first channel 102, which is also due to the low chemical selectivity. Both erosion and dishing can dramatically change the thickness of the channels and reduce their current-carrying capability.

Referring now to FIG. 4, therein is shown a semiconductor wafer in an intermediate stage of manufacture in accordance with the present invention. A device dielectric layer 216 has been deposited as part of a semiconductor wafer 200. A first channel dielectric layer 208 has been deposited, patterned, developed, and etched to form channel openings 230 through 234.
The device dielectric layer 216 and the first channel dielectric layer 208 have been lined with a barrier layer 226 and a seed layer 228. As indicated by dotted lines in FIG. 4, the portions of the barrier layer 226 and the seed layer 228 above the first channel dielectric layer 208 have been removed by a chemical-mechanical polishing process. An abrasiveless chemical is used for the chemical-mechanical polishing process in order to prevent abrasives from being left in the channel openings 230 through 234.

Referring now to FIG. 5, therein is shown the semiconductor wafer 200 after a deposition of a second seed layer 235 in accordance with the present invention.

Referring now to FIG. 6, therein is shown the semiconductor wafer 200 with an electroplated conductor layer 238. The second seed layer 235 acts as the electrode for the plating of the electroplated conductor layer 238 in an electroplating process.

Referring now to FIG. 7, therein is shown the structure of FIG. 6 after CMP of the electroplated conductor layer 238 and the second seed layer 235 to be co-planar with the first channel dielectric layer 208 to form the interconnect conductor channels 240 through 243 and the first channel 202. Since the second seed layer 235 adheres poorly to the first channel dielectric layer 208 as compared to its adhesion to the barrier layer 226, the electroplated conductor layer 238 and the second seed layer 235 are easily removed by the conductor CMP without removing the first channel dielectric layer 208. This process results in less over-polishing with less dishing and erosion, which leads to the channels having uniform thicknesses "T".

In various embodiments, the diffusion barrier layers are of materials such as tantalum (Ta), titanium (Ti), tungsten (W), alloys thereof, and compounds thereof. The seed layers (where used) are of materials such as copper (Cu), gold (Au), silver (Ag), alloys thereof, and compounds thereof with one or more of the above elements.
The conductor cores with or without seed layers are of conductor materials such as copper, aluminum (Al), gold, silver, alloys thereof, and compounds thereof. The dielectric layers are of dielectric materials such as silicon oxide (SiOx), tetraethoxysilane (TEOS), borophosphosilicate glass (BPSG), etc. with dielectric constants from 4.2 to 3.9, or low dielectric constant dielectric materials such as fluorinated tetraethoxysilane (FTEOS), hydrogen silsesquioxane (HSQ), benzocyclobutene (BCB), TMOS (tetramethoxysilane), OMCTS (octamethylcyclotetrasiloxane), HMDS (hexamethyldisiloxane), SOB (trimethylsilyl borate), DADBS (diacetoxyditertiarybutoxysilane), SOP (trimethylsilyl phosphate), etc. with dielectric constants below 3.9. The stop layers and capping layers (where used) are of materials such as silicon nitride (SixNx) or silicon oxynitride (SiON).

While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the included claims. All matters hithertofore set forth or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.
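The materials paragraph above divides the dielectrics into conventional materials (dielectric constants from 4.2 down to 3.9) and low dielectric constant materials (below 3.9). As a worked example of that classification, the sketch below applies the 3.9 threshold stated in the text; the specific k values assigned to each entry are illustrative placements within the stated ranges, not measured data from this document.

```python
# Classify dielectric materials by dielectric constant k using the
# 3.9 threshold stated in the text. The k values below are illustrative
# placements within the stated ranges, not measured data.
LOW_K_THRESHOLD = 3.9

materials = {
    "silicon oxide (SiOx)": 4.2,  # conventional, per the 4.2-3.9 range
    "TEOS": 4.0,                  # conventional
    "FTEOS": 3.5,                 # listed as low-k (below 3.9)
    "HSQ": 3.0,                   # listed as low-k
    "BCB": 2.7,                   # listed as low-k
}

def is_low_k(k: float) -> bool:
    """Low dielectric constant means k below 3.9."""
    return k < LOW_K_THRESHOLD

low_k = sorted(name for name, k in materials.items() if is_low_k(k))
print(low_k)  # -> ['BCB', 'FTEOS', 'HSQ']
```

The boundary value 3.9 itself is classified as conventional here, consistent with the text's "below 3.9" wording for the low-k group.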
A memory device wherein a diode is serially connected to a programmable resistor and is in electrical communication with a buried digit line. An electrically conductive plug is electrically interposed between the digit line and a strapping layer, thereby creating a double metal scheme wherein the strapping layer is a second metal layer overlying metal wordlines. In a method of a first embodiment, the strapping material is electrically connected to the digit line through a planar landing pad overlying the conductive plug. An insulative material is sloped to the planar landing pad in order to provide a surface conducive to the formation of the strapping material. In a method of a second embodiment, diodes are formed, each having a maximum width equal to f, which is equal to the minimum photolithographic limit of the photolithographic equipment being used, and distanced one from the other along a length of the digit line by a maximum distance equal to f; at least portions of the diodes are masked; at least a portion of an insulative material interposed between two diodes is removed to expose the buried digit line; and the conductive plug is formed in contact with the exposed portion of the buried digit line. After the formation of a programmable resistor in series with the diode, a wordline is formed in electrical communication with each of the programmable resistors, and an insulative layer is formed overlying each wordline. Next an insulative spacer layer is deposited and etched to expose the conductive plug. The strapping layer is then formed overlying and in contact with the conductive plug.
What is claimed is: 1. A method of fabricating a memory device comprising the acts of:(a) forming a first digit line on a substrate; (b) forming a plurality of pairs of memory cells in electrical communication with the first digit line; (c) forming a plurality of contacts in electrical communication with the first digit line, a respective one of the plurality of contacts being formed between each pair of memory cells; and (d) forming a strapping line in electrical communication with each of the plurality of contacts. 2. The method, as set forth in claim 1, wherein act (a) comprises the act of:forming a titanium silicide layer over the first digit line. 3. The method, as set forth in claim 1, wherein act (b) comprises the acts of:for each of the memory cells, forming an access device having a first terminal and a second terminal, the first terminal being in electrical communication with the first digit line; and for each of the memory cells, forming a memory element in electrical communication with the second terminal of the access device. 4. The method, as set forth in claim 1, wherein act (b) comprises the act of:forming each of the memory cells to have a width approximately equal to a minimum photolithographic limit. 5. The method, as set forth in claim 1, wherein act (c) comprises the act of:forming each contact from a doped semiconductive region of the substrate. 6. The method, as set forth in claim 1, wherein act (c) comprises the acts of:forming dielectric spacers between each pair of memory cells; and forming each contact between the respective dielectric spacers. 7. The method, as set forth in claim 1, wherein act (d) comprises the act of:isolating each of the plurality of memory cells from the plurality of contacts. 8. The method, as set forth in claim 1, comprising the act of:forming a plurality of second digit lines, each of the plurality of second digit lines being in electrical communication with a respective one of the plurality of memory cells. 9.
The method, as set forth in claim 3, wherein the act of forming an access device comprises the act of:forming a diode. 10. The method, as set forth in claim 3, wherein the act of forming a memory element comprises the act of:forming a chalcogenide memory element. 11. The method, as set forth in claim 4, wherein act (b) comprises the act of:forming each pair of memory cells to be spaced apart by a distance approximately equal to the minimum photolithographic limit. 12. The method, as set forth in claim 6, wherein each contact and its respective dielectric spacers have a combined width approximately equal to a minimum photolithographic limit.13. The method, as set forth in claim 7, wherein the act of isolating comprises the act of:disposing dielectric material on each of the plurality of memory cells. 14. The method, as set forth in claim 13, wherein act (d) comprises the act of:forming the strapping line through tapered holes extending through the dielectric material to the contacts.
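Claims 4, 11, and 12 size the memory cells and contacts in terms of the minimum photolithographic limit, which the detailed description calls f. The back-of-the-envelope arithmetic below is our own illustration, not claim language: it assumes each cell is f wide (claim 4), the contact plus its dielectric spacers between a cell pair spans f (claim 12), and adjacent pairs are separated by f (claim 11).

```python
# Illustrative layout arithmetic based on claims 4, 11, and 12.
# Assumptions (our own, for illustration): each cell is f wide, the
# contact-plus-spacers region within a pair is f wide, and adjacent
# pairs are separated by f.
def digit_line_length(n_pairs: int, f: float) -> float:
    """Length along the digit line occupied by n_pairs cell pairs.

    Each pair spans cell + contact + cell = 3f, and consecutive
    pairs are separated by a further f.
    """
    if n_pairs < 1:
        return 0.0
    return n_pairs * 3 * f + (n_pairs - 1) * f

# Example: 4 pairs at a hypothetical 0.25 um photolithographic limit:
# (4 * 3 + 3) * 0.25 = 3.75 um along the digit line.
print(digit_line_length(4, 0.25))  # -> 3.75
```

The point of the exercise is the one made in the summary: the f-wide contact region sits inside what would otherwise be isolation space, so strapping the digit line costs no extra pitch.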
This application is a divisional of application Ser. No. 08/604,751, filed Feb. 23, 1996.

The present invention relates generally to semiconductor devices, and more particularly to methods for forming digit lines of improved conductivity, such methods having particular usefulness in the fabrication of memory devices, and particularly memory devices having programmable elements accessible by a diode.

BACKGROUND OF THE INVENTION

Diode arrays are well known memory storage arrays used in semiconductor memory devices. A selected diode is typically addressed via digit line and word line selection. A resistance of a programmable resistor in series with the selected diode is controlled to select a desired memory state. In one case the programmable resistor may be an ovonic element, such as a chalcogenide material. The internal structure of the chalcogenide is modified to alter its resistance and therefore its "logic" state. The modification of the structure is ovonic and is dependent on the current which is applied to the element through the diode. It is desirable to reduce stray resistance which may be in series with the diode, since by reducing the stray resistance the ovonics can be more closely controlled with less current, thereby reducing power requirements.

SUMMARY OF THE INVENTION

The invention includes a method for forming a semiconductor device wherein a conductive element within the substrate is strapped by another conductive layer above. In one currently envisioned embodiment, another conductive layer will be interposed between the substrate and the strapping layer. In one exemplary preferred implementation, the semiconductor device will be a memory device comprising a diode serially connected to a programmable resistor. The diode is in electrical communication with a buried digit line.
An electrically conductive plug is electrically interposed between the digit line and a strapping layer, thereby creating a double metal structure wherein the strapping layer is a second metal layer overlying metal wordlines.

In a method of a first embodiment, the strapping material is electrically connected to the digit line through a planar landing pad overlying the conductive plug. An insulative material is sloped to the planar landing pad in order to provide a surface conducive to the formation of the strapping material. Typically a layer of titanium silicide is formed on the buried digit line.

In an exemplary method of forming a second embodiment in accordance with the present invention, diodes are formed, each having a maximum width equal to f, which may be equal to the minimum photolithographic limit of the photolithographic equipment being used, and distanced one from the other along a length of the digit line by a maximum distance equal to f; at least portions of the diodes are masked; at least a portion of an insulative material interposed between two diodes is removed to expose the buried digit line; and the conductive plug is formed in contact with the exposed portion of the buried digit line. After the formation of a programmable resistor in series with the diode, a wordline is formed in electrical communication with each of the programmable resistors, and an insulative layer is formed overlying each wordline. Next an insulative spacer layer is deposited and etched to expose the conductive plug. The strapping layer is then formed overlying and in contact with the conductive plug.

In the second embodiment the width of the diode is equal to f and the electrically conductive plug is formed within a distance f from a sidewall of the diode. An electrically insulative spacer is interposed between the plug and the sidewall of the diode.
In this embodiment the diode and the plug are made of polycrystalline silicon, although it is possible to use any conceivable diode structure, for example a metal/semiconductor diode. In the second embodiment the cathode of the diode is fabricated in the substrate and the anode is fabricated overlying the substrate, or vice versa. In the typical memory array of the invention the programmable resistor is ovonic and the array is a mesa type structure. The diodes are either planar or container structures. The invention provides redundancy since the digit line is a buried component and the strapping layer is an upper component. Thus, even if the metal of the strapping layer breaks, operation of the memory device is maintained through the buried digit line. Thus the device has better electromigration reliability, and there is no memory disturbance from cell to cell due to the collection of current in the digit line. There is a savings of space when using the structure of the second embodiment, since the area between cells is no longer just isolation space but is used instead for contact to the buried digit line, thereby providing efficient spacing of the cell for high compaction while at the same time providing good cell to cell isolation. By using the double metal scheme of the invention the series resistance to the diode/programmable resistor structure is reduced. This resistance is decreased even further by providing a strapped conductive plug for every two diodes of the array, physically interposed therebetween. By using titanium silicide on the buried digit line in conjunction with the strapped metal layer the best packing density is achieved with minimal processing steps. In addition the titanium silicide is used to minimize the number of connections needed to connect the strapping material and buried digit line.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a cross sectional view of a substrate in which digit lines have been formed.
The cross sectional view of FIG. 1A is taken through the width of the digit lines.
FIG. 1B is a cross sectional view taken through the length of one of the digit lines shown in FIG. 1A.
FIGS. 2A and 2B are the cross sectional view shown in FIG. 1B following depositions of silicon dioxide and polycrystalline silicon.
FIGS. 3A and 3B are the cross sectional views shown in FIGS. 2A and 2B, respectively, following a CMP.
FIG. 4 is the cross sectional view of FIG. 3A following a doping of the polycrystalline silicon.
FIG. 5 is the cross sectional view of FIG. 4 following the formation of a contact plug.
FIG. 6 is a cross sectional view of FIG. 5 following the formation of programmable resistors, word lines and a landing pad.
FIG. 7 is a cross sectional view of FIG. 6 following the formation and etch of an oxide layer.
FIG. 8 is a cross sectional view of FIG. 7 following the formation of a strapping layer.
FIG. 9A is a cross sectional view of a substrate in which digit lines have been formed. The cross sectional view of FIG. 9A is taken through the width of the digit lines.
FIG. 9B is a cross sectional view taken through the length of one of the digit lines shown in FIG. 9A.
FIG. 10A is a cross sectional view of the substrate of FIG. 9B following the deposition, planarization and masking of an oxide layer.
FIG. 10B is a top planar view of FIG. 10A.
FIGS. 11A and 11B are the cross sectional views of FIGS. 9A and 9B, respectively, following the formation of polycrystalline silicon regions in the oxide layer of FIGS. 10A and 10B.
FIG. 12A is the cross sectional view of FIG. 11B following the masking of the polycrystalline silicon regions and the oxide layer and following the etching of the oxide layer in unmasked regions.
FIG. 12B is a top planar view of FIG. 12A.
FIG. 13 is a cross sectional view of FIG. 12A following removal of a masking layer and deposition of a spacer layer.
FIG. 14 is the cross sectional view of FIG. 13 following the etching of the spacer layer to form spacers adjacent to sidewalls of the polycrystalline silicon regions.
FIG. 15 is the cross sectional view of FIG. 14 following a deposition of polycrystalline silicon.
FIG. 16 is the cross sectional view of FIG. 15 following a CMP.
FIG. 17A is the cross sectional view of FIG. 16 following the formation of ovonic devices.
FIG. 17B is the cross sectional view of FIG. 16 following the formation of ovonic devices in a recess of a nitride layer.
FIGS. 18A and 18B are the cross sectional views of FIGS. 17A and 17B, respectively, following the deposition of a conductive layer and an oxide layer and the masking thereof.
FIGS. 19A and 19B are the cross sectional views of FIGS. 18A and 18B, respectively, following removal of exposed portions of the conductive layer and the oxide layer and the mask of FIGS. 18A and 18B.
FIGS. 20A and 20B are the cross sectional views of FIGS. 19A and 19B, respectively, following the deposition of an oxide layer.
FIGS. 21A and 21B are the cross sectional views of FIGS. 20A and 20B, respectively, following etching of the oxide layer of FIGS. 20A and 20B and the deposition of a strapping layer.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The invention is described in reference to the exemplary embodiment of a memory device comprising a diode serially connected to a programmable resistor. The diode is in electrical communication with a buried digit line. An electrically conductive plug is electrically interposed between the digit line and a strapping layer, thereby creating a "double metal" structure wherein the strapping layer is a second metal layer overlying metal wordlines. In an exemplary memory array the programmable resistor includes an ovonic element and the array is a mesa type structure. Alternately, in a read only memory structure, the programmable resistor may be an anti-fuse device.
The diode is either a planar or a container structure, and is used as a setting device to control current to the programmable resistor. The method of the first embodiment is discussed relative to FIGS. 1A-8. FIG. 1A depicts a p-type substrate 5 which was masked with a pattern which defined active areas. Field oxide 6 was then grown to electrically isolate the active areas, and the mask was removed. The exposed portions of the substrate 5 are implanted at a dose of 1E13-9E13 atoms/cm2 with a dopant such as phosphorus at an energy of 100-150 KeV to create n- regions 7 in the active area of the substrate. Titanium is deposited and an RTP sinter is performed during which the titanium reacts with the exposed portions of the silicon substrate to form titanium silicide 8. Unreacted portions of the titanium are removed with a low temperature piranha etch. The titanium silicide regions 8 and the n- region of the substrate 7 form the buried digit lines 10. One of the digit lines 10 is shown in longitudinal cross-section in FIG. 1B, while FIG. 1A depicts the entire column pattern in vertical cross-section lateral to the digit lines 10. The titanium silicide 8 remaining following the piranha etch is masked (mask not shown) to protect titanium silicide 8 in future contact areas during an etch which removes the titanium silicide 8 in unmasked regions. The mask is then removed (see FIG. 1B). FIGS. 2A and 2B depict the cross sectional view shown in FIG. 1B following further process steps. In FIG. 2A a relatively thick layer of silicon dioxide 15 is deposited to overlie the buried digit lines 10 and the field oxide 6, which is not shown in this cross section. The silicon dioxide 15 is masked with a contact pattern, not shown, defining polycrystalline silicon plugs and etched to create openings in which the polycrystalline silicon plugs may be formed. The openings expose the digit lines 10 in contact regions.
After removal of the mask a layer of polycrystalline silicon 20 is deposited to fill the openings. The polycrystalline silicon 20 is doped. The dopant is selected from materials having n- impurities such as phosphorus, antimony, and arsenic. The dopant may be implanted at 35-150 KeV and a dose of 3E13-1E14 atoms/cm2. The polycrystalline silicon 20 may be deposited in situ and doped between 1E16 and 1E18 atoms/cc, or doped after the polycrystalline silicon 20 is deposited to the same dopant level. In an alternate embodiment shown in FIG. 2B the silicon dioxide 15 is masked and etched as in FIG. 2A. Following the etch of the silicon dioxide 15 the substrate is implanted with a dopant selected from materials having p- impurities, such as boron, gallium, and BF2, to form p- regions 17. The dopants have energies ranging from 50-100 KeV and dosages of 1E13-1E14 atoms/cm2. The polycrystalline silicon 20 is then deposited to fill the openings. In this embodiment the polycrystalline silicon 20 is implanted or in situ doped with a dopant selected from materials having p+ impurities, such as boron, gallium, and BF2, to create a p+ polycrystalline silicon 20. The dopants have energies ranging from 35-50 KeV and dosages of 1E15 to 5E15 atoms/cm2. FIGS. 3A and 3B are the cross sectional views shown in FIGS. 2A and 2B, respectively, following further process steps. In FIGS. 3A and 3B the polycrystalline silicon layer 20 of FIGS. 2A and 2B, respectively, has been planarized, such as through chemical mechanical planarization (CMP), to remove portions of the polycrystalline silicon 20 overlying the silicon dioxide 15, while at the same time retaining the polycrystalline silicon 20 in the openings. The CMP is selective to the silicon dioxide 15. Thus the CMP action stops when the silicon dioxide 15 is exposed. In FIG. 3B the p+ polycrystalline silicon 20 and the p- region 17 together form a diode 30 with the digit line 10. FIG. 4 is the cross sectional view of FIG.
3A following further process steps. In FIG. 4 a p+ implant and an activation cycle, which includes a rapid thermal process (RTP) cycle and a hydrogen cycle, has been performed to create a p+ region 25 at an upper portion of the polycrystalline silicon 20 of FIG. 3A. During the implant typical p-type dopants, such as boron, gallium, and BF2, are implanted at an energy of 35-50 KeV and at a dosage of 1E15 to 5E15 atoms/cm2. The lower portion of the polycrystalline silicon remains n-, thereby forming a diode 30 vertical to the buried digit line 10. For simplicity the remaining steps of this embodiment will pertain to diode 30 of FIG. 4, although the same steps would be applicable if the diode of FIG. 3B were used instead. Next the silicon dioxide 15 and diodes 30 are masked (mask not shown) to pattern a contact to the digit line 10. The silicon dioxide 15 is etched to form openings (not shown) to expose the digit lines 10, and the resist used for masking is removed. FIG. 5 depicts the cross sectional view of FIG. 4 following further process steps. In FIG. 5 a thin layer of titanium and titanium nitride 35 is deposited along the sidewalls of the openings and overlying the digit lines 10. Tungsten 40 is deposited to fill the opening and to overlie the titanium. The titanium and titanium nitride 35 and tungsten 40 are chemically mechanically planarized to expose the silicon dioxide 15 and form a contact plug 45. FIG. 6 depicts the cross sectional view of FIG. 5 following further process steps. In FIG. 6 at least one layer has been deposited, masked, and etched to form programmable elements 50 (such as ovonic elements or antifuse elements) overlying each diode 30. In the case where an ovonic device is formed, several deposition, mask, and etch steps may be utilized to layer titanium tungsten, carbon, a first nitride layer, chalcogenide, and a second nitride layer.
Various methods can be used when forming the ovonic device. A first metal layer or stack of approximately 5000 Angstroms is then deposited to overlie the silicon dioxide 15, programmable resistors 50 and the contact plug 45. The metal layer is then patterned with a mask, not shown, and etched to form wordlines 60 in contact with the programmable resistors 50 and a planar landing pad 65 overlying the contact plug 45. The mask is then removed. FIG. 7 is a cross sectional view of FIG. 6 following further process steps. In FIG. 7 an interlevel dielectric oxide layer 70 is deposited, chemically mechanically planarized to create a planar surface, patterned, etched with a wet oxide 7:1 hydrofluoric dip for 15 seconds, and dry etched to expose the landing pad 65. The etch of the invention creates an opening 75 in the oxide 70 having a sloped sidewall 80. The direction of the slope is such that the upper portion of the opening has a larger perimeter than that of the lower portion. FIG. 8 is a cross sectional view of FIG. 7 following further process steps. In FIG. 8 a second metal layer or stack, which is well known to those skilled in the art, is deposited to overlie the oxide 70 and the landing pad 65. The sloped sidewalls 80 are conducive to good step coverage during the deposit of the second metal layer. Substantially vertical sidewalls 80 may be employed for tighter geometries. The second metal layer is patterned with a mask and etched to define and form a strapping layer 85. The mask is then removed. Although this cross section shows one strapping layer 85 in electrical communication with one landing pad 65 through one contact plug 45, it should be noted that a plurality of contact plugs 45 and landing pads 65 may be in electrical communication with the digit line 10 and the strapping layer 85 at a plurality of points to further reduce the resistance in series with the diodes 30.
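The benefit of strapping the buried digit line at multiple points can be seen with a simple parallel-resistance estimate. The sketch below uses hypothetical resistance values (not taken from this disclosure) to illustrate why a low-resistance metal strap tied to the buried line sharply reduces the series resistance seen by the diodes:

```python
# Illustrative estimate of how a strapping layer reduces digit line series
# resistance. All resistance values below are hypothetical examples.

def parallel(r1: float, r2: float) -> float:
    """Effective resistance of two conductors tied together at both ends."""
    return (r1 * r2) / (r1 + r2)

# Hypothetical per-segment resistances (ohms) between two strap contacts.
buried_line = 200.0   # diffused n-/silicide buried digit line segment
strap_metal = 2.0     # second-level metal strapping layer segment

effective = parallel(buried_line, strap_metal)
print(f"buried line alone: {buried_line:.1f} ohm")
print(f"with metal strap:  {effective:.2f} ohm")
```

Strapping at more points shortens each segment, so the effective series resistance approaches that of the metal strap alone, which is why additional contact plugs and landing pads further reduce the resistance in series with the diodes.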
In addition, it should also be remembered that there are a plurality of digit lines formed along other cross sections. In a method of a second embodiment diodes are formed, each having a maximum width equal to f, which is equal to the minimum photolithographic limit of the photolithographic equipment being used, and distanced one from the other along a length of the digit line by a maximum distance equal to f; at least portions of the diodes are masked; at least a portion of an insulative material interposed between two diodes is removed to expose the buried digit line; and the conductive plug is formed in contact with the exposed portion of the buried digit line. After the formation of a programmable resistor in series with the diode a wordline is formed in electrical communication with each of the programmable resistors, and an insulative layer is formed overlying each wordline. Next an insulative spacer layer is deposited and etched to expose the conductive plug. The strapping layer is then formed self-aligned to the conductive plug. In this embodiment the diode and the plug are made of polycrystalline silicon, although it is possible that any conceivable diode structure may be used. In the second embodiment the P portion of the diode is fabricated in the substrate and the N portion is fabricated overlying the substrate. In an enhancement of the second embodiment a buried digit line is strapped at each memory cell to reduce the series resistance, thereby creating greater drive. The self alignment feature of the invention facilitates a denser array. The second embodiment of the invention is depicted in FIGS. 9A-21B. In FIG. 9A p- digit lines 100 have been formed in an n- substrate 105 according to methods known in the art. The present embodiment is shown with LOCOS isolation having field oxide regions 110, but is adapted to trench isolation and modified LOCOS. FIG.
9B is a longitudinal cross section through the length of one of the digit lines 100 shown in lateral cross-section in FIG. 9A. FIG. 10A is a cross sectional view of the substrate of FIG. 9B following further process steps. In FIG. 10A a conformal silicon dioxide layer 115 is deposited and planarized, preferably with CMP. The depth of the silicon dioxide layer 115 is selected to be greater than the desired height of electrical contact plugs to digit lines 100. The silicon dioxide layer 115 is patterned with a photoresist mask 120 to define the electrical contact plugs. Openings are etched in the exposed portions of the silicon dioxide layer 115 to expose the digit lines 100. By using the method of the invention it is possible to have the minimum width of both the masked and unmasked regions along the length of the digit line equal to f. Thus the method of the invention allows the fabrication of a dense memory array. FIG. 10B is a top planar view of the device of FIG. 10A. Since the digit lines underlie the photoresist mask 120 and silicon dioxide layer 115 they are outlined by dashed lines which also define active areas. The field oxide region underlies the silicon dioxide and lies between two digit lines. FIGS. 11A and 11B depict the cross sectional views of FIGS. 9A and 9B, respectively, following further process steps. In FIGS. 11A and 11B the openings have been filled with N+ poly using standard fill techniques. The N+ poly is planarized, preferably using CMP. The N+ poly forms contact plugs 125 to the digit lines 100, and the positive N+ electrode 130 of the diode is formed from out diffusion of the N type dopant from the N+ poly, thereby avoiding leakage current because the diode behaves as a single crystal diode. FIG. 12A depicts the cross sectional view of FIG. 11B following further process steps, and FIG. 12B is a top planar view of the device of FIG. 12A. In FIGS. 12A and 12B the contact plugs 125 and silicon dioxide 115 shown in FIGS.
11A and 11B are patterned with a mask 135, and the silicon dioxide 115 is etched in unmasked areas to form openings 140 to expose the digit lines 100 in the unmasked areas. The mask 135 may be misaligned with the contact plugs 125 since the method creates self aligned openings between the contact plugs 125. In one embodiment each opening eventually allows the strapping layer to be in electrical contact to the digit line 100 at each memory cell, thereby decreasing series resistance to allow for a higher programming current to adequately set the logic state of a chalcogenide material in an ovonic device which will be fabricated overlying each of the contact plugs 125. However, the masking may be more selective in order to form fewer openings 140. In FIG. 12B, as in FIG. 10B, the digit lines have been outlined with dashed lines. In addition portions of contact plugs 125 underlying mask 135 are shown with dotted lines. FIG. 13 is a cross sectional view of FIG. 12A following further process steps. In FIG. 13 the mask 135 has been removed and an oxide spacer layer 145 deposited. FIG. 14 is a cross sectional view of FIG. 13 following further process steps. In FIG. 14 the oxide spacer layer has been anisotropically dry etched to form spacers 150 on the sidewalls of the contact plug 125. A P+ region 155 is formed in the exposed portion of the digit line 100 during a shallow P+ implant, using a dopant such as BF2 at an energy equal to 25-75 KeV and a dosage equal to 5E14-5E15 atoms/cm2, to lower the resistance of a future metal interconnect. During the implant it is necessary to protect the n+ contact plug 125 with some form of mask (not shown) such as a hard mask. FIG. 15 is a cross sectional view of FIG. 14 following further processing steps. In FIG. 15 a layer of polycrystalline silicon 165 is deposited. FIG. 16 is a cross sectional view of FIG. 15 following further processing steps. In FIG.
16 the contact plugs 125, spacers 150, and polycrystalline silicon 165 are CMPed to create a planar surface and to eliminate portions of spacer 150 having non-uniform thicknesses. The spacers 150 following the CMP process provide greater isolation properties than did the spacers existing before CMP. The polycrystalline silicon layer 165 forms a planar landing pad 170 following the CMP. A digit line strapping layer may be fabricated overlying the landing pad 170 as is explained below. The polycrystalline silicon 165 is doped P+ using a P+ implant subsequent to the planarization step. In one alternate embodiment which is shown in FIG. 17A the contact plugs 125 are fabricated to be larger than the photolithographic limit. FIG. 17A is similar to FIG. 16 except that the contact plugs 125 are larger and further processing steps have been performed. An ovonic device 175 is fabricated overlying each of the contact plugs 125 according to a method of layer fill and etching back according to a pattern (not shown) defining the ovonic device 175. The width of the ovonic device may be as small as the photolithographic limit, thereby allowing more access to the landing pad 170. In this embodiment the ovonic device consists of the following layers: tungsten 176, a lower TiN or TiCxNy layer 177, a nitride layer 182, a chalcogenide layer 178, and an upper TiN layer 179. A pore opening 183 is created in the nitride layer 182 and the chalcogenide layer 178 fills the pore opening 183. In this method the chalcogenide material is applied using conventional thin film deposition methods and the other materials of the ovonic devices 175 are formed with various methods of layering and etching. Typical chalcogenide compositions for these memory cells include average concentrations of Te in the amorphous state well below 70%, typically below about 60% and ranging in general from as low as about 23% up to about 56% Te, and most preferably about 48% to 56% Te.
Concentrations of Ge are typically above about 15% and range from a low of about 17% to about 44% average, remaining generally below 50% Ge, with the remainder of the principal constituent elements in this class being Sb. The percentages given are atomic percentages which total 100% of the atoms of the constituent elements. In a particularly preferred embodiment, the chalcogenide compositions for these memory cells comprise a Te concentration of about 55%, a Ge concentration of about 22%, and a Sb concentration of about 22%. This class of materials is typically characterized as TeaGebSb100-(a+b) (the subscripts a, b, and 100-(a+b) being atomic percentages), where a is equal to or less than about 70% and preferably between about 40% to about 60%, b is above about 15% and less than 50%, preferably between about 17% to about 44%, and the remainder is Sb. An electrically insulative nitride layer 180 is deposited overlying the ovonic device 175. The nitride layer is patterned in order to expose at least a portion of the upper surface 181 of the ovonic device 175. FIG. 17B is a cross sectional view of FIG. 16 following further processing steps. An ovonic device 190 is fabricated by a second method. When using the second method it is necessary to deposit a nitride layer 185, or a combination silicon dioxide layer with an overlying nitride layer, instead of the silicon dioxide layer 115. Openings (not shown) are then etched partially into the nitride layer 185 or the nitride of the nitride-silicon dioxide combination layer. Recessed ovonic devices 190 are then fabricated in the openings overlying the contact plugs 125. The fabrication comprises a layering, which includes deposition fill and etching back, of the following materials in the sequential order in which they are written: tungsten 191, a lower TiCxNy layer 192, a chalcogenide layer 193, and an upper TiCxNy layer 194. By using this method the chalcogenide material fills the hole without patterning. Next wordlines are created. FIGS. 18A and 18B are the cross sectional views of FIGS.
17A and 17B, respectively, following the formation of a conformal conductive layer 200 in electrical contact with the ovonic devices 175 and 190, respectively. Typically the conductive layer 200 is a deposit of aluminum, copper, gold, silver, or refractory metals. An oxide layer 205 is then formed overlying the conductive layer 200. The wordlines are patterned with a mask 210 overlying the oxide layer 205, and exposed portions of the oxide layer 205 are removed during a first etch, and then exposed portions of the conductive layer 200 are removed during a second etch. The portions of the conductive layer 200 remaining subsequent to the etch form the word lines 215, see FIGS. 19A and 19B, respectively. The mask is then removed, and a conformal oxide layer 220 is deposited, see FIGS. 20A and 20B, respectively. In FIGS. 21A and 21B an oxide spacer 225 is formed to electrically insulate the wordlines 215 from a future strapping layer. The spacer 225 is formed by anisotropic etching of the oxide layer 220. The etch of the oxide layer 220 exposes the landing pads 170 in FIG. 21B. In FIG. 21A the nitride layer 180 is etched in addition to the oxide layer 220 to expose the landing pads 170. Further shown in FIGS. 21A and 21B is the strapping layer 230, typically aluminum, copper, or other conductive material, deposited in contact with the landing pad 170. The strapping layer is in electrical communication with the digit line 100 through the landing pad 170. Typically the strapping layer 230 is patterned to define desired interconnects and then etched according to the pattern. The photoresist (not shown) used for patterning is then removed and the metal is alloyed. The invention provides redundancy since the digit line is a buried component and the strapping layer is an upper component. Thus, even if the metal of the strapping layer breaks, operation of the memory device is maintained through the buried digit line.
Thus the device has better electromigration reliability, and there is no memory disturbance from cell to cell due to the collection of current in the digit line. There is a savings of space when using the structure of the second embodiment, since the area between cells is no longer just isolation space but is used instead for contact to the buried digit line, thereby providing efficient spacing of the cell for high compaction while at the same time providing good cell to cell isolation. By using the double metal scheme of the invention the series resistance to the diode/programmable resistor structure is reduced. This resistance is decreased even further by providing a strapped conductive plug for every two diodes of the array, physically interposed therebetween. By using titanium silicide on the buried digit line in conjunction with the strapped metal layer the best packing density is achieved with minimal processing steps. It should be noted that opposite doping may be used throughout the described embodiments without departing from the scope of the invention. While the invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
To provide a giant magnetoresistive (GMR) element for use in a magnetic multi-turn sensor.

SOLUTION: In a giant magnetoresistive (GMR) element, the free layer, that is, the layer that changes its magnetization direction in response to an external magnetic field to produce a resistance change, is made thick enough to provide good shape anisotropy without exhibiting the AMR effect. To achieve this, at least a portion of the free layer includes a plurality of layers of at least two different materials, specifically a plurality of layers of at least a first material that is ferromagnetic, and a plurality of layers of at least a second material that is known not to exhibit an AMR effect and does not interfere with the GMR effect of the layers of ferromagnetic material.

SELECTED DRAWING: Figure 2
A giant magnetoresistive (GMR) element for a magnetic multi-turn sensor, comprising: a reference layer; a non-magnetic layer adjacent to the reference layer; and a free layer of a ferromagnetic material, the free layer comprising a multi-layered arrangement including at least a first layer of a ferromagnetic material adjacent to the non-magnetic layer, a plurality of layers made of a first material, and a plurality of layers made of a second material, wherein the first material is ferromagnetic.
The giant magnetoresistive element according to claim 1, wherein the second material is a material having an anisotropic magnetoresistive (AMR) effect that is negligible or substantially negligible.
The giant magnetoresistive element according to claim 1, wherein the plurality of layers made of the first material and the plurality of layers made of the second material are arranged in an alternating configuration.
The giant magnetoresistive element according to claim 1, wherein the first material is one of NiFe and CoFe.
The giant magnetoresistive element according to claim 1, wherein the second material is one of CoFeB, CoZrTa, CoZrTaB, CoZrNb, and CoZrO.
The giant magnetoresistive element according to claim 1, wherein the thickness and/or composition of the first material and the second material is such that the free layer exhibits no magnetostriction.
The giant magnetoresistive element according to claim 1, wherein each of the plurality of layers of the first material and the plurality of layers of the second material has a thickness of about 0.5 nm to about 8 nm.
The giant magnetoresistive element according to claim 1, wherein the first layer of the ferromagnetic material has a magnetization that aligns freely with an externally applied magnetic field.
The giant magnetoresistive element according to claim 1, wherein the first layer of the ferromagnetic material is CoFe.
The giant magnetoresistive element according to claim 1, wherein at least a part of the reference layer has a magnetization in a fixed direction.
The giant magnetoresistive element according to claim 1, wherein the reference layer includes a series of layers that define an artificial antiferromagnetic material, the layers of the artificial antiferromagnetic material having a magnetization in a fixed direction.
A magnetic multi-turn sensor including one or more giant magnetoresistive elements, each giant magnetoresistive element comprising: a reference layer; a non-magnetic layer adjacent to the reference layer; and a free layer of a ferromagnetic material, the free layer comprising a multi-layered arrangement including a first layer of a ferromagnetic material adjacent to the non-magnetic layer, a plurality of layers made of a first material, and a plurality of layers made of a second material, wherein the first material is ferromagnetic.
The magnetic multi-turn sensor according to claim 12, wherein the second material is a material having an anisotropic magnetoresistive (AMR) effect that is negligible or substantially negligible.
The magnetic multi-turn sensor according to claim 12, wherein the first material is one of NiFe and CoFe.
The magnetic multi-turn sensor according to claim 12, wherein the second material is one of CoFeB, CoZrTa, CoZrTaB, CoZrNb, and CoZrO.
A method for manufacturing a giant magnetoresistive element, comprising: forming a reference layer; forming a non-magnetic layer adjacent to the reference layer; and forming a free layer of a ferromagnetic material, the free layer comprising a multi-layered arrangement including at least a first layer of a ferromagnetic material adjacent to the non-magnetic layer, a plurality of layers made of a first ferromagnetic material, and a plurality of layers made of a second material, wherein the first material is ferromagnetic.
The method of claim 16, further comprising forming the plurality of layers of the first material and the plurality of layers of the second material in an alternating sequence to provide the multi-layered arrangement of the free layer.
The method of claim 17, wherein the second material is a material having an anisotropic magnetoresistive (AMR) effect that is negligible or substantially negligible.
The method of claim 16, wherein forming the reference layer comprises forming a plurality of layers to provide an artificial antiferromagnetic material.
The method of claim 19, further comprising providing a substrate and forming the artificial antiferromagnetic material or the free layer on the substrate.
Magnetoresistive sensor and manufacturing method

The present disclosure relates to magnetic sensors. In particular, the present disclosure relates to giant magnetoresistive elements for use in magnetic multi-turn sensors. Magnetic multi-turn sensors are commonly used in applications where the number of times a device has been turned needs to be monitored. One example is the steering wheel of a vehicle. Magnetic multi-turn sensors typically include giant magnetoresistive (GMR) elements that are sensitive to an applied external magnetic field. The resistance of a GMR element changes as the magnetic field in the vicinity of the sensor rotates. Variations in the resistance of the GMR element can be tracked to determine the number of turns of the magnetic field, which can in turn be converted into the number of turns of the monitored device. The GMR element is often based on a GMR spin valve stack using an artificial antiferromagnetic (AAF) material, such as the stack 1 shown in FIG. 1. Stack 1 comprises a substrate 100 at the base, followed by a seed layer 102 that promotes the growth of the subsequent layers by providing a smooth surface and a favorable crystal structure to grow on. The next layer is an AAF multilayer 104 comprising a series of layers: a natural antiferromagnetic layer (such as platinum manganese (PtMn) or iridium manganese (IrMn)), a ferromagnetic layer (typically cobalt iron (CoFe)), a non-magnetic spacer (ruthenium (Ru)), and another ferromagnetic layer (CoFe), also referred to as the "pinned" layer. The main purpose of the AAF layer 104 is to align and maintain the magnetization of the pinned layer in the orientation defined by an annealing process during manufacturing. A non-magnetic spacer layer 106 (typically copper (Cu)) is provided directly on top of the pinned layer of the AAF layer 104, followed by the so-called free layer 108.
The free layer 108 is a ferromagnetic layer whose magnetization freely aligns with an external magnetic field. The free layer 108 is typically formed of two ferromagnetic layers, typically a CoFe layer followed by a nickel iron (NiFe) layer. The GMR effect is observed as a change in film resistance associated with the relative angle between the magnetization of the free layer 108 and the magnetization of the pinned layer of the AAF layer 104. When the magnetizations are parallel, a low resistance is observed, and when they are antiparallel, a high resistance is observed. The purpose of the non-magnetic layer 106 is therefore to create a distance between the free layer 108 and the pinned layer, and the thickness of the spacer layer 106 is selected to minimize the magnetic coupling between the pinned layer and the free layer 108. The stack 1 is then typically covered with a capping layer 110, typically a non-magnetic metal layer, which protects the stack 1 and reduces diffusion when connecting the stack 1 to the other metal layers (such as aluminum, copper, or gold) that interconnect the stack 1 with other components of the magnetic sensor. The present disclosure provides a giant magnetoresistive (GMR) element for use in a magnetic multi-turn sensor, in which the free layer, i.e. the layer that changes its magnetization direction in response to an external magnetic field to cause a resistance change, is thick enough to provide good shape anisotropy without exhibiting an AMR effect. To achieve this, at least a portion of the free layer comprises a plurality of layers of at least two different conductive materials, specifically a plurality of layers of at least a first material that is ferromagnetic, and a plurality of layers of at least a second material that is known to exhibit no AMR effect and does not interfere with the GMR effect of the layers of ferromagnetic material.
A first aspect of the present disclosure provides a giant magnetoresistance (GMR) element for a magnetic multi-turn sensor, wherein the giant magnetoresistance element comprises a reference layer, a non-magnetic layer adjacent to the reference layer, and a free layer, the free layer comprising at least a first layer of ferromagnetic material adjacent to the non-magnetic layer and a multi-layered arrangement including a plurality of layers of a first material and a plurality of layers of a second material, the first material being ferromagnetic. Preferably, the second material is a material having a negligible or substantially negligible anisotropic magnetoresistive (AMR) effect. The plurality of layers of the first material and the plurality of layers of the second material may then be arranged in an alternating configuration. Thus, by having layers of two different materials, one of which is ferromagnetic while the other exhibits a negligible or substantially negligible AMR effect, the free layer can be thick enough to provide good shape anisotropy while exhibiting no, or only a very small, AMR effect relative to the amount of GMR effect presented. In this regard, the layers of material exhibiting a negligible AMR effect attenuate any AMR effect that may be present in the layers of ferromagnetic material. In some arrangements, the first material may be one of NiFe and CoFe. The second material may be one of CoFeB, CoZrTa, CoZrTaB, CoZrNb, and CoZrO. In some arrangements, the thickness and/or composition of the first and second materials may be configured such that the free layer exhibits no magnetostriction.
That is, the free layer is free from mechanical strain or deformation as its magnetization changes. Each of the plurality of layers of the first material and the plurality of layers of the second material may have a thickness of about 0.5 nm to about 8 nm. It will also be appreciated that any suitable number of layers may be used, depending on the required thickness of the free layer and the thickness of the individual layers. It will be appreciated that the free layer is so called in that at least the first layer of ferromagnetic material has a magnetization that freely aligns with the externally applied magnetic field. The first layer of ferromagnetic material may be CoFe, or any other suitable ferromagnetic material with strong GMR properties. It will also be understood that the reference layer is so called in that at least part of the reference layer has a magnetization that is in a fixed direction. The portion of the reference layer having a fixed magnetization direction may be referred to as a "pinned" layer, the pinned layer being a layer of ferromagnetic material. The GMR effect is observed as a change in film resistance associated with the relative angle between the magnetization of the free layer and the magnetization of the pinned layer. The reference layer may include a series of layers defining an artificial antiferromagnetic material, the layer of the artificial antiferromagnetic material having a magnetization in the fixed direction. The artificial antiferromagnetic material may include a natural antiferromagnetic layer, a first ferromagnetic layer, a non-magnetic spacer, and a second ferromagnetic layer, the second ferromagnetic layer being the pinned layer. In other arrangements disclosed herein, the second material may be a non-magnetic material.
As before, the ferromagnetic material may be one of NiFe and CoFe, while the non-magnetic material may be one of Ta, Ru, and Cu. In such cases, each of the plurality of layers of the ferromagnetic material and the plurality of layers of the non-magnetic material may have a thickness of about 0.2 nm to about 0.4 nm. Other arrangements described herein provide a magnetoresistance element for a magnetic multi-turn sensor, wherein the magnetoresistance element comprises a reference layer of antiferromagnetic material, a non-magnetic layer adjacent to the reference layer, and a free layer of ferromagnetic material, the free layer comprising a first layer of ferromagnetic material adjacent to the non-magnetic layer and a second layer of amorphous ferromagnetic material. The amorphous ferromagnetic material may be one of CoFeB, CoZrTa, CoZrTaB, CoZrNb, and CoZrO, while the first layer may contain a crystalline ferromagnetic material. For example, the first layer of ferromagnetic material may be CoFe. A further aspect of the present disclosure provides a magnetic multi-turn sensor with one or more giant magnetoresistance elements, each giant magnetoresistance element comprising a reference layer, a non-magnetic layer adjacent to the reference layer, and a free layer of ferromagnetic material, the free layer comprising at least a first layer of ferromagnetic material adjacent to the non-magnetic layer and a multi-layered arrangement including a plurality of layers of a first material and a plurality of layers of a second material, the first material being ferromagnetic. As mentioned above, the second material is preferably a material having a negligible or substantially negligible anisotropic magnetoresistive (AMR) effect. In some arrangements, the first material may be one of NiFe and CoFe.
The second material may be one of CoFeB, CoZrTa, CoZrTaB, CoZrNb, and CoZrO. Further embodiments provide a method of manufacturing a giant magnetoresistance element, comprising forming a reference layer, forming a non-magnetic layer adjacent to the reference layer, and forming a free layer of ferromagnetic material, the free layer comprising at least a first layer of ferromagnetic material adjacent to the non-magnetic layer and a multi-layered arrangement including a plurality of layers of a first ferromagnetic material and a plurality of layers of a second material, the first material being ferromagnetic. The method may include forming the plurality of layers of the first material and the plurality of layers of the second material in an alternating sequence to provide the multi-layered arrangement of the free layer. For example, the method may include forming a first layer of the first material, forming a first layer of the second material on top of the first layer of the first material, forming a second layer of the first material on the first layer of the second material, and forming a second layer of the second material on the second layer of the first material. It will be appreciated that this process can, of course, be continued for as many layers as needed. Again, the second material is preferably a material having a negligible or substantially negligible anisotropic magnetoresistive (AMR) effect, such that when alternated with the layers of ferromagnetic material, any AMR effect present in the layers of ferromagnetic material is attenuated. Forming the reference layer may include forming a plurality of layers to provide an artificial antiferromagnetic material.
The artificial antiferromagnetic material may include a natural antiferromagnetic layer, a first ferromagnetic layer, a non-magnetic spacer, and a second ferromagnetic layer, the second ferromagnetic layer being a layer having a magnetization in a fixed direction. The method may further include providing a substrate and forming the artificial antiferromagnetic material or the free layer on the substrate. In doing so, the GMR stack is formed with the reference layer located at either the bottom or the top of the stack. It will also be understood that the stack may include other layers, such as a seed layer formed on the substrate to promote the growth of subsequent layers, and a capping layer to protect the stack and provide interconnection to other components of the magnetic multi-turn sensor. Any of the above layers may be formed using any suitable deposition process, such as sputtering or ion beam deposition. The present disclosure will now be described, by way of example only, with reference to the accompanying drawings. FIG. 1 is a schematic side view of a GMR stack according to the prior art. FIG. 2 is a schematic side view of a GMR stack according to an embodiment of the present disclosure. FIG. 3 is a schematic side view of a GMR stack according to a further embodiment of the present disclosure. FIG. 4 is an example of a magnetic multi-turn system including a GMR element according to an embodiment of the present disclosure. FIGS. 5A to 5I are schematic side views illustrating a method of manufacturing a GMR stack according to an embodiment of the present disclosure.
A magnetic multi-turn sensor can be used to monitor the turn count of a rotating shaft. To do this, a magnet is typically attached to the end of the rotating shaft, and the multi-turn sensor senses the rotation of the magnetic field as the magnet rotates with the shaft. Such magnetic sensing can be applied to a variety of different applications, such as automotive applications, medical applications, industrial control applications, consumer applications, and a host of other applications that require information about the position of rotating components. Magnetic multi-turn sensors typically include a giant magnetoresistive (GMR) element that is sensitive to an applied external magnetic field. GMR elements are often based on a GMR spin valve stack containing a free layer of ferromagnetic material whose magnetization freely aligns with an external magnetic field. In a typical GMR stack, the free layer thickness is usually less than 5 nm. However, the free layer must be much thicker (> 30 nm) in order to create strong shape anisotropy in the long, narrow traces of the film. This thickness results in a strong anisotropic magnetoresistive (AMR) effect in the free layer, which additionally makes the electrical resistance dependent on the angle between the direction of the current and the direction of magnetization. In general, a higher resistance is observed when the current is parallel to the magnetization, and a lower resistance is observed when the current is perpendicular to the magnetization.
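The two angular dependences described above can be sketched numerically. The following is an illustrative model only (the resistance values are hypothetical; the (1 − cos θ)/2 form for GMR and the cos² law for AMR are standard textbook approximations, not figures from this disclosure), showing how the unwanted AMR term superimposes on the desired GMR signal when the free-layer magnetization rotates relative to a fixed current direction:

```python
import math

def gmr_resistance(theta, r_parallel=1000.0, dr_gmr=50.0):
    # GMR: low resistance when free-layer and pinned-layer magnetizations are
    # parallel (theta = 0), high when antiparallel (theta = pi).
    return r_parallel + dr_gmr * (1.0 - math.cos(theta)) / 2.0

def amr_term(phi, dr_amr=4.0):
    # AMR: highest when the current is parallel to the magnetization (phi = 0),
    # lowest when perpendicular (phi = pi / 2); classic cos^2 angular law.
    return dr_amr * math.cos(phi) ** 2

# In a long, narrow trace the current direction is fixed, so as the free layer
# rotates by theta, the AMR term varies with the same angle and its variation
# superimposes on (distorts) the GMR output:
for theta in (0.0, math.pi / 2, math.pi):
    print(round(gmr_resistance(theta) + amr_term(theta), 3))
```

Here the AMR amplitude is deliberately chosen below 10% of the GMR amplitude, in line with the target mentioned in this disclosure; in a conventional thick free layer the AMR term would be much larger relative to the GMR swing.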
This produces an undesired resistance change that overlaps with the desired resistance change due to the GMR effect, thereby distorting the sensor output. Accordingly, the present disclosure provides a giant magnetoresistive (GMR) element for use in a magnetic multi-turn sensor, in which the free layer, i.e. the layer that changes its magnetization direction in response to an external magnetic field to produce a resistance change, is thick enough to provide good shape anisotropy while exhibiting no AMR effect, or only a very small AMR effect (less than 10%) relative to the amount of GMR effect exhibited. To achieve this, at least a portion of the free layer comprises a plurality of layers of at least two different materials, specifically a plurality of layers of at least a first material that is ferromagnetic, and a plurality of layers of at least a second material that is known to exhibit no AMR effect and does not interfere with the GMR effect of the layers of ferromagnetic material. An embodiment according to the present disclosure is shown in FIG. 2. As described with reference to FIG. 1, the GMR element is configured as a spin valve stack 2 including a substrate 200, a seed layer 202, an AAF layer 204, and a non-magnetic spacer layer 206. The free layer, designated 208, comprises a first ferromagnetic layer 212, preferably of a crystalline ferromagnetic material with a low AMR effect, such as CoFe, followed by a multi-layered arrangement 214. As mentioned above, the GMR effect is observed at the interfaces of the pinned layer of the AAF layer 204, the non-magnetic spacer layer 206, and the first ferromagnetic layer 212. The multilayer arrangement 214 includes both a crystalline ferromagnetic material 216 and an amorphous ferromagnetic material 218 arranged in alternating layers.
The crystalline ferromagnetic layers 216 are formed of a crystalline ferromagnetic material such as NiFe, while the amorphous ferromagnetic layers 218 may be formed of any suitable amorphous ferromagnetic material, such as cobalt iron boron (CoFeB), cobalt zirconium tantalum (CoZrTa), cobalt zirconium tantalum boron (CoZrTaB), cobalt zirconium niobium (CoZrNb), or cobalt zirconium oxide (CoZrO). Since the magnetization of the layers of amorphous ferromagnet 218 also aligns with the externally applied magnetic field, the multilayer arrangement 214 functions as a single ferromagnetic layer aligned with the external magnetic field, thus providing good shape anisotropy. However, because changes in magnetization direction have little effect on the current in the amorphous ferromagnetic material, and because the layers of crystalline ferromagnetic material 216 are individually too thin to exhibit any AMR effect, or present at most a negligible or very small one, no AMR effect, or only a very small AMR effect, is observed. Thus, interspersing layers of crystalline ferromagnetic material 216 with layers of amorphous ferromagnetic material 218 provides a ferromagnetic multilayer arrangement 214 that is strong enough to provide the desired shape anisotropy without inducing any undesired AMR effect. Indeed, it will be appreciated by those skilled in the art that the ferromagnetic multilayer arrangement 214 may include layers of any two ferromagnetic materials, at least one of these ferromagnetic materials exhibiting a negligible or substantially negligible AMR effect and thereby attenuating any AMR effect exhibited by the other ferromagnetic material.
The individual layers 216 and 218 of the multilayer arrangement 214 may be about 0.5 nm to about 8 nm thick, with the total thickness of the free layer 208 being about 10 nm to 50 nm. When choosing the thickness and composition of the layers 216 and 218, it may be necessary to consider the resulting magnetostriction experienced by the multi-layered arrangement 214. Magnetostriction is the relationship between the mechanical stress and the magnetization of a material. This relationship works in both directions, in that changes in magnetization result in mechanical strain or deformation, and mechanical deformation results in changes in magnetization. The magnetostriction coefficient can have a positive or negative sign, depending on whether the material stretches or shortens when magnetized in a particular direction. Sensor applications require very low or, ideally, zero magnetostriction. Some crystalline ferromagnetic materials, such as NiFe, have very low magnetostriction. For example, NiFe with a Ni:Fe ratio of 81:19 exhibits no magnetostriction. Such materials are therefore typically preferred for providing the free layer of a GMR sensor. On the other hand, other crystalline ferromagnetic materials such as CoFe, and amorphous ferromagnetic materials such as CoFeB, have significant magnetostriction. Therefore, for a multi-layered arrangement 214 containing an amorphous ferromagnetic material 218 with positive magnetostriction, the layer thickness and composition of the crystalline ferromagnetic material 216, which has negative magnetostriction, must compensate for the positive magnetostriction of the other layers.
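The compensation just described can be sketched with a thickness-weighted average, a common first-order approximation for the effective magnetostriction coefficient λ of a multilayer (the λ values and thicknesses below are purely illustrative assumptions, not measured values from this disclosure):

```python
def effective_magnetostriction(layers):
    # layers: list of (lambda_coefficient, thickness_nm) pairs; the effective
    # coefficient is approximated as the thickness-weighted average.
    total_thickness = sum(t for _, t in layers)
    return sum(lam * t for lam, t in layers) / total_thickness

# Hypothetical alternating stack: an amorphous layer with positive
# magnetostriction balanced by a crystalline layer tuned to a negative value.
lam_amorphous = +2.0e-6    # e.g. a CoFeB-like layer (illustrative)
lam_crystalline = -1.0e-6  # e.g. a NiFe composition tuned away from 81:19 (illustrative)

# A 1:2 thickness ratio makes the weighted contributions cancel:
stack = [(lam_amorphous, 1.0), (lam_crystalline, 2.0)] * 10
print(effective_magnetostriction(stack))
```

In practice the layer thicknesses must simultaneously satisfy the AMR and total-thickness constraints discussed above, so composition (and hence each λ) is tuned together with the thicknesses rather than independently.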
The layer thickness and composition thus need to be adapted so that the resulting free layer 208 is, as a whole, free of magnetostriction. The stack 2 is then usually covered with a capping layer 210, typically a non-magnetic metal layer such as tantalum (Ta) or titanium tungsten (TiW), which protects the stack 2 and reduces diffusion when connecting the stack 2 to other metal layers (e.g. aluminum, copper, or gold) that provide interconnects for connecting the stack 2 to other components of the magnetic sensor. A further embodiment of the present disclosure is shown in FIG. 3. As before, the GMR element is configured as a spin valve stack 3 including a substrate 300, a seed layer 302, an AAF layer 304, and a non-magnetic spacer layer 306. The free layer, generally designated 308, comprises a first ferromagnetic layer 312, preferably of a crystalline ferromagnetic material with a low AMR effect, such as CoFe, followed by another multilayer arrangement 314. In this embodiment, the multilayer arrangement 314 is formed of a plurality of ferromagnetic layers 316 of a soft magnetic material such as NiFe and a plurality of non-magnetic layers 318, likewise arranged as alternating layers. The non-magnetic layers 318 may be of any suitable material, for example Ta, Ru, or Cu. The non-magnetic layers 318 contribute no magnetic moment to the free layer 308 and thus exhibit no AMR effect, while the layers of ferromagnetic material 316 are individually too thin to exhibit any AMR effect, or present at most a negligible or very small one. Therefore, interspersing layers of ferromagnetic material 316 with layers of non-magnetic material 318 yields a stack that is thick enough to provide the desired shape anisotropy without inducing any unwanted AMR effect.
In such an arrangement, it is important that the ferromagnetic layers 316 are thin enough to prevent any AMR effect, and that the non-magnetic layers 318 are thin enough to ensure strong ferromagnetic coupling between the ferromagnetic layers, i.e. that the shape anisotropy exhibited by the multilayer arrangement 314 corresponds to the sum of the ferromagnetic layers 316, as opposed to each individual ferromagnetic layer 316. For example, the thickness of both sets of layers 316 and 318 may be from about 0.2 nm to 0.4 nm. Further, as described above, the ferromagnetic layers 316 may be formed of a soft magnetic material having very low or zero magnetostriction, such as NiFe. Since these ferromagnetic layers 316 are interspersed with the non-magnetic layers 318, the free layer 308 as a whole is a soft magnet exhibiting no magnetostriction. The stack 3 is then typically covered with a capping layer 310, typically a non-magnetic metal layer, which protects the stack 3 and reduces diffusion when connecting the stack 3 to other metal layers (e.g. aluminum, copper, or gold) that provide interconnects for connecting the stack 3 to other components of the magnetic sensor. It will be understood that, in any of the embodiments described above, any number of layers (and layer thicknesses) may be used in the multilayer arrangements 214, 314, depending on the required thickness and shape anisotropy. For example, there may be a total of 4 layers, or a total of 20 layers. It will also be understood that, in any of the above embodiments, a natural antiferromagnetic layer such as platinum manganese (PtMn), against which a measurable GMR effect is observed, may be used instead of the AAF layers 204 and 304. FIG. 4 shows an example of a magnetic strip layout of a magnetic multi-turn sensor 4 including a plurality of GMR elements 400 according to an embodiment of the present disclosure. In the example of FIG. 4, the magnetic strip 400 is a giant magnetoresistive track physically laid out in a spiral configuration. The magnetic strip 400 thus has a plurality of segments formed from magnetoresistive elements 402 arranged in series with one another. Each magnetoresistive element 402 functions as a variable resistor whose resistance changes according to its magnetic alignment state. One end of the magnetic strip 400 is coupled to a domain wall generator (DWG) 404. In this regard, it will be appreciated that the DWG 404 may be coupled to either end of the magnetic strip 400. The DWG 404 creates domain walls in response to rotation of an external magnetic field, or to the application of some other strong external magnetic field beyond the operating magnetic window of the sensor 4. These domain walls can then be injected into the magnetic strip 400. As the domain wall positions change, so does the resistance of the GMR elements 402, due to the resulting changes in magnetic alignment. To measure the resistance of the GMR elements 402, which changes as domain walls are created, the magnetic strip 400 is electrically connected to a supply voltage VDD 406 and ground GND 408, applying a voltage between a pair of opposing corners. Electrical connections 410 are provided at the intermediate corners between the voltage supplies to provide half-bridge outputs. As such, the multi-turn sensor 4 comprises a number of Wheatstone bridge circuits, with each half bridge 410 corresponding to one half turn, or 180° rotation, of the external magnetic field. The voltage measured at an electrical connection 410 can therefore be used to determine the change in resistance of the GMR elements 402, which indicates a change in the magnetic alignment of the free layer. The example shown in FIG. 4 comprises four spiral windings with eight half bridges 410 and is therefore configured to count four turns of the external magnetic field.
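The half-bridge readout described above can be sketched as a simple resistive divider (the supply voltage, segment resistance, and GMR swing below are hypothetical values for illustration only):

```python
def half_bridge_output(vdd, r_top, r_bottom):
    # Mid-point voltage of two GMR segments in series between VDD and GND.
    return vdd * r_bottom / (r_top + r_bottom)

VDD = 3.3              # supply voltage in volts (illustrative)
R0, DR = 1000.0, 50.0  # segment resistance and GMR swing in ohms (illustrative)

# Both segments in the same alignment state: the bridge is balanced at VDD / 2.
v_balanced = half_bridge_output(VDD, R0, R0)

# A domain wall passing one segment switches its alignment and resistance,
# unbalancing the bridge; the shift in mid-point voltage marks one half turn.
v_switched = half_bridge_output(VDD, R0 + DR, R0)
print(v_balanced, v_switched)
```

Reading which of the half bridges along the spiral are balanced and which are unbalanced then locates the domain walls, and hence the accumulated turn count.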
However, it will be appreciated that the multi-turn sensor may have any number of spiral windings, depending on the number of GMR elements. In general, a multi-turn sensor can count the same number of turns as it has spiral windings. It will also be appreciated that the GMR elements 402 may be electrically connected in any suitable manner that provides a sensor output representing a change in the magnetic alignment state. For example, the GMR elements 402 may be connected in a matrix arrangement as described in US 2017/0261345, which is incorporated herein by reference in its entirety. As a further alternative, each magnetoresistive segment may be connected individually rather than in a bridge arrangement. In this example, the magnetic multi-turn sensor 4 comprises an integrated circuit 412 on which the magnetic strip 400 is disposed, and which may include processing circuitry (not shown) that processes the sensor output. A method of fabricating the GMR stack 2 will now be described with reference to FIGS. 5A to 5I. It will be appreciated, however, that the GMR stack 3 may be fabricated in the same way. FIG. 5A shows the first step of the fabrication process. A silicon wafer is used as the substrate 200. In the following, the process of forming one device is described, but hundreds of devices may be formed in parallel on the same wafer. The substrate 200 is used for mechanical support and can be replaced with another type of material, such as glass or sapphire. Typically, the silicon wafer may be oxidized to isolate the subsequent layers from the bare silicon, or an insulator such as aluminum oxide may be used. In some arrangements, the substrate 200 may include electronic circuits. Next, as shown in FIG. 5B, the seed layer 202 is deposited on the substrate 200. The seed layer 202 provides a smooth surface and an advantageous crystal structure for promoting the growth of subsequent layers.
The seed layer 202 may be a layer of tantalum, ruthenium, or tantalum nitride (TaN), or may include further layers of other compounds. FIG. 5C shows the formation of the AAF layer 204 deposited on the seed layer 202. The AAF layer 204 is formed by first depositing a natural antiferromagnetic layer on the seed layer 202. A ferromagnetic layer is then deposited on the antiferromagnetic layer, followed by a non-magnetic spacer layer. Finally, a second ferromagnetic layer is deposited on the non-magnetic layer. This second ferromagnetic layer is the so-called "pinned" layer or "reference" layer. The antiferromagnetic material used in the AAF layer 204 may be PtMn, IrMn, NiMn, or any other suitable antiferromagnetic material; the non-magnetic material is typically ruthenium, and the ferromagnetic material may be CoFe or any other suitable ferromagnetic material. As shown in FIG. 5D, the non-magnetic spacer layer 206 is formed on the pinned layer of the AAF layer 204. It acts as a spacer between the pinned layer and the subsequent free layer to reduce any magnetic coupling. FIG. 5E shows the start of the free layer, formed by first depositing the first ferromagnetic layer 212. The multi-layered arrangement is then deposited on the first ferromagnetic layer by depositing a layer of crystalline ferromagnetic material 216, as shown in FIG. 5F, and then depositing a layer of amorphous ferromagnetic material 218, as shown in FIG. 5G. This process is repeated the required number of times until the entire multilayer arrangement 214 is formed, as shown in FIG. 5H. Finally, the capping layer 210 is deposited on the stack 2, as shown in FIG. 5I. As mentioned above, the capping layer 210 typically consists of a non-magnetic metal layer, which protects the stack 2, provides interconnection, and reduces diffusion when connecting the stack 2 to another metal layer.
Once the deposition has taken place, the GMR film is annealed in a magnetic field and can then be patterned using standard photolithography techniques and subsequent ion milling to remove excess material and obtain the required resistor shape. In the embodiment of FIG. 3, it will be understood that the stack 3 may be fabricated in substantially the same manner, the ferromagnetic layers 316 and the non-magnetic layers 318 of the multilayer arrangement 314 being formed in substantially the same way as shown in FIGS. 5F to 5H. It will be appreciated that each of the layers in the stacks 2 and 3 described above may be formed using any suitable physical deposition method, such as sputtering. Likewise, the deposition of each stack 2, 3 may be carried out in one vacuum step, so that there is no exposure to the surrounding atmosphere between the individual steps, thereby avoiding contamination or oxidation of the various layers. For example, the whole of each stack 2, 3, from the seed layer 202, 302 to the capping layer 210, 310, is deposited in one tool, by either sputtering or ion beam deposition, without breaking the vacuum between the different layers, to prevent surface contamination and changes due to exposure to atmospheric gases. In a further embodiment of the present disclosure, a GMR stack may be provided in which the free layer comprises a first layer of a crystalline ferromagnetic material with a low AMR effect, such as CoFe, and a second layer of an amorphous ferromagnetic material, such as CoFeB. Such an arrangement exhibits a higher AMR effect, but also eliminates the use of any ferromagnetic material that experiences greater magnetostriction. The above arrangements show the AAF layers 204, 304 at the bottom of the stacks 2, 3 (so-called "bottom pinned"), with the non-magnetic spacers 206, 306 followed by the free layers 208, 308 above; it will be appreciated that the AAF layers 204, 304 may alternatively be placed at the top of the stacks 2, 3 (so-called "top pinned"). Similarly, the arrangements described above illustrate the use of the AAF layer 204, but it will be appreciated that a simple antiferromagnetic layer may be used instead, comprising a layer of antiferromagnetic material such as, for example, PtMn, IrMn, or NiMn, together with one layer of ferromagnetic material such as CoFe serving as the "pinned" layer. Applications: Any of the principles and advantages discussed herein can be applied not only to the systems described above, but to other systems as well. Some embodiments may include a subset of the features and/or benefits described herein. Further embodiments can be provided by combining the elements and operations of the various embodiments described above. The operations of the methods discussed herein may be performed in any order, as appropriate. Moreover, the operations of the methods discussed herein can be performed serially or in parallel, as appropriate. Although circuits are shown in particular arrangements, other equivalent arrangements are possible. Any of the principles and advantages discussed herein can be implemented in connection with any other system, apparatus, or method that could benefit from any of the teachings herein. For example, any of the principles and advantages described herein can be implemented in connection with any apparatus having a need to correct rotation angle position data derived from a rotating magnetic field. In addition, the apparatus may include any magnetoresistive or Hall effect device capable of sensing a magnetic field. Aspects of the present disclosure may be implemented in various electronic devices or systems. For example, phase correction methods and sensors implemented in accordance with any of the principles and advantages discussed herein may be included in a variety of electronic devices and/or a variety of applications.
Examples of electronic devices and applications can include, but are not limited to, servos, robotics, aircraft, submarines, toothbrushes, biomedical sensing devices, and parts of consumer electronic products such as semiconductor dies and/or packaged modules, as well as electronic test equipment. Further, electronic devices can include unfinished products, including those for industrial, automotive, and/or medical applications.

Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," "include," "including," and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to." The words "coupled" or "connected," as generally used herein, refer to two or more elements that may be connected either directly or by way of one or more intermediate elements. Thus, although the various schematics shown illustrate example arrangements of elements and components, additional intervening elements, devices, features, or components may be present in an actual embodiment (assuming that the functionality of the illustrated circuits is not adversely affected). As used herein, the term "based on" is generally intended to encompass "based solely on" and "based at least partly on." Additionally, the words "herein," "above," "below," and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the detailed description using the singular or plural number may also include the plural or singular number, respectively. The word "or" in reference to a list of two or more items is intended to cover all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list. All numerical values or distances provided herein are intended to include similar values within a measurement error.

While certain embodiments have been described, these embodiments have been presented by way of example only and are not intended to limit the scope of the present disclosure. Indeed, the novel apparatus, systems, and methods described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the present disclosure.

2, 3 spin valve stack; 200, 300 substrate; 202, 302 seed layer; 204, 304 AAF layer; 206, 306 non-magnetic spacer layer; 208, 308 free layer; 210, 310 capping layer; 212, 312 first ferromagnetic layer; 214, 314 multilayer arrangement; 216 layer of crystalline ferromagnetic material; 218 layer of amorphous ferromagnetic material; 316 plurality of ferromagnetic layers; 318 plurality of non-magnetic layers; 4 magnetic multi-turn sensor; 400 GMR element, magnetic strip; 402 magnetoresistance element; 404 domain wall generator (DWG); 406 supply voltage VDD; 408 ground GND; 410 half bridge; 410 electrical connection; 412 integrated circuit
Examples described herein relate to migrating a virtualized execution environment from a first platform to a second platform while retaining use of namespace identifiers and permitting issuance of storage transactions by the virtualized execution environment. The first platform can include a first central processing unit or a first network interface. The second platform can include a central processing unit that is different than the first central processing unit and a network interface that is the same as or different than the first network interface. The second platform can retain access permissions and target media format independent of one or more identifiers associated with the migrated virtualized execution environment at the second platform. Unperformed storage transactions can be migrated to the second platform for execution.
CLAIMS
What is claimed is:
1. An apparatus comprising: an interface comprising circuitry and logic, the interface to: generate packets for storage transactions using a transport protocol and, in connection with commencement of a virtual execution environment on a second computing platform, provide capability of a first computing platform at the second computing platform for the virtual execution environment to continue storage transactions and maintain use of the same namespace identifiers (NSIDs).
2. The apparatus of claim 1, wherein the interface is to perform one or more of: migrate parameters associated with storage transaction permissions to the second computing platform or migrate a format of a target media drive.
3. The apparatus of claim 2, wherein the permissions comprise one or more of: per-requester permission or per-target media permission.
4. The apparatus of claim 3, wherein the per-requester permission comprises one or more of: read enable, write enable, or read and write enable, and the per-target media permission comprises one or more of: read enable, write enable, or read and write enable.
5. The apparatus of claim 2, wherein the format of the target media drive comprises one or more of: sector or block format, read or write enablement, or end-to-end protection.
6. The apparatus of claim 2, wherein the interface is to: prior to migration of the virtual execution environment to the second computing platform: execute at least one received storage command and identify unexecuted commands for migration to the second computing platform.
7. The apparatus of claim 1, wherein the commencement is initiated based on one or more of: virtual execution environment migration, server maintenance, or load balancing.
8. The apparatus of claim 1, wherein the virtual execution environment is to request a storage transaction that is translated to a transaction over a transport protocol.
9. The apparatus of claim 8, wherein to translate the storage transaction to a transaction over a transport protocol, the virtual execution environment is to execute a driver that supports storage transactions using Non-Volatile Memory Express (NVMe).
10. The apparatus of claim 1, comprising one or more of: a server, data center, or rack, wherein the server, data center, or rack is to initiate a migration of the virtual execution environment.
11. An apparatus comprising: a computing system comprising at least one processor and at least one memory device and an interface to: determine access rights in response to requested access by a requester to a namespace identifier associated with a target media, wherein the access rights for the requester and namespace identifier are independent of an identifier of the requester.
12. The apparatus of claim 11, wherein after migration of the requester to another computing system or network interface, one or more of the computing system or network interface are to apply the same access rights for the requester as were applied before the migration, based on received parameters and independent of an identifier of the requester after migration.
13. The apparatus of claim 11, wherein the access rights comprise one or more of: read and write enabled, read enabled, or write enabled.
14. The apparatus of claim 11, wherein the access rights comprise one or more of: access rights based on a requester of a storage transaction or access rights based on a target storage device.
15. The apparatus of claim 11, wherein the interface is to provide a target media format for the requester and namespace identifier independent of the identifier of the requester, and the target media format comprises one or more of: sector or block format, read or write enablement, or end-to-end protection.
16. The apparatus of claim 11, wherein the interface is to receive unexecuted storage commands associated with the requester generated on a prior platform and the interface is to store the unexecuted storage commands for execution.
17. A computer-implemented method comprising: migrating a virtualized execution environment from a first platform to a second platform while retaining use of a namespace identifier and permitting issuance of storage transactions by the virtualized execution environment by use of the namespace identifier.
18. The computer-implemented method of claim 17, comprising: retaining access permissions and target media format independent of one or more identifiers associated with the migrated virtualized execution environment at the second platform.
19. The computer-implemented method of claim 17, wherein permitting issuance of storage transactions by the virtualized execution environment comprises: performing a storage transaction in a queue associated with the virtualized execution environment and migrating an unperformed storage transaction to a queue in the second platform.
20. The computer-implemented method of claim 19, comprising: executing the unperformed storage transaction using the second platform.
MAINTAINING STORAGE NAMESPACE IDENTIFIERS FOR LIVE VIRTUALIZED EXECUTION ENVIRONMENT MIGRATION

CLAIM OF PRIORITY
This application claims priority under 35 U.S.C. § 365(c) to US Application No. 16/814,788 filed March 10, 2020, entitled "MAINTAINING STORAGE NAMESPACE IDENTIFIERS FOR LIVE VIRTUALIZED EXECUTION ENVIRONMENT MIGRATION", which is incorporated in its entirety herewith.

DESCRIPTION
Distributed block storage systems provide block device functionality to applications by presenting logical block devices that are stored in segments scattered across a large pool of remote storage devices. To use these logical block devices, applications need to determine the location of all the segments they need to access. A computing platform can access a storage device using a fabric or network. Example schemes for accessing storage using a fabric or network include Non-Volatile Memory Express over Fabrics (NVMe-oF) or other proprietary storage over fabrics or network specifications. NVMe-oF is described at least in NVM Express, Inc., "NVM Express Over Fabrics," Revision 1.0, Jun. 5, 2016, and variations and revisions thereof.

BRIEF DESCRIPTION OF THE DRAWINGS
FIGs. 1A and 1B depict high level examples of storage network topologies.
FIG. 2 depicts an example system.
FIG. 3 depicts an example of determination of whether a command to access a particular namespace identifier can proceed.
FIG. 4 depicts an example of command execution in connection with a storage transaction.
FIG. 5 depicts examples of various source and destination environments associated with migration of a virtualized execution environment.
FIG. 6 depicts a process for a migration of a virtualized execution environment.
FIG. 7 depicts a system.
FIG. 8 depicts an environment.
FIG. 9 depicts a network interface.

DETAILED DESCRIPTION
The Non-Volatile Memory Express (NVMe) Specification describes a system for accesses to data storage systems through a Peripheral Component Interconnect Express (PCIe) port.
NVMe is described, for example, in NVM Express™ Base Specification, Revision 1.3c (2018), which is incorporated by reference in its entirety. NVMe allows a host to specify regions of storage as separate namespaces. A namespace can be an addressable domain in a non-volatile memory having a selected number of storage blocks that have been formatted for block access. A namespace can include an addressable portion of a media in a solid state drive (SSD), or a multi-device memory space that spans multiple SSDs or other data storage devices. A namespace ID (NSID) is a unique identifier for an associated namespace. A host device can access a particular non-volatile memory by specifying the namespace, the controller ID, and an associated logical address for the block or blocks (e.g., logical block addresses (LBAs)).

In some cases, smart network interface controllers (SmartNICs) support offload of a storage infrastructure protocol processing stack from host computers. Virtualized execution environments and bare metal instances (e.g., a single client running on a server) can run NVMe drivers to handle storage transactions without NVMe SSDs being directly attached to the host (e.g., disaggregated storage) or a hypervisor translating the NVMe protocol into another transport layer protocol. For example, the SmartNIC can use an NVMe-oF offload engine to issue storage transactions to network-connected SSDs. A SmartNIC can provide a configuration interface for managing SmartNIC resources and configurations of the NVMe-oF offload engine.

FIGs. 1A and 1B depict high level examples of storage network topologies in which NVMe-oF can be used. In FIG. 1A, a scenario 100 is shown in which various host devices, shown as H, issue storage transactions using initiator NICs over a network to one or more target NICs. A target NIC can have one or multiple NVMe compatible storage devices 102. In FIG.
1B, a scenario 150 is shown with a hyper-converged topology, whereby a host device can access storage 152 that is locally connected to the host or remotely connected to the host device through a network. If a host device requests a storage transaction through the network, the host device's NIC forms a packet that includes the storage transaction and transmits the packet to the target storage device's NIC. Topologies of FIGs. 1A and 1B can be mixed. These are merely illustrative high level examples that show possible environments in which various embodiments are used.

An NVMe namespace is a quantity of non-volatile memory (NVM) or other type of memory that can be formatted into logical blocks. A namespace can include N logical blocks with logical block addresses from 0 to (N-1). Thin provisioning and deallocation of capacity may be supported, so that capacity of the NVM for a namespace may be less than the size of the namespace. A namespace ID (NSID) is an identifier used at least by a host to identify a namespace for access. In some cases, an NSID can be allocated to a virtual execution environment. For example, a virtual machine VM1 can be allocated NSID1, virtual machine VM2 allocated NSID2, and so forth. When SR-IOV is used, an NSID can be unique to a function (e.g., physical function (PF) or virtual function (VF)). When SIOV is used, a namespace can be unique to a function and a group of queues. NVMe provides access to namespaces through multiple controllers. For a virtual machine (or other isolated domain or virtualized execution environment) that runs an NVMe driver, namespaces appear as standard block devices on which file systems and applications can be deployed.

In some scenarios, an isolated domain or virtualized execution environment is migrated from a server or computing platform to another computing platform or uses a different network interface, including a composable or composite node.
Migration can also involve changing a core or processor on which a virtualized execution environment runs, even if within the same CPU node, server, data center, or rack. When an isolated domain or virtualized execution environment is migrated to a different computing platform or uses a different network interface, under some versions of NVMe or other specifications, namespace IDs (NSIDs) are to be preserved.

Various embodiments permit migration of an isolated domain or virtualized execution environment that is allocated use of an NSID from a source platform while preserving requester and target access permissions and exclusions when run on a second platform. For example, a source platform can use a network interface that supports transmission and receipt of storage commands that use mappings of input/output (I/O) commands and responses to shared memory in a host computer and permit parallel I/O data paths to the underlying media with multicore processors to facilitate high throughput and mitigate central processing unit (CPU) bottlenecks. In some examples, the network interface supports NVMe-oF transactions. NVMe-oF transactions with a storage device can use any of a variety of protocols (e.g., remote direct memory access (RDMA), InfiniBand, Fibre Channel, TCP/IP, RDMA over Converged Ethernet (RoCE), iWARP, quick UDP Internet Connections (QUIC), and so forth). The source platform can be a computing platform that supports NVMe or NVMe-oF transactions using its host central processing unit and/or an offload of support of NVMe-oF transactions to a network interface. Similarly, the second platform can be a computing platform that supports NVMe or NVMe-oF transactions using its host central processing unit and/or an offload of support of NVMe-oF transactions to a network interface.

Various embodiments allow live migration of an isolated domain or virtualized execution environment running an NVMe driver to a second platform and/or NIC.
In a cloud data center, an isolated domain or virtualized execution environment may be migrated from one core to another core or one compute node to another without any functional disruption to the NVMe driver so that it can continue to issue storage transactions to one or more NSIDs. As part of live migration, a different physical function (PF) identifier, virtual function (VF) identifier, or submission queue identifier can be assigned to the migrated isolated domain or virtualized execution environment on a second platform and/or NIC, but the NSID does not change. Parameters related to determining permissions and exclusions (e.g., requester and target) and the format of the target media referenced by the NSID for one or more NSIDs are unchanged despite use of the second platform and potential changes to an allocated PF identifier, VF identifier, submission queue identifier, or other parameters at the second platform. Accordingly, by migration of the NSID, permissions/exclusions, and target media format to the second platform and/or NIC, storage transactions to the storage device with logical blocks corresponding to the NSID can continue without disruption after migration of an isolated domain or virtualized execution environment.

FIG. 2 depicts an example system. Host system 200 can include various processors 202 and memory 204. Processors 202 can be an execution core or computational engine that is capable of executing instructions. A core can have access to its own cache and read only memory (ROM), or multiple cores can share a cache or ROM. Cores can be homogeneous and/or heterogeneous devices. Any type of inter-processor communication techniques can be used, such as but not limited to messaging, inter-processor interrupts (IPI), inter-processor communications, and so forth. Cores can be connected in any type of manner, such as but not limited to, bus, ring, or mesh.
Processors 202 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.

Processors 202 can execute an operating system and one or more virtualized execution environments (e.g., VM 206). A virtualized execution environment can include at least a virtual machine, process containers, machine containers, or application processes. A virtual machine (VM) can be software that runs an operating system and one or more applications. A VM can be defined by specification, configuration files, virtual disk file, non-volatile random access memory (NVRAM) setting file, and the log file and is backed by the physical resources of a host computing platform. A VM can be an OS or application environment that is installed on software, which imitates dedicated hardware. The end user has the same experience on a virtual machine as they would have on dedicated hardware. Specialized software, called a hypervisor, emulates the PC client or server's CPU, memory, hard disk, network and other hardware resources completely, enabling virtual machines to share the resources. The hypervisor can emulate multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux® and Windows® Server operating systems on the same underlying physical host.

A container can be a software package of applications, configurations and dependencies so the applications run reliably from one computing environment to another. Containers can share an operating system installed on the server platform and run as isolated processes. A container can be a software package that contains everything the software needs to run, such as system tools, libraries, and settings.
Containers are not installed like traditional software programs, which allows them to be isolated from the other software and the operating system itself. Isolation can include access of memory by a particular container but not another container. The isolated nature of containers provides several benefits. First, the software in a container will run the same in different environments. For example, a container that includes PHP and MySQL can run identically on both a Linux computer and a Windows machine. Second, containers provide added security since the software will not affect the host operating system. While an installed application may alter system settings and modify resources, such as the Windows registry, a container can only modify settings within the container.

A single-root I/O virtualization (SR-IOV) extension can be used to enable multiple virtualized execution environments (e.g., system images) to share PCIe hardware resources under a single-node system (e.g., single root complex). SR-IOV is compatible at least with specifications available from the Peripheral Component Interconnect Special Interest Group (PCI SIG), including specifications such as the Single Root I/O Virtualization and Sharing specification, Revision 1.1 (2010), and variations thereof and updates thereto. An SR-IOV device provides a bus device function (BDF) identifier for a virtual function within a PCIe hierarchy; a unique memory address space for a virtual function (VF) within a PCIe hierarchy; a unique error logging and escalation scheme; a unique MSI/MSI-X capability for each VF within a PCIe hierarchy; and power-management capabilities for each VF within a PCIe hierarchy. In addition, SR-IOV provides the capability to discover and configure virtualization capabilities, which include a number of VFs that the PCIe device will associate with a device and the type of base address register (BAR) mechanism supported by the VFs.

Scalable I/O Virtualization (SIOV) can be used by the system.
SIOV is a PCIe-based virtualization technique that provides for scalable sharing of I/O devices, such as network controllers, storage controllers, graphics processing units, and other hardware accelerators, across a large number of virtualized execution environments. Unlike the coarse-grained device partitioning approach of SR-IOV of creating multiple VFs on a PF, SIOV enables software to flexibly compose virtual devices utilizing the hardware-assists for device sharing at finer granularity. Performance-critical operations on the composed virtual device are mapped directly to the underlying device hardware, while non-critical operations are emulated through device-specific composition software in the host. A technical specification for SIOV is the Intel® Scalable I/O Virtualization Technical Specification, revision 1.0, June 2018.

According to some embodiments, network interface drivers 208 and 212 can provide for use of storage commands to remote storage devices. For example, a virtualized execution environment (e.g., VM 206) can issue a storage command and driver 208 can issue storage commands to a remote storage device using NVMe or NVMe-oF by issuing commands to queues in SmartNIC 250. If an operating system kernel 210 or virtualized execution environment (e.g., VM 206) uses SIOV or SR-IOV, driver 212 and driver 208 access a respective physical function (PF) and a particular virtual function (VF). Examples of storage commands include, but are not limited to: read, write, add queue, remove queue, error log, enable controller, or disable controller.

SmartNIC 250 can provide a hardware offload that is higher performance and lower power use than a software solution running on host 200, enabling offload of I/O operations. SmartNIC 250 can greatly scale the number of supported NSIDs and virtual functions.
For example, to issue a storage command to the SmartNIC 250, a virtualized execution environment (e.g., VM 206) can write a tail pointer to a doorbell register using interface 252. Interface 252 can be compatible with PCIe in some examples, although other interfaces can be used. In some examples, the virtualized execution environment specifies a table key with a storage command for SmartNIC 250 to identify relevant permissions and target drive format. For example, a table entry can specify {FType[1:0], PF[2:0], ID[11:0], NSID[11:0]}. FType can identify a source of a storage transaction (e.g., PF or VF). PF can indicate a physical function number. ID can identify a unique NSID scope for a queue. NSID can represent an NSID for a storage transaction.

SmartNIC 250 can include a remote storage transaction circuit 254 to enable virtualized execution environments to define NSIDs. Remote storage transaction circuit 254 can use a look-up table (e.g., content addressable memory (CAM) or hash table) to access content from one or multiple linked tables to determine whether a storage command from a requester is permitted or not and whether the storage command is permitted or not to be issued to the target media device. Table(s) can be configured by control plane software when a virtualized execution environment is installed on host 200 or migrated from another compute node to host 200. In some examples, a scheme described with respect to FIG. 3 can be used to determine whether a storage command is permitted or not and provide a target media format if the storage command is permitted.

Note that in some examples, an interface can refer to one or more of: a network interface controller, network interface card, smart network interface, fabric interface, interface to an interconnect or bus, and so forth.

In some examples, a table key, provided with a storage transaction, can be converted into a pointer to an entry in a first table.
The entry in the first table can indicate what permissions, if any, are afforded the issuer of the storage command and whether the storage command is permitted or declined. If the storage command is permitted, the entry can also refer to an entry in a second table. The entry in the second table can indicate what permissions, if any, are given at the target media and whether the storage command is permitted or declined. If the storage command is permitted to be issued by the requester and permitted to access the media at addresses corresponding to an NSID provided with the table key, the target media format is provided for use and SmartNIC 250 generates and transmits a packet to a destination NIC that is connected to the target storage device.

Transport layer processor 256, packet processing pipeline 258, encrypt/decrypt circuitry 260, and a port 262 can be used to form and transmit a packet with the proper headers with the storage command over a network or other connection (e.g., fabric, interconnect) to the storage device with the media associated with the NSID for the storage command. In some examples, a remote direct copy host controller 270 can support RDMA transactions with the remote storage device. Ethernet host controller 272 can be used to manage multiple communications channels involving Ethernet with host 200.

Processors 202 and memory 204 of host 200 or processors 280 and memory 282 of SmartNIC 250 can be used to handle exception paths that are not handled using remote storage transaction circuit 254, such as when permission is not granted for a storage command, or for other exception handling or processing described herein prior to packet transmission.

FIG. 3 depicts an example of determination of whether there is permission for a requester to access a particular address region associated with an NSID.
Various embodiments provide a layer of indirection between a virtualized execution environment (which could be migrated) and access permissions (e.g., read or write) and target media access information. Indirection can be provided using look-up operations configured for a particular host platform, NIC, or SmartNIC. As described earlier, a source NIC or SmartNIC can use an NSID and format of a target drive in connection with an NVMe-oF transaction. Tables 304 and 306 can be configured by control plane software when a new virtual execution environment is assigned to a host.

A table key can have a format of {FType[1:0], PF[2:0], ID[11:0], NSID[11:0]}. The table key can index a much larger table than what is implemented, to support multiple NSID scopes. When SR-IOV is used, the table key includes the virtual function and physical function identifier numbers so that a function (virtual or physical) has its own NSID scope. When SIOV is used, the table key supports using a Submission Queue ID (SQID) to provide a unique NSID scope for a queue. For example, FType can have the following values.

Conversion 302 of the table key to a pointer into NSID attachment lookup table 304 can convert a table key to a pointer to an entry in table 304. For example, a CAM or hash table can be used to generate the pointer based on the table key value. In some examples, the pointer is 12 bits, but other sizes can be used.

NSID attachment lookup table 304 can use the pointer to identify permission per source or requester and a primary NSID (pNSID) value assigned to an NSID. NSID attachment lookup table 304 attaches a driver's assigned NSID to an internally assigned pNSID. In some examples, multiple entries in table 304 can be associated with the same pNSID. For example, multiple NSIDs used by different functions may point to the same namespace used as a boot partition.
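The table key layout and key-to-pointer conversion described above can be sketched as follows. This is an illustrative sketch only: the field widths come from the {FType[1:0], PF[2:0], ID[11:0], NSID[11:0]} format in the text, but the function names and the hash-based pointer derivation (standing in for the CAM or hash table) are assumptions, not the patent's implementation.

```python
# Sketch of packing the {FType[1:0], PF[2:0], ID[11:0], NSID[11:0]} table key
# and deriving a 12-bit pointer into the NSID attachment lookup table.
# Names and the hash-based derivation are illustrative assumptions.

def pack_table_key(ftype: int, pf: int, qid: int, nsid: int) -> int:
    """Pack the four fields into a single 29-bit key (widths from the example)."""
    assert 0 <= ftype < 4 and 0 <= pf < 8 and 0 <= qid < 4096 and 0 <= nsid < 4096
    return (ftype << 27) | (pf << 24) | (qid << 12) | nsid

def key_to_pointer(key: int, table_size: int = 4096) -> int:
    """Derive a 12-bit pointer into the attachment table; a Python hash stands
    in for the CAM or hash table mentioned in the text."""
    return hash(key) % table_size
```

Because the 29-bit key space is much larger than a 12-bit table, the derivation illustrates why the text notes that the key "can index a much larger table than what is implemented."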
Permission per source can identify properties of a storage drive that are unique to a virtualized execution environment and function queue group. Permission per source can refer to configuration settings for the namespace, such as read and write enable (Enable), Read Enable, or Write Enable. The following provides an example of an output from NSID attachment lookup table 304 in response to a received pointer.

The pNSID can be used as a pointer to Primary Namespace ID (pNSID) lookup table 306. pNSID lookup table 306 can store configuration information about the logical blocks assigned to a pNSID and indicate permissions per target media device. By contrast, NSID attachment lookup table 304 can indicate permissions for a requester. For a pNSID value, table 306 provides a drive format table (e.g., metadata size, sector or block format, end-to-end protection format, encryption enabled/disabled, target (e.g., software queue or hardware offload), and so forth) as well as target permissions (e.g., read and write enabled, read enabled, write enabled). In other embodiments, a single table can be used to indicate requester and target permissions, instead of using two lookups.

In some cases, no match of the pointer derived from the table key is found in table 304, and an exception path can be followed for special handling by the host or NIC. In some examples, some pNSID values are associated with special handling or exceptions that can be performed by a target such as a SmartNIC's processor or the host. For example, exceptions can occur when storage commands for some pNSID values are to be transported using a protocol not supported by the SmartNIC, or are to use encryption or compression, and operations are performed using a target such as the host or processor-executed software at the NIC.

For example, if a media is a boot drive and is shared by 9 VMs (with different table keys), all 9 VMs can be assigned the same pNSID and access a same entry in table 306.
However, table 304 can indicate whether a particular VM is able to read from or write to the target media at the addresses associated with the NSID. Table 304 provides PF- or VF-specific read and write permissions whereas table 306 has pNSID-level permissions and provides a manner to further restrict access privileges for all PFs or VFs accessing a shared target media beyond the PF- or VF-specific settings.

FIG. 4 depicts an example of command execution in connection with a storage transaction. Actions 402 and 404 can be performed by a host system that supports remote storage IO (input output) transactions from a requester. In some examples, the host and its NIC can support SR-IOV or S-IOV. At 402, an IO submission is received at a queue from a requester. An IO submission can indicate that a queue was written-to and a doorbell is written-to with a write tail pointer. A requester can be one or more of a virtualized execution environment, driver, operating system, or its application.

At 404, a queue is selected from multiple queues from which to execute a next IO transaction. Selection of a queue can be made based on applicable Quality of Service (QoS) whereby certain queues may have priority over other queues, or the selection of a transaction from queues is based on a round robin or weighted round robin scheme. A memory resource can be selected to store the command. For example, a memory resource can be SRAM used to store a storage command. A storage command can be one or more of a read, write, admin, or exception and have an associated NSID.

At 406, the NIC fetches a submission queue entry (SQE) over an interface. For example, the interface can be compatible with PCIe. At 408, the NIC parses the storage command from the memory resource to determine its action. At 410, the NIC looks up the requester's permission rights to determine if the requester is permitted to read from or write to the storage region associated with the NSID.
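The two-stage permission model described above (per-requester permissions and a pNSID in table 304, per-target permissions and drive format in table 306) might be sketched as below. Field and function names are illustrative assumptions; the tables are modeled as plain dictionaries.

```python
# Minimal sketch of the two-level lookup: table 304 yields per-requester
# permissions plus a pNSID, and table 306 yields per-target permissions
# plus the drive format. All names are illustrative, not the implementation.

from dataclasses import dataclass, field

@dataclass
class AttachmentEntry:          # table 304: per-requester (PF/VF-specific)
    read_enable: bool
    write_enable: bool
    pnsid: int

@dataclass
class PNSIDEntry:               # table 306: per-target media device
    read_enable: bool
    write_enable: bool
    drive_format: dict = field(default_factory=dict)  # sector size, E2E, etc.

def check_permission(table304, table306, pointer, is_write):
    entry = table304.get(pointer)
    if entry is None:
        return "exception"      # no match: host/NIC exception path (420)
    if (is_write and not entry.write_enable) or \
       (not is_write and not entry.read_enable):
        return "requester_denied"
    target = table306[entry.pnsid]
    if (is_write and not target.write_enable) or \
       (not is_write and not target.read_enable):
        return "target_denied"
    return "allowed"            # proceed to packet formation with drive format
```

Note how a shared boot namespace maps naturally onto this structure: several table-304 entries (one per PF/VF) can carry the same pnsid, while each entry's own read/write bits restrict what that particular function may do.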
A table key can be converted to a pointer to a table that identifies requester permissions. The NIC also provides an identifier for a look-up of permission at the target media device. The process continues to 412. However, a permission violation can occur where the requested operation is not permitted, and the host or NIC handles such exceptions at 420. In some cases, where there is no indication of whether the requester and its command has permission, the host or NIC can handle the situation at 420.

At 412, for a command from a requester that is permitted to be performed, the NIC performs a primary namespace lookup for permission at the target media device. A table can be accessed to determine if the permission is granted using an identifier (e.g., pNSID) from a prior table. If there is permission to proceed, the process proceeds to 414. However, a permission violation can occur where the requested operation is not permitted at the target, and the host or NIC handles such exceptions at 420.

At 414, packet formation and transmission are performed. A drive format table (e.g., metadata size, sector or block format, end-to-end protection format, encryption enabled/disabled, and so forth) is provided for use in connection with a packet transmission of the storage command. In some cases, the NIC performs packet formation by copying the payload or content to transmit (e.g., using direct memory access (DMA)) and sending the packet to the destination NIC that can receive storage commands for the target media. In some examples, the NIC can perform encryption of contents of packets prior to transmission. In some cases, the host handles payload fetch and packet formation and instructs the NIC to transmit the packet. Any combination of use of the NIC or host for packet formation and transmission can be used.

At 420, exception handling can be performed.
For example, exceptions can occur when storage commands for some combinations of NSID and requesters are to be transported using a protocol not supported by a NIC, or are to use encryption, compression, or other operations performed by a specified target such as the host or processor-executed software at the NIC. For example, a table can indicate use of a specific target based on a key or pointer (e.g., pNSID value). In some cases, an exception occurs where a requested action is not permitted and in such case, the host can potentially alert an administrator and check if the requester is malicious. In some cases, a requester and its storage transaction are not identified and in such case, the host and/or NIC can issue an error message to an administrator or determine if permission should be granted.

FIG. 5 depicts examples of various source and destination environments associated with migration of a virtualized execution environment. Migration of a virtualized execution environment can occur in a variety of circumstances, for example: a server malfunctioning, workload load balancing, server maintenance, or other reasons. A hypervisor and/or orchestrator can manage the virtualized execution environment migration. Control plane software can manage transfer of parameters used to maintain permission rights and target drive format at the next platform and/or NIC to provide continuity of use of the same NSID for storage transactions. The control plane can execute on any platform connected to the source and destination host and NIC.

At 500, a virtualized execution environment can be migrated from use of a first NIC to use of a second NIC for remote storage transactions. In connection with a change to use the second NIC, NSID-related access permissions (for requesters and target NSID) and target drive format are shared and used by the second NIC for remote storage transactions.
The second NIC can utilize a similar per-requester and per-target permission scheme as that of the first NIC but the FType, PF number or ID (or other parameters) could change while retaining support for use of the same NSID at the second NIC. In other words, the destination platform (e.g., host and/or second NIC) could assign a different FType, PF number or ID (or other parameters) but the NSID and format of target drive are preserved for use at the second NIC. In some examples, both the first and second NICs use the conversion format of FIG. 3 to determine access permissions and target drive format for a particular storage transaction.

At 525, a virtualized execution environment can be migrated from use of a first host to use of a second NIC for remote storage transactions. In this example, the first host uses a CPU to generate packets for a remote storage transaction (e.g., NVMe-oF) for the virtualized execution environment. After migration, the virtualized execution environment can use the second NIC for packet formation for remote storage transactions instead of its host system. The second NIC can utilize a similar per-requester and per-target permission scheme as that of the first host but the FType, PF number or ID (or other parameters) could change while retaining support for use of the same NSID at the second NIC. In other words, the destination platform (e.g., second NIC and/or associated host that runs the virtualized execution environment) could assign a different FType, PF number or ID (or other parameters) but the NSID and format of target drive are preserved for use at the second NIC.

At 550, a virtualized execution environment can be migrated from use of a first host to use of a second host for remote storage transactions. The second host can be a different platform (e.g., different server, rack, or data center) than that of the first host.
In some cases, the second host can be the same platform (e.g., same server, rack, or data center) as that of the first host but a different CPU core or different CPU node. In this example, the first host uses a CPU to generate packets for a remote storage transaction (e.g., NVMe-oF) for the virtualized execution environment. After migration, the virtualized execution environment can use a different CPU to generate packets for a remote storage transaction (e.g., NVMe-oF) for the virtualized execution environment. The second host can utilize a similar per-requester and per-target permission scheme as that of the first host but the FType, PF number or ID (or other parameters) could change while retaining support for use of the same NSID at the second host. In other words, the destination platform (e.g., second host that runs the virtualized execution environment) could assign a different FType, PF number or ID (or other parameters) but the NSID and format of target drive are preserved for use at the second host.

At 575, a virtualized execution environment can be migrated from use of a first NIC to use of a second host for remote storage transactions. The second host can be a different platform (e.g., different server, rack, or data center) than that of the first host. In some cases, the second host can be the same platform (e.g., same server, rack, or data center) as that of the first host but a different CPU core or different CPU node. In this example, the first NIC generates packets for a remote storage transaction (e.g., NVMe-oF) for a virtualized execution environment. After migration, the virtualized execution environment can use a CPU to generate packets for a remote storage transaction (e.g., NVMe-oF) for the virtualized execution environment.
The second host can utilize a similar per-requester and per-target permission scheme as that of the first NIC (and its host) but the FType, PF number or ID (or other parameters) could change while retaining support for use of the same NSID at the second host. In other words, the destination platform (e.g., second host that runs the virtualized execution environment) could assign a different FType, PF number or ID (or other parameters) but the NSID and format of target drive are preserved for use at the second host.

FIG. 6 depicts a process for a migration of a virtualized execution environment. The process can be performed by any or a combination of a hypervisor, orchestrator, virtual machine manager, driver, or control plane software. At 602, a stall point is set at a command in a queue associated with a virtual execution environment. The stall point can be set at the most recently received command or other command. At 604, commands in the queue are permitted to execute until reaching the stall point command. In other words, the queue is drained until reaching the stall point. New commands can be added to the queue but those commands are not executed until after migration.

At 606, states of the queue are migrated to the destination device. For example, state (e.g., head pointer position, tail pointer position) and unexecuted storage commands can be migrated to the destination NIC for execution. As described earlier, an originating device from which a virtualized execution environment is migrated can use a host and/or NIC to generate packets for remote storage transactions, whereas the destination device that is to run the migrated virtualized execution environment can use a host and/or NIC to generate packets for remote storage transactions.
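The queue-drain steps of FIG. 6 (set a stall point, drain up to it, then hand off residual state) can be sketched as below. This is an illustrative sketch under assumed names; the real queue state (head/tail pointers in device registers) is simplified to a Python deque.

```python
# Illustrative sketch of steps 602-606: stall point, drain, state transfer.
# The queue model and returned state dictionary are assumptions.

from collections import deque

def migrate_queue(queue: deque):
    """602: set a stall point at the most recently received command;
    604: drain (execute) commands up to the stall point;
    606: return residual queue state for transfer to the destination device."""
    stall_point = len(queue)                 # stall at most recent command
    executed = [queue.popleft() for _ in range(stall_point)]
    # Commands appended after the stall point was set remain in `queue`,
    # unexecuted; they migrate along with the head/tail positions.
    state = {"unexecuted": list(queue), "head": 0, "tail": len(queue)}
    return executed, state
```

On the destination platform, the returned state would be installed into a fresh queue so that execution resumes exactly where the source left off.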
Accordingly, the queue states and commands from the prior platform are available on the next platform so that the commands can execute on the next platform.

At 608, NSID-related access permissions and target drive format for remote storage transactions are shared with the destination platform. If a look-up scheme involving source permission and target permission is used, as described with respect to FIG. 3, permissions and exclusions (e.g., requester and target), exceptions, and format of target media referenced by the NSID for one or more NSIDs are unchanged despite use of the destination platform and potential changes to an allocated PF identifier, VF identifier, or submission queue identifier at the destination platform.

At 610, migrated commands can be performed at the destination device to continue performance of storage commands. Accordingly, various embodiments can provide continued use of namespaces for remote storage transactions to allow migration of a virtualized execution environment without interrupting storage transactions to the same namespace.

FIG. 7 depicts a system. The system can use embodiments described herein to allow migration of a virtual execution environment to another processor or network interface with an NVMe-oF offload engine while maintaining at least access rights/denials, sector or block format information, and NSIDs assigned to the migrated virtual execution environment. System 700 includes processor 710, which provides processing, operation management, and execution of instructions for system 700. Processor 710 can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware to provide processing for system 700, or a combination of processors.
Processor 710 controls the overall operation of system 700, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.

In one example, system 700 includes interface 712 coupled to processor 710, which can represent a higher speed interface or a high throughput interface for system components that need higher bandwidth connections, such as memory subsystem 720 or graphics interface components 740, or accelerators 742. Interface 712 represents an interface circuit, which can be a standalone component or integrated onto a processor die. Where present, graphics interface 740 interfaces to graphics components for providing a visual display to a user of system 700. In one example, graphics interface 740 can drive a high definition (HD) display that provides an output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater and can include formats such as full HD (e.g., 1080p), retina displays, 4K (ultra-high definition or UHD), or others. In one example, the display can include a touchscreen display. In one example, graphics interface 740 generates a display based on data stored in memory 730 or based on operations executed by processor 710 or both.

Accelerators 742 can be a fixed function or programmable offload engine that can be accessed or used by processor 710. For example, an accelerator among accelerators 742 can provide compression (DC) capability, cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services.
In some embodiments, in addition or alternatively, an accelerator among accelerators 742 provides field select controller capabilities as described herein. In some cases, accelerators 742 can be integrated into a CPU socket (e.g., a connector to a motherboard or circuit board that includes a CPU and provides an electrical interface with the CPU). For example, accelerators 742 can include a single or multi-core processor, graphics processing unit, logical execution unit, single or multi-level cache, functional units usable to independently execute programs or threads, application specific integrated circuits (ASICs), neural network processors (NNPs), programmable control logic, and programmable processing elements such as field programmable gate arrays (FPGAs) or programmable logic devices (PLDs). Accelerators 742 can provide multiple neural networks, CPUs, processor cores, general purpose graphics processing units, or graphics processing units for use by artificial intelligence (AI) or machine learning (ML) models. For example, the AI model can use or include any or a combination of: a reinforcement learning scheme, Q-learning scheme, deep-Q learning, or Asynchronous Advantage Actor-Critic (A3C), combinatorial neural network, recurrent combinatorial neural network, or other AI or ML model.

Memory subsystem 720 represents the main memory of system 700 and provides storage for code to be executed by processor 710, or data values to be used in executing a routine. Memory subsystem 720 can include one or more memory devices 730 such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, or other memory devices, or a combination of such devices.
Memory 730 stores and hosts, among other things, operating system (OS) 732 to provide a software platform for execution of instructions in system 700. Additionally, applications 734 can execute on the software platform of OS 732 from memory 730. Applications 734 represent programs that have their own operational logic to perform execution of one or more functions. Processes 736 represent agents or routines that provide auxiliary functions to OS 732 or one or more applications 734 or a combination. OS 732, applications 734, and processes 736 provide software logic to provide functions for system 700. In one example, memory subsystem 720 includes memory controller 722, which is a memory controller to generate and issue commands to memory 730. It will be understood that memory controller 722 could be a physical part of processor 710 or a physical part of interface 712. For example, memory controller 722 can be an integrated memory controller, integrated onto a circuit with processor 710.

While not specifically illustrated, it will be understood that system 700 can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components. Buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry or a combination. Buses can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (Firewire).

In one example, system 700 includes interface 714, which can be coupled to interface 712.
In one example, interface 714 represents an interface circuit, which can include standalone components and integrated circuitry. In one example, multiple user interface components or peripheral components, or both, couple to interface 714. Network interface 750 provides system 700 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. Network interface 750 can include an Ethernet adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface 750 can transmit data to a device that is in the same data center or rack or to a remote device, which can include sending data stored in memory. Network interface 750 can receive data from a remote device, which can include storing received data into memory. Various embodiments can be used in connection with network interface 750, processor 710, and memory subsystem 720. Various embodiments of network interface 750 use embodiments described herein to receive or transmit timing related signals and provide protection against circuit damage from misconfigured port use while providing acceptable propagation delay.

In one example, system 700 includes one or more input/output (I/O) interface(s) 760. I/O interface 760 can include one or more interface components through which a user interacts with system 700 (e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface 770 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 700. A dependent connection is one where system 700 provides the software platform or hardware platform or both on which operation executes, and with which a user interacts.

In one example, system 700 includes storage subsystem 780 to store data in a nonvolatile manner.
In one example, in certain system implementations, at least certain components of storage 780 can overlap with components of memory subsystem 720. Storage subsystem 780 includes storage device(s) 784, which can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage 784 holds code or instructions and data 786 in a persistent state (i.e., the value is retained despite interruption of power to system 700). Storage 784 can be generically considered to be a "memory," although memory 730 is typically the executing or operating memory to provide instructions to processor 710. Whereas storage 784 is nonvolatile, memory 730 can include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 700). In one example, storage subsystem 780 includes controller 782 to interface with storage 784. In one example, controller 782 is a physical part of interface 714 or processor 710 or can include circuits or logic in both processor 710 and interface 714.

A volatile memory is memory whose state (and therefore the data stored in it) is indeterminate if power is interrupted to the device. Dynamic volatile memory uses refreshing of the data stored in the device to maintain state. One example of dynamic volatile memory includes DRAM (Dynamic Random Access Memory), or some variant such as Synchronous DRAM (SDRAM). A memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (Double Data Rate version 3, original release by JEDEC (Joint Electronic Device Engineering Council) on June 27, 2007),
DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), DDR4E (DDR version 4), LPDDR3 (Low Power DDR version 3, JESD209-3B, August 2013 by JEDEC), LPDDR4 (LPDDR version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide Input/Output version 2, JESD229-2, originally published by JEDEC in August 2014), HBM (High Bandwidth Memory, JESD235, originally published by JEDEC in October 2013), LPDDR5 (currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), or others or combinations of memory technologies, and technologies based on derivatives or extensions of such specifications. The JEDEC standards are available at www.jedec.org.

A non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable memory device, such as NAND technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell ("SLC"), Multi-Level Cell ("MLC"), Quad-Level Cell ("QLC"), Tri-Level Cell ("TLC"), or some other NAND).
A NVM device can also comprise a byte-addressable write-in-place three dimensional cross point memory device, or other byte addressable write-in-place NVM device (also referred to as persistent memory), such as single or multi-level Phase Change Memory (PCM) or phase change memory with a switch (PCMS), NVM devices that use chalcogenide phase change material (for example, chalcogenide glass), resistive memory including metal oxide base, oxygen vacancy base and Conductive Bridge Random Access Memory (CB-RAM), nanowire memory, ferroelectric random access memory (FeRAM, FRAM), magneto resistive random access memory (MRAM) that incorporates memristor technology, spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.

A power source (not depicted) provides power to the components of system 700. More specifically, the power source typically interfaces to one or multiple power supplies in system 700 to provide power to the components of system 700. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy (e.g., solar power) power source. In one example, power source includes a DC power source, such as an external AC to DC converter. In one example, power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source.

In an example, system 700 can be implemented using interconnected compute sleds of processors, memories, storages, network interfaces, and other components.
High speed interconnects can be used such as: Ethernet (IEEE 802.3), remote direct memory access (RDMA), InfiniBand, Internet Wide Area RDMA Protocol (iWARP), quick UDP Internet Connections (QUIC), RDMA over Converged Ethernet (RoCE), Peripheral Component Interconnect express (PCIe), Intel QuickPath Interconnect (QPI), Intel Ultra Path Interconnect (UPI), Intel On-Chip System Fabric (IOSF), Omnipath, Compute Express Link (CXL), HyperTransport, high-speed fabric, NVLink, Advanced Microcontroller Bus Architecture (AMBA) interconnect, OpenCAPI, Gen-Z, Cache Coherent Interconnect for Accelerators (CCIX), 3GPP Long Term Evolution (LTE) (4G), 3GPP 5G, and variations thereof. Data can be copied or stored to virtualized storage nodes using a protocol such as NVMe over Fabrics (NVMe-oF) or NVMe.

Embodiments herein may be implemented in various types of computing and networking equipment, such as switches, routers, racks, and blade servers such as those employed in a data center and/or server farm environment. The servers used in data centers and server farms comprise arrayed server configurations such as rack-based servers or blade servers. These servers are interconnected in communication via various network provisions, such as partitioning sets of servers into Local Area Networks (LANs) with appropriate switching and routing facilities between the LANs to form a private Intranet, part of the Internet, public cloud, private cloud, or hybrid cloud. For example, cloud hosting facilities may typically employ large data centers with a multitude of servers. A blade comprises a separate computing platform that is configured to perform server-type functions, that is, a "server on a card." Accordingly, each blade includes components common to conventional servers, including a main printed circuit board (main board) providing internal wiring (i.e., buses) for coupling appropriate integrated circuits (ICs) and other components mounted to the board.

FIG.
8 depicts an environment 800 that includes multiple computing racks 802, each including a Top of Rack (ToR) switch 804, a pod manager 806, and a plurality of pooled system drawers. Generally, the pooled system drawers may include pooled compute drawers and pooled storage drawers. Optionally, the pooled system drawers may also include pooled memory drawers and pooled Input/Output (I/O) drawers. In the illustrated embodiment the pooled system drawers include an Intel® Xeon® processor pooled compute drawer 808, an Intel® ATOM™ processor pooled compute drawer 810, a pooled storage drawer 812, a pooled memory drawer 814, and a pooled I/O drawer 816. Each of the pooled system drawers is connected to ToR switch 804 via a high-speed link 818, such as a 40 Gigabit/second (Gb/s) or 100 Gb/s Ethernet link or a 100+ Gb/s Silicon Photonics (SiPh) optical link. In one embodiment high-speed link 818 comprises an 800 Gb/s SiPh optical link.

FIG. 9 depicts a network interface that can use embodiments or be used by embodiments. Various processors of network interface 900 can use techniques described herein to support determination of permissions and target sector or block format despite virtualized execution environment migration, and provide packet formation and transmission of packets for remote storage transactions including NVMe-oF. For example, if a first core of processors 904 performs packet processing and a second core of processors 904 performs a power management process, the second core can modify operating parameters of the first core in accordance with embodiments described herein.

Network interface 900 can include transceiver 902, processors 904, transmit queue 906, receive queue 908, memory 910, bus interface 912, and DMA engine 926. Transceiver 902 can be capable of receiving and transmitting packets in conformance with the applicable protocols such as Ethernet as described in IEEE 802.3, although other protocols may be used.
Transceiver 902 can receive and transmit packets from and to a network via a network medium (not depicted). Transceiver 902 can include physical layer (PHY) circuitry 914 and media access control (MAC) circuitry 916. PHY circuitry 914 can include encoding and decoding circuitry (not shown) to encode and decode data packets according to applicable physical layer specifications or standards. MAC circuitry 916 can be configured to assemble data to be transmitted into packets that include destination and source addresses along with network control information and error detection hash values. MAC circuitry 916 can be configured to process MAC headers of received packets by verifying data integrity, removing preambles and padding, and providing packet content for processing by higher layers.

Processors 904 can be any combination of a CPU, core, graphics processing unit (GPU), field programmable gate array (FPGA), application specific integrated circuit (ASIC), or programmable or fixed function hardware device that allows programming of network interface 900. For example, processors 904 can provide for allocation or deallocation of intermediate queues. For example, a "smart network interface" can provide packet processing capabilities in the network interface using processors 904.

Packet allocator 924 can provide distribution of received packets for processing by multiple CPUs or cores using timeslot allocation described herein or receive side scaling (RSS). When packet allocator 924 uses RSS, packet allocator 924 can calculate a hash or make another determination based on contents of a received packet to determine which CPU or core is to process a packet.

Interrupt coalesce 922 can perform interrupt moderation whereby interrupt coalesce 922 waits for multiple packets to arrive, or for a time-out to expire, before generating an interrupt to the host system to process received packet(s).
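The hash-based core selection performed by a packet allocator such as 924 can be sketched as below. This is a hedged illustration: real RSS implementations typically use a Toeplitz hash over the flow tuple, whereas this sketch substitutes CRC32 purely to show the flow-to-core mapping property.

```python
# Sketch of hash-based receive distribution: a hash over the flow tuple
# selects the core, so packets of one flow always land on the same CPU.
# zlib.crc32 stands in for the Toeplitz hash used by real RSS hardware.

import zlib

def select_core(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                num_cores: int) -> int:
    """Map a flow tuple to a core index in [0, num_cores)."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(flow) % num_cores
```

The key property is determinism per flow: every packet of a given connection hashes to the same core, which preserves packet ordering and cache locality for that flow's state.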
Receive Segment Coalescing (RSC) can be performed by network interface 900 whereby portions of incoming packets are combined into segments of a packet. Network interface 900 provides this coalesced packet to an application.

Direct memory access (DMA) engine 926 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer.

Memory 910 can be any type of volatile or non-volatile memory device and can store any queue or instructions used to program network interface 900. Transmit queue 906 can include data or references to data for transmission by the network interface. Receive queue 908 can include data or references to data that was received by the network interface from a network. Descriptor queues 920 can include descriptors that reference data or packets in transmit queue 906 or receive queue 908. Bus interface 912 can provide an interface with a host device (not depicted).
For example, bus interface 912 can be compatible with Peripheral Component Interconnect (PCI), PCI Express, PCI-x, Serial ATA (SATA), and/or Universal Serial Bus (USB) (although other interconnection standards may be used).

In some examples, the network interface and other embodiments described herein can be used in connection with a base station (e.g., 3G, 4G, 5G, and so forth), macro base station (e.g., 5G networks), picostation (e.g., an IEEE 802.11 compatible access point), nanostation (e.g., for Point-to-MultiPoint (PtMP) applications), on-premises data centers, off-premises data centers, edge network elements, fog network elements, and/or hybrid data centers (e.g., data centers that use virtualization, cloud, and software-defined networking to deliver application workloads across physical data centers and distributed multi-cloud environments).

Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given implementation. A processor can be one or more combinations of a hardware state machine, digital control logic, central processing unit, or any hardware, firmware, and/or software elements.

Some examples may be implemented using or as an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.

According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device, or system, cause the machine, computing device, or system to perform methods and/or operations in accordance with the described examples.
The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner, or syntax, for instructing a machine, computing device, or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language.

One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device, or system causes the machine, computing device, or system to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

The appearances of the phrase "one example" or "an example" are not necessarily all referring to the same example or embodiment. Any aspect described herein can be combined with any other aspect or similar aspect described herein, regardless of whether the aspects are described with respect to the same figure or element. Division, omission, or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software, and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments.

Some examples may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other.
For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, yet still co-operate or interact with each other.

The terms "first," "second," and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. The term "asserted" used herein with reference to a signal denotes a state of the signal in which the signal is active, which can be achieved by applying any logic level, either logic 0 or logic 1, to the signal. The terms "follow" or "after" can refer to immediately following or following after some other event or events. Other sequences of steps may also be performed according to alternative embodiments. Furthermore, additional steps may be added or removed depending on the particular application. Any combination of changes can be used, and one of ordinary skill in the art with the benefit of this disclosure would understand the many variations, modifications, and alternative embodiments thereof.

Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Additionally, conjunctive language such as the phrase "at least one of X, Y, and Z," unless specifically stated otherwise, should also be understood to mean X, Y, Z, or any combination thereof, including "X, Y, and/or Z."

Illustrative examples of the devices, systems, and methods disclosed herein are provided below. An embodiment of the devices, systems, and methods may include any one or more, and any combination of, the examples described below.

Example 1 includes an apparatus comprising: an interface comprising circuitry and logic, the interface to: generate packets for storage transactions using a transport protocol and, in connection with commencement of a virtual execution environment on a second computing platform, provide capability of a first computing platform at the second computing platform for the virtual execution environment to continue storage transactions and maintain use of the same namespace identifiers (NSIDs).

Example 2 includes any example, wherein the interface is to perform one or more of: migrate parameters associated with storage transaction permissions to the second computing platform or migrate a format of a target media drive.

Example 3 includes any example, wherein permissions comprise one or more of: per-requester permission or per-target media permission.

Example 4 includes any example, wherein per-requester permission comprises one or more of: read enabled, write enabled, or read and write enabled, and per-target media permission comprises one or more of: read enabled, write enabled, or read and write enabled.

Example 5 includes any example, wherein a format of the target media drive comprises one or more of: sector or block format, read or write enablement, or end-to-end protection.

Example 6 includes any example, wherein the interface is to: prior to migration of the virtual execution environment to the second computing platform: execute at least one received storage command and identify unexecuted commands for migration to the second computing
platform.

Example 7 includes any example, wherein the commencement is initiated based on one or more of: virtual execution environment migration, server maintenance, or load balancing.

Example 8 includes any example, wherein the virtual execution environment is to request a storage transaction that is translated to a transaction over a transport protocol.

Example 9 includes any example, wherein to translate the storage transaction to a transaction over a transport protocol, the virtual execution environment is to execute a driver that supports storage transactions using Non-Volatile Memory Express (NVMe).

Example 10 includes any example, and including one or more of: a server, data center, or rack.

Example 11 includes an apparatus comprising: a computing system comprising at least one processor and at least one memory device and an interface to: determine access rights in response to requested access by a requester to a namespace identifier associated with a target media, wherein the access rights for the requester and namespace identifier are independent of an identifier of the requester.

Example 12 includes any example, wherein after migration of the requester to another computing system or network interface, one or more of the computing system or network interface are to apply the same access rights for the requester as were applied before the migration based on received parameters and independent of an identifier of the requester after migration.

Example 13 includes any example, wherein the access rights comprise one or more of: read and write enabled, read enabled, or write enabled.

Example 14 includes any example, wherein the access rights comprise one or more of: access rights based on a requester of a storage transaction or access rights based on a target storage device.

Example 15 includes any example, wherein the interface is to provide a target media format for the requester and namespace identifier independent of the identifier of the requester and the target
media format comprises one or more of: sector or block format, read or write enablement, or end-to-end protection.

Example 16 includes any example, wherein the interface is to receive unexecuted storage commands associated with the requester generated on a prior platform and the interface is to store the unexecuted storage commands for execution.

Example 17 includes a computer-implemented method comprising: migrating a virtualized execution environment from a first platform to a second platform while retaining use of a namespace identifier and permitting issuance of storage transactions by the virtualized execution environment by use of the namespace identifier.

Example 18 includes any example, and includes retaining access permissions and target media format independent of one or more identifiers associated with the migrated virtualized execution environment at the second platform.

Example 19 includes any example, wherein permitting issuance of storage transactions by the virtualized execution environment comprises: performing a storage transaction in a queue associated with the virtualized execution environment and migrating an unperformed storage transaction to a queue in the second platform.

Example 20 includes any example, and includes executing the unperformed storage transaction using the second platform.
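The migration pattern the examples above describe can be sketched in software. This is a minimal illustration under assumed names (`NamespacePolicy`, `PlatformInterface`), not the NVMe-over-transport implementation itself: access rights and target media format are keyed by NSID alone, so moving the policy table and any unexecuted commands to the second platform preserves behavior without reference to any requester identifier.

```python
from dataclasses import dataclass, field

@dataclass
class NamespacePolicy:
    """Access rights and media format for one namespace identifier (NSID),
    deliberately keyed without any requester identifier so they survive
    migration of the virtual execution environment unchanged."""
    read_enabled: bool
    write_enabled: bool
    block_size: int            # target media format: sector/block size

@dataclass
class PlatformInterface:
    policies: dict = field(default_factory=dict)      # nsid -> NamespacePolicy
    pending_cmds: list = field(default_factory=list)  # unexecuted commands

    def check_access(self, nsid: int, write: bool) -> bool:
        pol = self.policies.get(nsid)
        if pol is None:
            return False
        return pol.write_enabled if write else pol.read_enabled

    def migrate_to(self, target: "PlatformInterface") -> None:
        """Carry per-NSID policies and unexecuted commands to the target
        platform; the NSIDs themselves are retained as-is."""
        target.policies.update(self.policies)
        target.pending_cmds.extend(self.pending_cmds)
        self.pending_cmds.clear()

src = PlatformInterface()
src.policies[7] = NamespacePolicy(read_enabled=True, write_enabled=False,
                                  block_size=4096)
src.pending_cmds.append(("read", 7, 0))   # unexecuted command to carry over
dst = PlatformInterface()
src.migrate_to(dst)
# Same rights on NSID 7 after migration, with no requester identifier involved.
assert dst.check_access(7, write=False) and not dst.check_access(7, write=True)
```

Because nothing in the policy table names the requester, the second platform can apply the identical rights the first platform enforced, which is the property Examples 11, 12, and 17 rely on.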
A low noise amplifier (LNA) device includes a first transistor on a semiconductor on insulator (SOI) layer. The first transistor includes a source region, a drain region, and a gate. The LNA device also includes a first-side gate contact coupled to the gate. The LNA device further includes a second-side source contact coupled to the source region. The LNA device also includes a second-side drain contact coupled to the drain region.
CLAIMS

What is claimed is:

1. A low noise amplifier (LNA) device, comprising:
a first transistor on a semiconductor on insulator (SOI) layer, the first transistor including a source region, a drain region, and a gate;
a first-side gate contact coupled to the gate;
a second-side source contact coupled to the source region; and
a second-side drain contact coupled to the drain region.

2. The LNA device of claim 1, in which a first-side comprises a front-side of the first transistor, and a second-side comprises a backside of the first transistor.

3. The LNA device of claim 1, in which a second-side comprises a front-side of the first transistor, and a first-side comprises a backside of the first transistor.

4. The LNA device of claim 1, in which the second-side source contact and/or the second-side drain contact comprises a silicide contact layer.

5. The LNA device of claim 1, further comprising a first-side back-end-of-line (BEOL) interconnect coupled to the first-side gate contact and arranged in a first-side dielectric layer.

6. The LNA device of claim 1, in which the first transistor further comprises:
a first via coupled to the source region through the second-side source contact, the first via extending towards a second-side dielectric layer;
a second via coupled to the drain region through the second-side drain contact, the second via extending towards the second-side dielectric layer; and
a handle substrate on a first-side dielectric layer or the second-side dielectric layer.

7. The LNA device of claim 1, further comprising at least one radio frequency (RF) component coupled to the second-side source contact and/or the second-side drain contact.

8. The LNA device of claim 7, in which the at least one RF component comprises at least one of a resistor, an inductor, a capacitor, or an antenna.

9.
The LNA device of claim 1, integrated into an RF front end module, the RF front end module incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a portable computer.

10. A method of constructing a low noise amplifier (LNA) device, comprising:
fabricating a first transistor on a first surface of an isolation layer supported by a sacrificial substrate, the first transistor comprising a gate coupled to a first-side gate contact;
depositing a first-side dielectric layer on the first transistor;
bonding a handle substrate to the first-side dielectric layer;
removing the sacrificial substrate;
exposing a second-side of a source region and a second-side of a drain region of the first transistor through a second surface opposite the first surface of the isolation layer;
depositing a second-side source contact on the second-side of the source region; and
depositing a second-side drain contact on the second-side of the drain region.

11. The method of claim 10, further comprising coupling at least one radio frequency (RF) component to the second-side source contact and/or the second-side drain contact.

12. The method of claim 11, in which the at least one RF component comprises at least one of a resistor, an inductor, a capacitor, or an antenna.

13. The method of claim 10, further comprising:
fabricating a first via coupled to the source region through the second-side source contact, the first via extending through the isolation layer and into a second-side dielectric layer supporting the isolation layer and distal from the first-side dielectric layer; and
fabricating a second via coupled to the drain region through the second-side drain contact, the second via extending through the isolation layer and into the second-side dielectric layer.

14.
The method of claim 13, further comprising fabricating a post-layer transfer metallization layer in the second-side dielectric layer and coupled to the second-side source contact and/or the second-side drain contact of the first transistor through the first via and/or the second via.

15. The method of claim 10, further comprising integrating the LNA device into an RF front end module, the RF front end module incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a portable computer.

16. A radio frequency (RF) front end module, comprising:
a low noise amplifier, comprising a first transistor on a semiconductor on insulator (SOI) layer, the first transistor including a source region, a drain region, and a gate, a first-side gate contact coupled to the gate, a second-side source contact coupled to the source region, and a second-side drain contact coupled to the drain region; and
an antenna coupled to an output of the low noise amplifier.

17. The RF front end module of claim 16, in which a first-side comprises a front-side of the first transistor, and a second-side comprises a backside of the first transistor, the second-side being distal from the first-side.

18. The RF front end module of claim 16, in which a second-side comprises a front-side of the first transistor, and a first-side comprises a backside of the first transistor, the first-side being distal from the second-side.

19.
The RF front end module of claim 16, in which the first transistor further comprises:
a first via coupled to the source region through the second-side source contact, the first via extending towards a second-side dielectric layer;
a second via coupled to the drain region through the second-side drain contact, the second via extending towards the second-side dielectric layer; and
a handle substrate on a first-side dielectric layer or the second-side dielectric layer.

20. The RF front end module of claim 16, further comprising at least one radio frequency (RF) component coupled to the second-side source contact and/or the second-side drain contact.
LOW PARASITIC CAPACITANCE LOW NOISE AMPLIFIER

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims the benefit of U.S. Patent Application No. 15/976,710, filed on May 10, 2018, entitled "LOW PARASITIC CAPACITANCE LOW NOISE AMPLIFIER," which claims the benefit of U.S. Provisional Patent Application No. 62/564,155, filed on September 27, 2017, entitled "LOW PARASITIC CAPACITANCE LOW NOISE AMPLIFIER," the disclosures of which are expressly incorporated by reference herein in their entireties.

TECHNICAL FIELD

[0002] The present disclosure generally relates to integrated circuits (ICs). More specifically, the present disclosure relates to a low parasitic capacitance low noise amplifier.

BACKGROUND

[0003] A wireless device (e.g., a cellular phone or a smartphone) in a wireless communication system may include a radio frequency (RF) transceiver to transmit and receive data for two-way communication. A mobile RF transceiver may include a transmit section for data transmission and a receive section for data reception of a communication signal. For data transmission, the transmit section may modulate an RF carrier signal with data to obtain a modulated RF signal, amplify the modulated RF signal to obtain an amplified RF signal having the proper output power level, and transmit the amplified RF signal via an antenna to a base station. For data reception, the receive section may obtain a received RF signal via the antenna. The receive section may amplify and process the received RF signal to recover data sent by a base station in a communication signal.

[0004] A mobile RF transceiver may include one or more circuits for amplifying these communication signals. The amplifier circuits may include one or more amplifier stages that may have one or more driver stages and one or more amplifier output stages. Each of the amplifier stages includes one or more transistors configured in various ways to amplify the communication signals.
Various options exist for fabricating the transistors that are configured to amplify the communication signals transmitted and received by mobile RF transceivers.

[0005] The design of these mobile RF transceivers may include the use of semiconductor on insulator (SOI) technology for transistor fabrication. SOI technology replaces conventional semiconductor substrates with a layered semiconductor-insulator-semiconductor substrate to reduce parasitic capacitance and improve performance. SOI-based devices differ from conventional, silicon-built devices because a silicon junction is above an electrical isolator, typically a buried oxide (BOX) layer. A reduced thickness of the BOX layer, however, may not sufficiently reduce the parasitic capacitance caused by the proximity of an active device on the semiconductor layer and a semiconductor substrate supporting the BOX layer.

[0006] The active devices on the SOI layer may include complementary metal oxide semiconductor (CMOS) transistors. Unfortunately, successful fabrication of transistors using SOI technology is complicated by parasitic capacitance. For example, parasitic capacitance in the form of contact/interconnect-to-gate capacitance is caused by proximity of back-end-of-line (BEOL) interconnects and/or middle-of-line (MOL) contacts and the transistor gates. This additional capacitance causes adverse effects, such as circuit delays and losses. This additional capacitance is especially problematic for low noise amplifiers (LNAs), which may prevent support for 5G applications.

SUMMARY

[0007] A low noise amplifier (LNA) device may include a first transistor on a semiconductor on insulator (SOI) layer. The first transistor may include a source region, a drain region, and a gate. The LNA device may also include a first-side gate contact coupled to the gate. The LNA device may further include a second-side source contact coupled to the source region.
The LNA device may also include a second-side drain contact coupled to the drain region.

[0008] A method of constructing a low noise amplifier (LNA) device may include fabricating a first transistor on a first surface of an isolation layer supported by a sacrificial substrate. The first transistor comprises a gate coupled to a first-side gate contact. The method may also include depositing a first-side dielectric layer on the first transistor. The method may further include bonding a handle substrate to the first-side dielectric layer. The method may also include removing the sacrificial substrate. The method may further include exposing a second-side of a source region and a second-side of a drain region of the first transistor through a second surface opposite the first surface of the isolation layer. The method may also include depositing a second-side source contact on the second-side of the source region, and depositing a second-side drain contact on the second-side of the drain region.

[0009] A radio frequency (RF) front end module may include a low noise amplifier. The low noise amplifier may include a first transistor on a semiconductor on insulator (SOI) layer. The first transistor may include a source region, a drain region, and a gate. The low noise amplifier may also include a first-side gate contact coupled to the gate, a second-side source contact coupled to the source region, and a second-side drain contact coupled to the drain region. The RF front end module may also include an antenna coupled to an output of the low noise amplifier.

[0010] This has outlined, rather broadly, the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. Additional features and advantages of the present disclosure will be described below.
It should be appreciated by those skilled in the art that the present disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.
[0012] FIGURE 1 is a schematic diagram of a wireless device having a wireless local area network module and a radio frequency (RF) front end module for a chipset.

[0013] FIGURE 2 shows a block diagram of an exemplary design of a wireless device, such as the wireless device shown in FIGURE 1.

[0014] FIGURE 3 shows a cross-sectional view of a radio frequency (RF) integrated circuit fabricated using a layer transfer process, according to aspects of the present disclosure.

[0015] FIGURE 4 is a cross-sectional view of a radio frequency (RF) integrated circuit fabricated using a layer transfer process.

[0016] FIGURE 5 illustrates routing for the source, drain, and gate contacts of the RF integrated circuit of FIGURE 4.

[0017] FIGURES 6A and 6B are cross-sectional views of an RF integrated circuit (RFIC), including a transistor of a low parasitic capacitance low noise amplifier (LNA), according to aspects of the present disclosure.

[0018] FIGURES 7A and 7B illustrate front-side routing for a low parasitic capacitance LNA, according to aspects of the present disclosure.

[0019] FIGURES 8A and 8B illustrate backside routing for a low parasitic capacitance LNA, according to aspects of the present disclosure.

[0020] FIGURE 9 is a process flow diagram illustrating a method of a backside silicidation process with layer transfer for constructing an RF integrated circuit including an LNA, according to an aspect of the present disclosure.

[0021] FIGURE 10 is a block diagram showing an exemplary wireless communication system in which an aspect of the present disclosure may be advantageously employed.

[0022] FIGURE 11 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component, such as the RF devices disclosed above.
DETAILED DESCRIPTION

[0023] The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. It will be apparent, however, to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

[0024] As described herein, the use of the term "and/or" is intended to represent an "inclusive OR", and the use of the term "or" is intended to represent an "exclusive OR". As described herein, the term "exemplary" used throughout this description means "serving as an example, instance, or illustration," and should not necessarily be construed as preferred or advantageous over other exemplary configurations. As described herein, the term "coupled" used throughout this description means "connected, whether directly or indirectly through intervening connections (e.g., a switch), electrical, mechanical, or otherwise," and is not necessarily limited to physical connections. Additionally, the connections can be such that the objects are permanently connected or releasably connected. The connections can be through switches. As described herein, the term "proximate" used throughout this description means "adjacent, very near, next to, or close to." As described herein, the term "on" used throughout this description means "directly on" in some configurations, and "indirectly on" in other configurations.

[0025] Fabricating mobile radio frequency (RF) chips (e.g., mobile RF transceivers) is complex at deep sub-micron process nodes due to cost and power consumption considerations.
A wireless device (e.g., a cellular phone or a smartphone) in a wireless communication system may include a mobile RF transceiver for transmitting and receiving data for two-way communication. A mobile RF transceiver may include a transmit section for transmitting data and a receive section for receiving data. For transmitting data, the transmit section may modulate an RF carrier signal with data to obtain a modulated RF signal. The transmit section amplifies the modulated RF signal for obtaining an amplified RF signal having the proper output power level and transmits the amplified RF signal to a base station through an antenna. For receiving data, the receive section may obtain a received RF signal via the antenna and may amplify and process the received RF signal to recover data sent by the base station in a communication signal.

[0026] A mobile RF transceiver may include one or more circuits for amplifying these communication signals. The amplifier circuits may include one or more amplifier stages that may have one or more driver stages and one or more amplifier output stages. Each of the amplifier stages includes one or more transistors configured in various ways to amplify the communication signals. Various options exist for fabricating the transistors that are configured to amplify the communication signals transmitted and received by mobile RF transceivers.

[0027] The design of these mobile RF transceivers may include semiconductor on insulator (SOI) technology for fabricating transistors. SOI technology replaces conventional semiconductor substrates with a layered semiconductor-insulator-semiconductor substrate for reducing parasitic capacitance and improving performance. SOI-based devices differ from conventional, silicon-built devices because a silicon junction is above an electrical isolator, typically a buried oxide (BOX) layer.
A reduced thickness of the BOX layer in sub-micron process nodes, however, may not sufficiently reduce the parasitic capacitance caused by the proximity of an active device on the semiconductor layer and a semiconductor substrate supporting the BOX layer.

[0028] The active devices on the SOI layer may include complementary metal oxide semiconductor (CMOS) transistors. Unfortunately, successful fabrication of transistors using SOI technology is complicated by parasitic capacitance. For example, a parasitic capacitance in the form of contact/interconnect-to-gate capacitance may be caused by a proximity between back-end-of-line (BEOL) interconnects/middle-of-line (MOL) contacts and the transistor gates. This additional capacitance causes adverse effects, such as circuit delays and losses. This additional capacitance is especially problematic for low noise amplifiers (LNAs).

[0029] Various aspects of the present disclosure provide techniques for fabricating a low parasitic capacitance LNA in an RF integrated circuit. The process flow for semiconductor fabrication of the RF integrated circuit may include front-end-of-line (FEOL) processes, middle-of-line (MOL) processes, and back-end-of-line (BEOL) processes. It will be understood that the term "layer" includes film and is not to be construed as indicating a vertical or horizontal thickness unless otherwise stated. As described herein, the term "substrate" may refer to a substrate of a diced wafer or may refer to a substrate of a wafer that is not diced. Similarly, the terms chip and die may be used interchangeably.

[0030] The middle-of-line or MOL is the set of process steps that enable connection of the transistors to the back-end-of-line or BEOL interconnects (e.g., M1, M2, etc.) using MOL contacts. As noted, parasitic capacitance in the form of contact/interconnect-to-gate capacitance is caused by proximity of the BEOL interconnects/MOL contacts and the transistor gate contacts.
This additional capacitance causes adverse effects, such as circuit delays and losses, which is especially problematic for LNAs. For example, drain-to-gate contact parasitic capacitance in LNAs is a substantial barrier to achieving 5G performance in RF mobile transceivers. A layer transfer process may reduce the additional capacitance by removing some of the routing from a front-side to a backside of an RF integrated circuit. Removing some of the routing, however, may not sufficiently reduce the parasitic capacitance.

[0031] Aspects of the present disclosure describe a backside silicidation design to reduce parasitic capacitance of a low noise amplifier (LNA) in an RF integrated circuit. One aspect of the present disclosure uses a backside silicidation process with layer transfer for forming a backside contact layer to the source/drain regions of an LNA transistor. The backside silicidation process may form a contact plug (e.g., a via) coupled to the source and drain regions of the LNA transistor through the backside contact layer. In this arrangement, a backside source contact plug and a backside drain contact plug extend through an isolation layer and into a backside dielectric layer (e.g., a second-side dielectric layer) supporting the isolation layer.

[0032] A post-layer transfer metallization process forms a backside metallization (e.g., a backside BEOL interconnect M1) coupled to the contact plug. In addition, a front-side metallization, distal from the backside metallization, may be coupled to a front-side gate contact of the gate of the LNA transistor. In this manner, the front-side interconnects (e.g., BEOL interconnects/MOL contacts) to the source and drain regions are moved to a backside of the LNA transistor. Rearrangement of the BEOL interconnects/MOL contacts may reduce the additional capacitance caused by the proximity of the BEOL interconnects/MOL contacts and the transistor gate contacts.
Although described with respect to backside source/drain contacts and front-side gate contacts, the present disclosure is not so limited. For example, backside gate contacts and front-side source/drain contacts are contemplated.

[0033] FIGURE 1 is a schematic diagram of a wireless device 100 (e.g., a cellular phone or a smartphone) having a low parasitic capacitance low noise amplifier, according to aspects of the present disclosure. The wireless device may include a wireless local area network (WLAN) (e.g., WiFi) module 150 and an RF front end module 170 for a chipset 110. The WiFi module 150 includes a first diplexer 160 communicably coupling an antenna 162 to a wireless local area network module (e.g., WLAN module 152). The RF front end module 170 includes a second diplexer 190 communicably coupling an antenna 192 to the wireless transceiver 120 (WTR) through a duplexer 180 (DUP). The wireless transceiver 120 and the WLAN module 152 of the WiFi module 150 are coupled to a modem (MSM, e.g., a baseband modem) 130 that is powered by a power supply 102 through a power management integrated circuit (PMIC) 140. The chipset 110 also includes capacitors 112 and 114, as well as an inductor(s) 116 to provide signal integrity. The PMIC 140, the modem 130, the wireless transceiver 120, and the WLAN module 152 each include capacitors (e.g., 142, 132, 122, and 154) and operate according to a clock 118. The geometry and arrangement of the various inductor and capacitor components in the chipset 110 may reduce the electromagnetic coupling between the components.

[0034] FIGURE 2 shows a block diagram of an exemplary design of a wireless device 200, such as the wireless device 100 shown in FIGURE 1, including a low parasitic capacitance low noise amplifier, according to aspects of the present disclosure. FIGURE 2 shows an example of a mobile RF transceiver 220, which may be a wireless transceiver (WTR).
In general, the conditioning of the signals in a transmitter 230 and a receiver 250 may be performed by one or more stages of amplifier(s), filter(s), upconverters, downconverters, and the like. These circuit blocks may be arranged differently from the configuration shown in FIGURE 2. Furthermore, other circuit blocks not shown in FIGURE 2 may also be used to condition the signals in the transmitter 230 and receiver 250. Unless otherwise noted, any signal in FIGURE 2, or any other figure in the drawings, may be either single-ended or differential. Some circuit blocks in FIGURE 2 may also be omitted.

[0035] In the example shown in FIGURE 2, the wireless device 200 generally includes the mobile RF transceiver 220 and a data processor 210. The data processor 210 may include a memory (not shown) to store data and program codes, and may generally include analog and digital processing elements. The mobile RF transceiver 220 may include the transmitter 230 and receiver 250 that support bi-directional communication. In general, the wireless device 200 may include any number of transmitters and/or receivers for any number of communication systems and frequency bands. All or a portion of the mobile RF transceiver 220 may be implemented on one or more analog integrated circuits (ICs), radio frequency (RF) integrated circuits (RFICs), mixed-signal ICs, and the like.

[0036] A transmitter or a receiver may be implemented with a super-heterodyne architecture or a direct-conversion architecture. In the super-heterodyne architecture, a signal is frequency-converted between radio frequency and baseband in multiple stages, for example, from radio frequency to an intermediate frequency (IF) in one stage, and then from intermediate frequency to baseband in another stage for a receiver. In the direct-conversion architecture, a signal is frequency converted between radio frequency and baseband in one stage.
The super-heterodyne and direct-conversion architectures may use different circuit blocks and/or have different requirements. In the example shown in FIGURE 2, the transmitter 230 and the receiver 250 are implemented with the direct-conversion architecture.

[0037] In a transmit path, the data processor 210 processes data to be transmitted. The data processor 210 also provides in-phase (I) and quadrature (Q) analog output signals to the transmitter 230 in the transmit path. In an exemplary aspect, the data processor 210 includes digital-to-analog-converters (DACs) 214a and 214b for converting digital signals generated by the data processor 210 into the in-phase (I) and quadrature (Q) analog output signals (e.g., I and Q output currents) for further processing.

[0038] Within the transmitter 230, lowpass filters 232a and 232b filter the in-phase (I) and quadrature (Q) analog transmit signals, respectively, to remove undesired images caused by the prior digital-to-analog conversion. Amplifiers 234a and 234b (Amp) amplify the signals from lowpass filters 232a and 232b, respectively, and provide in-phase (I) and quadrature (Q) baseband signals. Upconverters 240 include an in-phase upconverter 241a and a quadrature upconverter 241b that upconvert the in-phase (I) and quadrature (Q) baseband signals with in-phase (I) and quadrature (Q) transmit (TX) local oscillator (LO) signals from a TX LO signal generator 290 to provide upconverted signals. A filter 242 filters the upconverted signals to reduce undesired images caused by the frequency upconversion as well as interference in a receive frequency band. A power amplifier (PA) 244 amplifies the signal from filter 242 to obtain the desired output power level and provides a transmit radio frequency signal.
The transmit radio frequency signal is routed through a duplexer/switch 246 and transmitted via an antenna 248.

[0039] In a receive path, the antenna 248 receives communication signals and provides a received radio frequency (RF) signal, which is routed through the duplexer/switch 246 and provided to a low noise amplifier (LNA) 252. The duplexer/switch 246 is designed to operate with a specific receive (RX) to transmit (TX) (RX-to-TX) duplexer frequency separation, such that RX signals are isolated from TX signals. The received RF signal is amplified by the LNA 252 and filtered by a filter 254 to obtain a desired RF input signal. Downconversion mixers 261a and 261b mix the output of the filter 254 with in-phase (I) and quadrature (Q) receive (RX) LO signals (i.e., LO_I and LO_Q) from an RX LO signal generator 280 to generate in-phase (I) and quadrature (Q) baseband signals. The in-phase (I) and quadrature (Q) baseband signals are amplified by amplifiers 262a and 262b and further filtered by lowpass filters 264a and 264b to obtain in-phase (I) and quadrature (Q) analog input signals, which are provided to the data processor 210. In the exemplary configuration shown, the data processor 210 includes analog-to-digital-converters (ADCs) 216a and 216b for converting the analog input signals into digital signals for further processing by the data processor 210.

[0040] In FIGURE 2, the transmit local oscillator (TX LO) signal generator 290 generates the in-phase (I) and quadrature (Q) TX LO signals used for frequency upconversion, while a receive local oscillator (RX LO) signal generator 280 generates the in-phase (I) and quadrature (Q) RX LO signals used for frequency downconversion. Each LO signal is a periodic signal with a particular fundamental frequency. A phase locked loop (PLL) 292 receives timing information from the data processor 210 and generates a control signal used to adjust the frequency and/or phase of the TX LO signals from the TX LO signal generator 290.
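As a numerical illustration (not part of the disclosure), the quadrature upconversion and downconversion described above can be sketched in a few lines: the upconverters form s(t) = I(t)·cos(2πf_LO·t) − Q(t)·sin(2πf_LO·t), and the receive mixers multiply by the quadrature LO signals and low-pass filter to recover I and Q. The LO frequency, sample rate, and baseband levels below are hypothetical values chosen only for illustration:

```python
import numpy as np

# Hypothetical parameters for illustration only.
f_lo = 1.0e6                      # LO frequency (Hz)
fs = 64 * f_lo                    # sample rate: 64 samples per LO cycle
t = np.arange(1024) / fs          # exactly 16 full LO periods

i_bb, q_bb = 0.7, -0.3            # constant baseband I/Q levels (example)

# Quadrature upconversion (cf. upconverters 241a/241b):
s_rf = i_bb * np.cos(2 * np.pi * f_lo * t) - q_bb * np.sin(2 * np.pi * f_lo * t)

# Quadrature downconversion (cf. mixers 261a/261b): multiply by the I and Q
# LO signals and low-pass filter -- here, a mean over whole LO periods,
# which removes the double-frequency mixing products.
i_rec = np.mean(2 * s_rf * np.cos(2 * np.pi * f_lo * t))
q_rec = np.mean(2 * s_rf * -np.sin(2 * np.pi * f_lo * t))
# i_rec and q_rec recover the original baseband levels 0.7 and -0.3.
```

The factor of 2 compensates for the half-amplitude term left at baseband after mixing; averaging over an integer number of LO periods plays the role of the lowpass filters 264a and 264b.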
Similarly, a PLL 282 receives timing information from the data processor 210 and generates a control signal used to adjust the frequency and/or phase of the RX LO signals from the RX LO signal generator 280.

[0041] The wireless device 200 may support carrier aggregation and may (i) receive multiple downlink signals transmitted by one or more cells on multiple downlink carriers at different frequencies and/or (ii) transmit multiple uplink signals to one or more cells on multiple uplink carriers. For intra-band carrier aggregation, the transmissions are sent on different carriers in the same band. For inter-band carrier aggregation, the transmissions are sent on multiple carriers in different bands. Those skilled in the art will understand, however, that aspects described herein may be implemented in systems, devices, and/or architectures that do not support carrier aggregation.

[0042] The mobile RF transceiver 220 of the wireless device 200 generally includes the transmitter 230 and the receiver 250 to transmit and receive data for two-way communication. The receiver 250 may include one or more circuits for amplifying communication signals, such as the LNA 252. The LNA 252 may include one or more amplifier stages that may have one or more driver stages and one or more amplifier output stages. Each of the amplifier stages includes one or more transistors configured in various ways to amplify the communication signals. Various options exist for fabricating the transistors that are configured to amplify the communication signals transmitted and received by the mobile RF transceiver 220.

[0043] The mobile RF transceiver 220 and the RF front end module 170 (FIGURE 1) may be implemented using semiconductor on insulator (SOI) technology for fabricating transistors of the mobile RF transceiver 220 and the RF front end module 170. Using SOI technology helps reduce high order harmonics in the RF front end module 170.
SOI technology replaces conventional semiconductor substrates with a layered semiconductor-insulator-semiconductor substrate for reducing parasitic capacitance and improving performance. SOI-based devices differ from conventional, silicon-built devices because a silicon junction is above an electrical isolator, typically a buried oxide (BOX) layer. A reduced thickness of the BOX layer in sub-micron process nodes, however, may not sufficiently reduce the parasitic capacitance caused by the proximity of an active device on the semiconductor layer and a semiconductor substrate supporting the BOX layer. As a result, a layer transfer process is introduced to further separate the active device from the substrate, as shown in FIGURE 3.

[0044] FIGURE 3 shows a cross-sectional view of a radio frequency (RF) integrated circuit 300 fabricated using a layer transfer process, according to aspects of the present disclosure. As shown in FIGURE 3, an RF SOI device includes an active device 310 on a buried oxide (BOX) layer 320 that is initially supported by a sacrificial substrate 301 (e.g., a bulk wafer). The RF SOI device also includes interconnects 350 coupled to the active device 310 within a first dielectric layer 304. In this configuration, a handle substrate 302 is bonded to the first dielectric layer 304 of the RF SOI device, and bonding of the handle substrate 302 enables removal of the sacrificial substrate 301 (see arrows). Removal of the sacrificial substrate 301 using the layer transfer process enables high-performance, low-parasitic RF devices by increasing the dielectric thickness. That is, a parasitic capacitance of the RF SOI device is inversely proportional to the thickness of the first dielectric layer 304, which determines the distance between the active device 310 and the handle substrate 302.

[0045] The active device 310 on the BOX layer 320 may be a complementary metal oxide semiconductor (CMOS) transistor.
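The inverse relationship between dielectric thickness and parasitic capacitance can be illustrated with an idealized parallel-plate model, C = ε0·εr·A/d. This model, and the area and thickness values below, are assumptions chosen purely for illustration and do not come from the disclosure:

```python
# First-order parallel-plate estimate showing why increasing the dielectric
# thickness between the active device and the handle substrate reduces
# parasitic capacitance: C = eps0 * eps_r * A / d.
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def plate_capacitance(area_m2, thickness_m, eps_r=3.9):
    """Idealized parallel-plate capacitance; eps_r = 3.9 approximates SiO2."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Hypothetical geometry: a 10 um x 10 um overlap area.
c_thin = plate_capacitance(100e-12, 0.5e-6)   # 0.5 um dielectric
c_thick = plate_capacitance(100e-12, 2.0e-6)  # same area, 4x thicker dielectric
# c_thick is one quarter of c_thin: capacitance falls as 1/thickness,
# which is why the thicker post-transfer dielectric lowers parasitics.
```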
Unfortunately, successful fabrication of CMOS transistors using SOI technology is complicated by parasitic capacitance. For example, parasitic capacitance in the form of contact/interconnect-to-gate capacitance may be caused by a proximity between back-end-of-line (BEOL) interconnects/middle-of-line (MOL) contacts and the transistor gate contacts, for example, as shown in FIGURE 4. This additional capacitance causes adverse effects, such as circuit delays and losses. This additional capacitance is especially problematic for low noise amplifiers (LNAs), such as the LNA 252 of the mobile RF transceiver 220 of FIGURE 2.

[0046] FIGURE 4 is a cross-sectional view of an RF integrated circuit 400 fabricated using a layer transfer process. The RF integrated circuit 400 includes an active device 410 having a gate, a source region, a drain region, and a channel region. The channel region is between the source and drain regions of a semiconductor layer (e.g., a semiconductor on insulator (SOI) layer) that is formed on an isolation layer 420. In SOI implementations, the isolation layer 420 is a buried oxide (BOX) layer, and the channel, source, and drain regions are formed from an SOI layer (e.g., silicon) including shallow trench isolation (STI) regions supported by the isolation layer 420.

[0047] The RF integrated circuit 400 also includes middle-of-line (MOL) interconnects (e.g., a front-side drain contact 430 and a front-side source contact 432) and back-end-of-line (BEOL) interconnects (e.g., M1, M2) coupled to the source/drain regions of the active device 410. As described, the MOL/BEOL layers are referred to as front-side layers. By contrast, the layers supporting the isolation layer 420 may be referred to as backside layers.
According to this nomenclature, a front-side metallization M1 is coupled to the source region and drain region of the active device 410 and arranged in a front-side dielectric layer 404 (e.g., a first-side dielectric layer) to which a handle substrate 402 is coupled. In this example, a backside dielectric 440 is adjacent to and possibly supports the isolation layer 420. A backside metallization 434 is coupled to the front-side metallization M1. The front-side metallization M1 is a front-side back-end-of-line (BEOL) interconnect (e.g., a first-side back-end-of-line (BEOL) interconnect) and the backside metallization 434 is a backside BEOL interconnect (e.g., a second-side BEOL interconnect).

[0048] Operation of the active device 410 is adversely affected by drain-to-gate parasitic capacitance 406 and source-to-gate parasitic capacitance 408. In this example, contact/interconnect-to-gate parasitic capacitance (e.g., 406 and 408) is caused by a proximity of the front-side drain contact 430 and the front-side source contact 432 to a gate contact 412 to the gate of the active device 410. The drain-to-gate parasitic capacitance 406 leads to adverse effects, such as circuit delays and losses. The drain-to-gate parasitic capacitance 406 is especially problematic for low noise amplifiers, such as the LNA 252 shown in FIGURE 2.

[0049] FIGURE 5 illustrates routing 500 for the source, drain, and gate contacts of the RF integrated circuit 400 of FIGURE 4. Conventionally, access to active devices, formed during a front-end-of-line process, is limited to a front-side of the active device. For example, middle-of-line processing provides contacts between the gates and source/drain regions of the active devices and back-end-of-line interconnect layers (e.g., M1, M2, etc.).
FIGURE 5 illustrates routing of the gate contact 412, the front-side drain contact 430, and the front-side source contact 432 on a diffusion region 510 to a gate connection 570, a drain connection 550, and a source connection 560, respectively.

[0050] Conventionally, transistor gates are routed through connections at a second BEOL interconnect layer (M2), and source/drain connections are routed using a first BEOL interconnect layer (M1). When these source/drain contacts, as well as the gate contacts, are located on a front-side of a transistor, the M1 BEOL interconnects and the M2 BEOL interconnects crisscross multiple times. In particular, overlapping of the gate contact 412 and the front-side drain contact 430 when routing to a drain connection 550 and a gate connection 570 is especially problematic. Overlapping routing of the gate contact 412 and the front-side drain contact 430 produces significant drain-to-gate capacitance (CDG) as well as increased gate resistance, thereby substantially degrading LNA performance.

[0051] Various aspects of the disclosure provide techniques for post-layer transfer processing on a backside of active devices of an RF integrated circuit (RFIC). By contrast, access to active devices, formed during a front-end-of-line process, is conventionally provided from a front-side during middle-of-line processing that creates contacts between the gates and source/drain regions of the active devices and back-end-of-line interconnect layers (e.g., M1, M2, etc.). Aspects of the present disclosure involve post-layer transfer processing for forming a backside contact layer and backside contact plugs to source/drain regions of LNA transistors. The backside contact layer and backside contact plugs enable moving the source/drain contacts to a backside of the LNA transistors, which eliminates the contact-to-gate parasitic coupling noted above.
These transistor structures may be used in LNAs, such as the LNA 252 of FIGURE 2.

[0052] The layer transfer process shown in FIGURE 3 may reduce the parasitic capacitance by moving some of the routing from the front-side to the backside of the RF integrated circuit 400. Various aspects of the present disclosure provide techniques for a low parasitic capacitance LNA in an RF integrated circuit, as described in FIGURES 6A-8B.

[0053] FIGURE 6A is a cross-sectional view of an RF integrated circuit (RFIC) 600, including a transistor of a low parasitic capacitance low noise amplifier (LNA), according to aspects of the present disclosure. In this configuration, a post-layer transfer process is performed on a backside of source/drain (S/D) regions of an active device 610 (e.g., an LNA transistor). Representatively, the RFIC 600 includes the active device 610 having a gate, source/drain (S/D) regions, and a channel region between the source/drain regions, formed on an isolation layer 620. The isolation layer 620 may be a buried oxide (BOX) layer for a silicon on insulator (SOI) implementation, in which the channel and source/drain regions are formed from an SOI layer. In this configuration, shallow trench isolation (STI) regions are also on the isolation layer 620.

[0054] The RFIC 600 includes a gate contact 612 (e.g., zero interconnect (M0)/zero via (V0) of a middle-of-line layer) in a front-side dielectric layer 604. The gate contact 612 (e.g., a first-side gate contact) is coupled to a front-side contact layer 614 on the gate, which may be composed of a silicide contact layer (e.g., a front-side silicide layer). In this configuration, a handle substrate 602 is coupled to the front-side dielectric layer 604 to enable post-layer transfer processing on a backside of the active device 610. For example, the post-layer transfer processing enables access to a backside 618 opposite a front-side 616 of the source/drain regions of the active device 610.
As a result, the front-side 616 of the source/drain regions is exposed to enable direct contact by the front-side dielectric layer 604.

[0055] According to aspects of the present disclosure, the handle substrate 602 may be composed of a semiconductor material, such as silicon. In this configuration, the handle substrate 602 may include at least one other active device. Alternatively, the handle substrate 602 may be a passive substrate to further improve harmonics by reducing parasitic capacitance. In this configuration, the handle substrate 602 may include at least one other passive device. As described, the term "passive substrate" may refer to a substrate of a diced wafer or panel, or may refer to the substrate of a wafer/panel that is not diced. In one configuration, the passive substrate is comprised of glass, quartz, sapphire, high-resistivity silicon, or other like passive material. The passive substrate may also be a coreless substrate.

[0056] According to aspects of the present disclosure, a layer transfer process, for example, as shown in FIGURE 3, enables forming of a backside contact layer 630 on the backside 618 of the source/drain regions of the active device 610. The backside contact layer 630 may be composed of a backside silicide layer. Once formed, the backside contact layer 630 allows moving of front-side source/drain contacts (e.g., the front-side drain contact 430 and the front-side source contact 432 of FIGURE 4) to the backside 618 of the source/drain regions. Moving the front-side source/drain contacts (e.g., the front-side drain contact 430 and the front-side source contact 432 of FIGURE 4) to the backside 618 of the source/drain regions eliminates the contact/interconnect-to-gate parasitic capacitance (e.g., 406 and 408) shown in FIGURE 4.

[0057] In an alternative configuration, the gate contact 612 is moved to the backside of the active device 610 and the front-side source/drain contacts are unchanged.
In addition, a backside dielectric layer 640 is adjacent to and possibly supports the isolation layer 620. In this configuration, a post-layer transfer metallization process forms a backside contact layer 630 on the backside 618 of the source/drain regions of the active device 610. As shown in FIGURE 6A, a backside drain contact 650 (e.g., a second-side drain contact) is coupled to the backside 618 of the drain region through the backside contact layer 630. In addition, a backside source contact 660 (e.g., a second-side source contact) is coupled to the backside 618 of the source region through the backside contact layer 630. The backside drain contact 650 may be a contact plug (e.g., a middle-of-line (MOL) zero via (V0)) coupled to a backside back-end-of-line (BEOL) drain interconnect 652. Similarly, the backside source contact 660 may be a contact plug coupled to a backside BEOL source interconnect 662.

[0058] FIGURE 6B is a cross-sectional view of an RFIC 680, in which a post-layer transfer process is also performed on the backside 618 of source/drain regions of an active device 610 (e.g., an LNA transistor), according to aspects of the present disclosure. As will be recognized, a configuration of the RFIC 680 is similar to the configuration of the RFIC 600 of FIGURE 6A. In the configuration shown in FIGURE 6B, however, the RFIC 680 includes a front-side metallization (e.g., a first BEOL interconnect (M1)) in the front-side dielectric layer 604. The front-side metallization M1 is coupled to a backside metallization 642 through a via V0. The backside metallization 642 is within the backside dielectric layer 640.

[0059] As shown in FIGURES 6A and 6B, the backside contact layer 630 is within the isolation layer 620 and enables contact with the backside drain contact 650 and the backside source contact 660.
The relocating of the contacts/interconnects (e.g., the front-side drain contact 430 and the front-side source contact 432 of FIGURE 4) to the backside 618 of the source/drain regions of the active device 610 helps prevent parasitic capacitance between the gate contact 612 of the active device 610 and conventional front-side source/drain contacts/interconnects. In this configuration, routing of the gate contact 612 is simplified, as shown in FIGURES 7A and 7B. Similarly, routing of the backside drain contact 650 and the backside source contact 660 is simplified, as shown in FIGURES 8A and 8B.

[0060] FIGURES 7A and 7B illustrate front-side routing for a low parasitic capacitance LNA, according to aspects of the present disclosure. In the configuration shown in FIGURE 7A, a front-side routing 700 of an LNA is shown for a single diffusion island configuration. In this example, the LNA is configured to include an LNA transistor, for example, as shown in FIGURE 6A. Representatively, each gate contact 612 on a diffusion island 710 is routed to a gate connection 770. This configuration helps eliminate the parasitic capacitance shown in FIGURE 5.

[0061] FIGURE 7B shows a front-side routing 750 of an LNA for a dual diffusion island configuration. In this example, the LNA is also configured to include the LNA transistor shown in FIGURE 6A. Representatively, each gate contact 612 on a first diffusion island 710-1 and a second diffusion island 710-2 is routed to the gate connection 770 for eliminating drain-to-gate parasitic capacitance. This LNA configuration uses multiple diffusion islands (e.g., 710-1 and 710-2) to compensate for increased gate resistance.

[0062] Routing the source/drain connections opposite from the gate connections supports LNAs fabricated using multiple diffusion islands by simplifying routing of the active devices.
In particular, contact-to-gate capacitance and parasitic resistance are incurred due to the overlapping source/drain and gate contact routing. Substantially reducing parasitic capacitance, as well as gate resistance, provides a substantial improvement (e.g., a 20% to 40% improvement) in a gain bandwidth product (FT) as well as a maximum frequency of oscillation (Fmax) for supporting 5G communication enhancements.

[0063] FIGURES 8A and 8B illustrate backside routing for a low parasitic capacitance LNA, according to aspects of the present disclosure. In the configuration shown in FIGURE 8A, a backside routing 800 of the LNA in the single diffusion island configuration of FIGURE 7A is shown. This example also incorporates the LNA transistor as shown in FIGURE 6A. Representatively, each backside drain contact 650 on the diffusion island 710 is routed to a drain interconnect 652. In addition, each backside source contact 660 on the diffusion island 710 is routed to the backside BEOL source interconnect 662. Replacing front-side source and drain contacts with the backside source and drain contacts eliminates the parasitic capacitance (e.g., drain-to-gate capacitance (CDG)) shown in FIGURE 5.

[0064] FIGURE 8B shows a backside routing 850 of the LNA for the dual diffusion island configuration shown in FIGURE 7B. Representatively, each backside drain contact 650 on the first diffusion island 710-1 and the second diffusion island 710-2 is routed to a drain connection 880. Similarly, each backside source contact 660 on the first diffusion island 710-1 and the second diffusion island 710-2 is routed to a source connection 890 for eliminating parasitic capacitance.
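A rough sense of how reducing the drain-to-gate capacitance improves the gain bandwidth product can be sketched with the common small-signal approximation FT ≈ gm / (2π(Cgs + Cgd)). This approximation and the transconductance and capacitance values below are assumptions for illustration only; they are not device parameters from the disclosure:

```python
import math

# Common first-order approximation for a MOSFET's unity-current-gain
# (transition) frequency: fT ~= gm / (2 * pi * (Cgs + Cgd)).
def transition_frequency(gm_s, cgs_f, cgd_f):
    """Approximate fT in Hz from transconductance and gate capacitances."""
    return gm_s / (2 * math.pi * (cgs_f + cgd_f))

gm = 10e-3    # hypothetical transconductance: 10 mS
cgs = 20e-15  # hypothetical gate-source capacitance: 20 fF

# Hypothetical Cgd values before and after moving source/drain contacts
# to the backside (i.e., with the overlapping routing removed).
ft_front = transition_frequency(gm, cgs, cgd_f=10e-15)
ft_back = transition_frequency(gm, cgs, cgd_f=2e-15)

improvement = ft_back / ft_front - 1.0
# With these example values the fractional fT improvement is about 36%,
# consistent with the 20% to 40% range cited above.
```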
This LNA configuration uses multiple diffusion islands (e.g., 710-1 and 710-2) for decreasing gate resistance to enable support for 5G communication enhancements.

[0065] This configuration of the LNA further illustrates radio frequency (RF) components 860 coupled to the drain connection 880 and optionally to the source connection 890. The RF components 860 may include resistor (R), inductor (L), and capacitor (C) (RLC) components. The RF components 860 may also include antennas and other like RF components, for example, as shown in FIGURE 2. Additional details regarding the RF components 860 for completing formation of the LNA are omitted to avoid obscuring the inventive features. It should be recognized that aspects of the present disclosure may include LNAs configured in cascode configurations, resistive configurations, or other like arrangements. Although the preceding description was with respect to planar transistors, the present disclosure also applies to other configurations, such as FinFETs.

[0066] One aspect of the present disclosure uses a backside silicidation process with layer transfer to form backside source/drain contacts to the source/drain regions of an LNA transistor, for example, as shown in FIGURE 9.

[0067] FIGURE 9 is a process flow diagram illustrating a method 900 of constructing a low noise amplifier (LNA) device using a backside silicidation process with layer transfer, according to an aspect of the present disclosure. The method 900 begins at block 902, in which a first transistor is fabricated on a first surface of an isolation layer. The isolation layer is supported by a sacrificial substrate. For example, as shown in FIGURE 3, an active device 310 is fabricated on a buried oxide (BOX) layer 320. In block 904, a front-side dielectric layer is deposited on the first transistor.
For example, as shown in FIGURE 6A, the front-side dielectric layer 604 is deposited on the active device 610.

[0068] Referring again to FIGURE 9, in block 906, a handle substrate is bonded to the front-side dielectric layer. For example, as shown in FIGURE 6A, a handle substrate 602 is bonded to the front-side dielectric layer 604. In block 908 of FIGURE 9, the sacrificial substrate is removed. As shown in FIGURE 3, the layer-transfer process includes removal of the sacrificial substrate 301. In block 910, a backside of a source region and a backside of a drain region of the first transistor are exposed through a second surface opposite the first surface of the isolation layer. For example, as shown in FIGURE 6A, the backside 618 of the drain region and the source region are exposed by the post-layer transfer process.

[0069] In block 912 of FIGURE 9, a backside source contact is deposited on the backside of the source region. In block 914, a backside drain contact is deposited on the backside of the drain region. For example, as shown in FIGURE 6A, a backside contact layer 630 is deposited on the backside 618 of the source region and the drain region. In addition, a backside drain contact 650 is coupled to the backside 618 of the drain region through the backside contact layer 630. Similarly, a backside source contact 660 is coupled to the backside 618 of the source region through the backside contact layer 630. In block 916 of FIGURE 9, at least one of a resistor, an inductor, a capacitor, an antenna, and/or an RF component is optionally coupled with the first transistor and/or a second transistor, for example, as shown in the RF components 860 of FIGURE 8B.

[0070] Aspects of the present disclosure describe a backside silicidation design to reduce parasitic capacitance of a low noise amplifier in an RF integrated circuit.
One aspect of the present disclosure uses a backside silicidation process with layer transfer to form backside source/drain contacts (e.g., a backside silicide contact) to the source/drain regions of a transistor. The backside silicidation process may form a via coupled to a first source/drain region of the transistor through the backside source/drain contact. The via may extend through an isolation layer and into a backside dielectric layer supporting the isolation layer. In addition, a post-layer transfer metallization process enables the formation of a backside metallization coupled to the via. A front-side metallization, distal from the backside metallization, may be coupled to a gate contact of the gate of the transistor.

[0071] Rearrangement of the BEOL interconnects/MOL contacts may reduce the parasitic capacitance caused by the proximity of the BEOL interconnects/MOL contacts and the transistor gate contacts. The front-side and backside may each be referred to as a first-side or a second-side. In some cases, the front-side will be referred to as the first-side. In other cases, the backside will be referred to as the first-side. Although the description is with respect to an LNA, it is contemplated that these structures would also improve a power amplifier (PA).

[0072] According to a further aspect of the present disclosure, RF integrated circuitry, including backside silicide contacts on source/drain regions of transistors, is described. The RF integrated circuitry includes a transistor on a first surface of an isolation layer, including a front-side dielectric layer on the transistor. The RF integrated circuit structure also includes means for handling the RF integrated circuitry on the front-side dielectric layer. The handling means may be the handle substrate shown in FIGURE 3.
In another aspect, the aforementioned means may be any layer, module, or any apparatus configured to perform the functions recited by the aforementioned means.

[0073] FIGURE 10 is a block diagram showing an exemplary wireless communication system 1000 in which an aspect of the present disclosure may be advantageously employed. For purposes of illustration, FIGURE 10 shows three remote units 1020, 1030, and 1050 and two base stations 1040. It will be recognized that wireless communication systems may have many more remote units and base stations. Remote units 1020, 1030, and 1050 include IC devices 1025A, 1025C, and 1025B that include the disclosed low noise amplifier (LNA) device. It will be recognized that other devices may also include the disclosed LNA, such as the base stations, switching devices, and network equipment. FIGURE 10 shows forward link signals 1080 from the base stations 1040 to the remote units 1020, 1030, and 1050, and reverse link signals 1090 from the remote units 1020, 1030, and 1050 to the base stations 1040.

[0074] In FIGURE 10, remote unit 1020 is shown as a mobile telephone, remote unit 1030 is shown as a portable computer, and remote unit 1050 is shown as a fixed location remote unit in a wireless local loop system. For example, a remote unit may be a mobile phone, a hand-held personal communication system (PCS) unit, a portable data unit such as a personal digital assistant (PDA), a GPS enabled device, a navigation device, a set top box, a music player, a video player, an entertainment unit, a fixed location data unit such as meter reading equipment, or another communications device that stores or retrieves data or computer instructions, or combinations thereof. Although FIGURE 10 illustrates remote units according to the aspects of the present disclosure, the present disclosure is not limited to these exemplary illustrated units.
Aspects of the present disclosure may be suitably employed in many devices, which include the disclosed LNA.

[0075] FIGURE 11 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component, such as the RFIC disclosed above. A design workstation 1100 includes a hard disk 1101 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 1100 also includes a display 1102 to facilitate a circuit design 1110 or an LNA design 1112 of an RF device. A storage medium 1104 is provided for tangibly storing the circuit design 1110 or the LNA design 1112. The circuit design 1110 or the LNA design 1112 may be stored on the storage medium 1104 in a file format such as GDSII or GERBER. The storage medium 1104 may be a CD-ROM, DVD, hard disk, flash memory, or other appropriate device. Furthermore, the design workstation 1100 includes a drive apparatus 1103 for accepting input from or writing output to the storage medium 1104.

[0076] Data recorded on the storage medium 1104 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography. The data may further include logic verification data such as timing diagrams or net circuits associated with logic simulations. Providing data on the storage medium 1104 facilitates the circuit design 1110 or the LNA design 1112 by decreasing the number of processes for designing semiconductor wafers.

[0077] For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. A machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory and executed by a processor unit.
Memory may be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to any type of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to a particular type of memory or number of memories, or type of media upon which memory is stored.

[0078] If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0079] In addition to storage on computer readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data.
The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims.

[0080] Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the technology of the present disclosure as defined by the appended claims. For example, relational terms, such as "above" and "below," are used with respect to a substrate or electronic device. Of course, if the substrate or electronic device is inverted, above becomes below, and vice versa. Additionally, if oriented sideways, above and below may refer to sides of a substrate or electronic device. Moreover, the scope of the present application is not intended to be limited to the particular configurations of the process, machine, manufacture, and composition of matter, means, methods, and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding configurations described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
A cache system has a first cache, a second cache, and a logic circuit coupled to control the first cache and the second cache according to an execution type of a processor. When the execution type of the processor is a first type indicating non-speculative execution of instructions and the first cache is configured to service commands from a command bus for accessing a memory system, the logic circuit is configured to copy a portion of the content cached in the first cache to the second cache. The cache system can include a configurable data bit, and the logic circuit can be coupled to control the caches according to the bit. Alternatively, the caches can include cache sets, with registers respectively associated with the cache sets, and the logic circuit can be coupled to control the cache sets according to the registers.
1. A cache system comprising:

a first cache;

a second cache;

a connection to a command bus coupled between the cache system and a processor;

a connection to an address bus coupled between the cache system and the processor;

a connection to a data bus coupled between the cache system and the processor;

a connection to an execution type signal line from the processor, the execution type signal line identifying an execution type; and

a logic circuit coupled to control the first cache and the second cache according to the execution type;

wherein the cache system is configured to be coupled between the processor and a memory system; and

wherein, when the execution type is a first type indicative of non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit is configured to copy a portion of the content cached in the first cache to the second cache.

2. The cache system of claim 1, wherein the logic circuit is configured to copy the portion of the content cached in the first cache to the second cache independently of a current command received on the command bus.

3. The cache system of claim 1, wherein, when the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit is configured to use the second cache to service subsequent commands from the command bus in response to the execution type changing from the first type to a second type indicating speculative execution of instructions by the processor.

4.
The cache system of claim 3, wherein the logic circuit is configured to complete synchronizing the portion of the content from the first cache to the second cache before servicing the subsequent commands after the execution type is changed from the first type to the second type.

5. The cache system of claim 3, wherein the logic circuit is configured to continue to synchronize the portion of the content from the first cache to the second cache while servicing the subsequent commands.

6. The cache system of claim 3, further comprising a configurable data bit, wherein the logic circuit is further coupled to control the first cache and the second cache according to the configurable data bit.

7. The cache system of claim 6,

wherein, when the configurable data bit is in a first state, the logic circuit is configured to:

when the execution type is the first type, implement commands received from the command bus for accessing the memory system via the first cache; and

when the execution type is the second type, implement commands received from the command bus for accessing the memory system via the second cache; and

wherein, when the configurable data bit is in a second state, the logic circuit is configured to:

when the execution type is the first type, implement commands received from the command bus for accessing the memory system via the second cache; and

when the execution type is the second type, implement commands received from the command bus for accessing the memory system via the first cache.

8. The cache system of claim 7, further comprising:

a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor;

wherein the connection to the speculative state signal line is configured to receive the state of speculative execution,

wherein said state of speculative execution indicates whether the results of speculative execution will be accepted or
rejected; and

wherein, when the execution type is changed from the second type to the first type, the logic circuit is configured to:

toggle the configurable data bit if the state of speculative execution indicates that the results of speculative execution will be accepted; and

maintain the configurable data bit unchanged if the state of speculative execution indicates that the results of speculative execution will be rejected.

9. The cache system of claim 1, wherein the first cache and the second cache collectively comprise:

a plurality of cache sets including a first cache set and a second cache set; and

a plurality of registers respectively associated with the plurality of cache sets, the plurality of registers including a first register associated with the first cache set and a second register associated with the second cache set; and

wherein the logic circuit is further coupled to control the plurality of cache sets according to the plurality of registers.

10. A cache system comprising:

a plurality of cache sets including a first cache set and a second cache set;

a plurality of registers respectively associated with the plurality of cache sets, the plurality of registers including a first register associated with the first cache set and a second register associated with the second cache set;

a connection to a command bus coupled between the cache system and a processor;

a connection to an address bus coupled between the cache system and the processor;

a connection to a data bus coupled between the cache system and the processor;

a connection to an execution type signal line from the processor, the execution type signal line identifying an execution type; and

a logic circuit coupled to control the plurality of cache sets according to the execution type;

wherein the cache system is configured to be coupled between the processor and a memory system; and

wherein, when the execution type is a first type indicating non-speculative
execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit is configured to copy a portion of the content cached in the first cache set to the second cache set.

11. The cache system of claim 10, wherein the logic circuit is configured to copy the portion of the content cached in the first cache set to the second cache set independently of a current command received on the command bus.

12. The cache system of claim 10, wherein, when the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit is configured to use the second cache set to service subsequent commands from the command bus in response to the execution type changing from the first type to a second type indicating speculative execution of instructions by the processor.

13. The cache system of claim 12, wherein the logic circuit is configured to complete synchronizing the portion of the content from the first cache set to the second cache set before servicing the subsequent commands after the execution type is changed from the first type to the second type.

14.
The cache system of claim 12, wherein the logic circuit is configured to continue to synchronize the portion of the content from the first cache set to the second cache set while servicing the subsequent commands.

15. The cache system of claim 10,

wherein the logic circuit is further coupled to control the plurality of cache sets according to the plurality of registers;

wherein, when the connection to the address bus receives a memory address from the processor, the logic circuit is configured to:

generate a set index from at least the memory address; and

determine whether the generated set index matches the content stored in the first register or the content stored in the second register; and

wherein the logic circuit is configured to implement commands received via the connection to the command bus via the first cache set in response to the generated set index matching the content stored in the first register, and to implement the commands via the second cache set in response to the generated set index matching the content stored in the second register.

16.
The cache system of claim 15, wherein, in response to determining that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit is configured to allocate the first cache set for caching the data set and to store the generated set index in the first register.

17. The cache system of claim 16, further comprising:

a connection to an execution type signal line from the processor, the execution type signal line identifying the execution type;

wherein, when the first register and the second register are in a first state, the logic circuit is configured to:

when the execution type is a first type, implement commands received from the command bus for accessing the memory system via the first cache set; and

when the execution type is a second type, implement commands received from the command bus for accessing the memory system via the second cache set; and

wherein, when the first register and the second register are in a second state, the logic circuit is configured to:

when the execution type is the first type, implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets other than the first cache set; and

when the execution type is the second type, implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets other than the second cache set.

18.
The cache system of claim 17, wherein the generated set index is further generated based on a type identified by the execution type signal line.

19. The cache system of claim 18, further comprising:

a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor;

wherein the connection to the speculative state signal line is configured to receive the state of speculative execution;

wherein said state of speculative execution indicates whether the results of speculative execution are to be accepted or rejected; and

wherein, when the execution type is changed from the second type to the first type, the logic circuit is configured to:

alter the content stored in the first register and the content stored in the second register if the state of speculative execution indicates that the results of speculative execution are to be accepted; and

maintain the content stored in the first register and the content stored in the second register unchanged if the state of speculative execution indicates that the results of speculative execution are to be rejected.

20. A cache system comprising:

a plurality of caches;

a plurality of cache sets divided among the plurality of caches, the plurality of cache sets including a first cache set and a second cache set;

a plurality of registers respectively associated with the plurality of cache sets, the plurality of registers including a first register associated with the first cache set and a second register associated with the second cache set;

a connection to a command bus coupled between the cache system and a processor;

a connection to an address bus coupled between the cache system and the processor;

a connection to a data bus coupled between the cache system and the processor;

a connection to an execution type signal line from the processor, the execution type signal line identifying an execution type; and

a logic circuit coupled to control the
plurality of cache sets according to the execution type;

wherein the cache system is configured to be coupled between the processor and a memory system; and

wherein, when the execution type is a first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit is configured to copy a portion of the content cached in the first cache set to the second cache set.
Cache system and circuit for synchronizing a cache or set of caches

Related Applications

This application claims priority to U.S. Patent Application No. 16/528,479, filed July 31, 2019, and entitled "CACHE SYSTEMS AND CIRCUITS FOR SYNCING CACHES OR CACHE SETS," the entire disclosure of which is hereby incorporated by reference herein.

Technical Field

At least some embodiments disclosed herein relate generally to cache architectures, and more particularly, but not limited to, cache architectures for main execution and speculative execution by computer processors.

Background

A cache is a memory component that stores data closer to the processor than main memory so that the data stored in the cache can be accessed quickly by the processor. Data may be stored in the cache due to an earlier computation or an earlier access to data in main memory. A cache hit occurs when data requested by a processor using a memory address can be found in the cache, and a cache miss occurs when the data cannot be found in the cache.

In general, a cache is a memory that holds data that has recently been used by a processor. Where a memory block may be placed in the cache is restricted by the placement policy to particular cache lines. There are three generally known placement policies: direct mapped, fully associative, and set associative. In a direct-mapped cache structure, the cache is organized into multiple sets with a single cache line per set. Based on its address, a memory block may occupy only a single cache line. A direct-mapped cache can be designed as an (n*1) column matrix. In a fully associative cache structure, the cache is organized into a single cache set with multiple cache lines. A memory block may occupy any of the cache lines in the single cache set.
A cache with a fully associative structure can be designed as a (1*m) row matrix.

A set associative cache is an intermediate design with a structure between a direct-mapped cache and a fully associative cache. A set associative cache can be designed as an (n*m) matrix, where neither n nor m is 1. The cache is divided into n cache sets, and each set contains m cache lines. A memory block is mapped to a cache set and then placed into any cache line of that set. Viewed along the continuum of set associativity, set associative caches encompass the range from direct-mapped to fully associative caches. For example, a direct-mapped cache can also be described as a one-way set associative cache, and a fully associative cache with m blocks can be described as an m-way set associative cache. Direct-mapped caches, two-way set associative caches, and four-way set associative caches are common in cache systems.

Speculative execution is a computing technique in which a processor executes one or more instructions before a determination is available as to whether such instructions should be executed, based on speculation that such instructions need to be executed under some conditions.
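As an illustration of the set associative mapping described above (not part of the original disclosure), the way a memory address selects a cache set can be sketched in Python; the function name, parameter names, and sizes are assumptions chosen for the example:

```python
# Illustrative sketch: decomposing a memory address into tag, set index,
# and block offset for a set associative cache with num_sets sets and
# block_size bytes per cache line.

def split_address(addr: int, num_sets: int, block_size: int):
    """Return (tag, set_index, offset) for a set associative cache."""
    offset = addr % block_size                    # byte within the cache line
    set_index = (addr // block_size) % num_sets   # which cache set the block maps to
    tag = addr // (block_size * num_sets)         # identifies the block within its set
    return tag, set_index, offset

# A direct-mapped cache is the one-way special case (one line per set),
# and a fully associative cache is the single-set special case (num_sets == 1).
tag, set_index, offset = split_address(0x1A2B3C, num_sets=64, block_size=64)
```

Once the set index is known, the block may be placed into any of the m lines of that set; only the tag needs to be compared against each line to detect a hit.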
The size or width of such structures of a CPU generally determines the length of memory addresses used in such a CPU.

Brief Description of the Drawings

Embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like reference numerals refer to like elements.

FIGURES 1A-1E show various ways of partitioning a memory address into portions that can be used with execution types to control the operation of a cache, in accordance with some embodiments of the present disclosure.

FIGURES 2, 3A, and 3B show example aspects of example computing devices, each including a cache system having interchangeable caches for a first type and a second type of execution, according to some embodiments of the present disclosure.

FIGURES 4, 5A, and 5B specifically show example aspects of example computing devices, each including a cache system having interchangeable caches for main-type and speculative-type execution, in accordance with some embodiments of the present disclosure.

FIGURES 6, 7A, 7B, 8A, 8B, 9A, and 9B show example aspects of example computing devices, each computing device including a cache system with interchangeable cache sets for different types of execution (e.g., main-type and speculative-type execution).

FIGURE 10 particularly shows an example aspect of an example computing device including a cache system with interchangeable cache sets for main-type and speculative-type execution, in accordance with some embodiments of the present disclosure.

FIGURES 11A and 11B show synchronization circuitry for synchronizing content between a main cache and a shadow cache, to save content cached in the main cache in preparation for accepting content in the shadow cache, according to some embodiments of the present disclosure.

FIGURE 12 shows example operation of the example synchronization circuitry of FIGS.
11A and 11B in accordance with some embodiments of the present disclosure.

FIGURES 13, 14A, 14B, 14C, 15A, 15B, 15C, and 15D show example aspects of example computing devices, each including a cache system having interchangeable cache sets that include a spare cache set to speed up speculative execution, according to some embodiments of the present disclosure.

FIGURES 16 and 17 show example aspects of example computing devices, each including a cache system having interchangeable cache sets with extended tags that support different types of execution by the processor (e.g., speculative and non-speculative execution), according to some embodiments of the present disclosure.

FIGURE 18 shows an example aspect of an example computing device having a cache system with an interchangeable cache set that utilizes circuitry to map physical cache set outputs to logical cache set outputs, according to some embodiments of the present disclosure.

FIGURES 19, 20, and 21 show example aspects of example computing devices, each including a cache system with interchangeable cache sets having the circuitry shown in FIGURE 18 to map physical cache set outputs to logical cache set outputs, according to some embodiments of the present disclosure.

FIGURES 22 and 23 show methods for using an interchangeable set of caches for speculative and non-speculative execution by a processor, in accordance with some embodiments of the present disclosure.

Detailed Description

The present disclosure encompasses techniques for using multiple caches, or cache sets within caches, interchangeably for different types of execution by a connected processor. Types of execution can include speculative and non-speculative execution of threads.
Non-speculative execution can be called main execution or normal execution.

For enhanced security, when the processor performs conditional speculative execution of instructions, the processor may be configured to use a shadow cache during the speculative execution of instructions, where the shadow cache is separate from the main cache used during the main or normal execution of instructions. Some techniques for using shadow caches to improve security can be found in U.S. Patent Application No. 16/028,930, filed July 6, 2018, and entitled "Shadow Cache for Securing Conditional Speculative Instruction Execution," the entire disclosure of which is hereby incorporated by reference herein. The present disclosure includes techniques that allow caches to be dynamically configured as shadow caches or main caches; uniform sets of cache resources can be dynamically allocated as shadow caches or as main caches, and the assignments can change during execution.

In some embodiments, a system may include a memory system (e.g., including main memory), a processor, and a cache system coupled between the processor and the memory system. A cache system may have a collection of caches. Also, the caches in the cache collection can be designed in various ways. For example, caches in a cache collection may include cache sets through cache set associativity (which may include physical or logical cache set associativity).

In some embodiments, a cache of the system may be variable between being configured for execution by the processor of a first type of instructions and being configured for execution by the processor of a second type of instructions. The first type may be non-speculative execution of instructions by the processor.
The second type may be speculative execution of instructions by the processor.

In some embodiments, a cache set of a cache may be variable between being configured for execution of a first type of instructions by the processor and being configured for execution of a second type of instructions by the processor. The first type may be non-speculative execution of instructions by the processor, and the second type may be speculative execution of instructions by the processor.

In some embodiments, speculative execution is where a processor executes one or more instructions before a determination is available as to whether such instructions should be executed, based on speculation that such instructions need to be executed under certain conditions. Non-speculative execution (or main execution, or normal execution) is where instructions are executed sequentially according to their program order.

In some embodiments, the system's collection of caches may include at least a first cache and a second cache. In such an example, the system may include a command bus configured to receive read commands or write commands from the processor. The system may also include an address bus configured to receive a memory address from the processor for accessing memory for a read command or a write command. A data bus may also be included that is configured to communicate data to the processor for reading by the processor, and to receive data from the processor for writing in memory. A memory access request from the processor can be defined by the command bus, the address bus, and the data bus.

In some embodiments, a common command and address bus may replace the separate command and address buses described herein.
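The memory access request just described, defined jointly by the command bus, the address bus, and the data bus, can be modeled with a small sketch; the names below are illustrative assumptions, not terms from the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model: a memory access request as defined by the command
# bus (read or write), the address bus, and the data bus.

@dataclass
class MemoryAccessRequest:
    command: str                 # "read" or "write", from the command bus
    address: int                 # from the address bus
    data: Optional[int] = None   # from the data bus, used for writes

def service(request: MemoryAccessRequest, memory: dict) -> Optional[int]:
    """Apply a request against a simple dictionary model of memory."""
    if request.command == "write":
        memory[request.address] = request.data
        return None
    # A read returns the data that would be driven onto the data bus.
    return memory.get(request.address)
```

In the common command-and-address-bus variant mentioned above, the `command` and `address` fields would simply arrive over one shared connection rather than two.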
Furthermore, in such embodiments, common connections to the common command and address bus may be substituted for the corresponding connections to the command and address buses described herein.

The system may also include an execution type signal line configured to receive the execution type from the processor. The execution type may be an indication of normal or non-speculative execution, or an indication of speculative execution.

The system may also include a configurable data bit configured to be set to a first state (e.g., "0") or a second state (e.g., "1") to change the usage of the first cache and the second cache with respect to non-speculative execution and speculative execution.

The system may also include a logic circuit configured to select the first cache for a memory access request from the processor when the configurable data bit is set to the first state and the execution type signal line receives an indication of non-speculative execution. The logic circuit may also be configured to select the second cache for the memory access request from the processor when the configurable data bit is set to the first state and the execution type signal line receives an indication of speculative execution. The logic circuit may also be configured to select the second cache for the memory access request from the processor when the configurable data bit is set to the second state and the execution type signal line receives an indication of non-speculative execution. The logic circuit may also be configured to select the first cache for the memory access request from the processor when the configurable data bit is set to the second state and the execution type signal line receives an indication of speculative execution.

The system may also include a speculative state signal line configured to receive a speculative state from the processor.
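The four cache-selection cases described above (the configurable data bit's state crossed with the execution type) amount to a simple selection function. A minimal sketch, assuming illustrative names rather than the disclosure's actual logic circuit:

```python
# Illustrative sketch: selecting which physical cache services a memory
# access request, given the configurable data bit and the execution type
# signaled by the processor.

NON_SPECULATIVE = 0  # "first type" in the text
SPECULATIVE = 1      # "second type" in the text

def select_cache(config_bit: int, execution_type: int) -> str:
    """Return which cache ("first" or "second") services the request.

    When config_bit is 0, the first cache serves non-speculative execution
    and the second cache serves speculative execution; when config_bit is 1,
    the roles are swapped.
    """
    if config_bit == 0:
        return "first" if execution_type == NON_SPECULATIVE else "second"
    return "second" if execution_type == NON_SPECULATIVE else "first"
```

Because the roles swap with a single bit, either physical cache can act as the main cache or the shadow cache at different points in execution, which is the interchangeability the disclosure describes.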
The speculative state may be an acknowledgment or a rejection of a condition associated with nested instructions that are initially executed by speculative execution and are subsequently joined to non-speculative execution when the speculative state is an acknowledgment of the condition. The logic circuit may also be configured to select the second cache, as identified by the first state of the configurable data bit, when the signal received by the execution type signal line changes from an indication of non-speculative execution to an indication of speculative execution, and to restrict the first cache from being used or changed, as identified by the first state of the configurable data bit. Also, the logic circuit may be configured to change the configurable data bit from the first state to the second state and to select the second cache for the memory access request when the execution type signal line receives an indication of non-speculative execution. This can occur when the signal received by the execution type signal line changes from an indication of speculative execution to an indication of non-speculative execution, and when the speculative state received by the speculative state signal line is an acknowledgment of the condition. The logic circuit may also be configured to maintain the first state of the configurable data bit and to select the first cache for the memory access request when the execution type signal line receives an indication of non-speculative execution. This may occur when the signal received by the execution type signal line changes from an indication of speculative execution to an indication of non-speculative execution, and when the speculative state received by the speculative state signal line is a rejection of the condition.
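The two return-to-non-speculative transitions described above (flip the bit on acknowledgment, maintain it on rejection) might be sketched as follows, assuming the configurable bit starts in the first state. This is an informal model; the function name and tuple result are assumptions, not part of the disclosure.

```python
FIRST_STATE, SECOND_STATE = 0, 1

def on_return_to_non_speculative(speculation_acknowledged):
    """Sketch: (new bit state, cache selected) when the execution type
    changes from speculative back to non-speculative, assuming the
    configurable data bit was in the first state during speculation."""
    if speculation_acknowledged:
        # condition acknowledged: flip the bit so the second cache, which
        # holds the speculative results, now serves non-speculative execution
        return SECOND_STATE, "second"
    # condition rejected: keep the first state and keep using the first cache
    return FIRST_STATE, "first"
```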
Furthermore, the logic circuit may be configured to invalidate the contents of the second cache and discard those contents when the signal received by the execution type signal line changes from an indication of speculative execution to an indication of non-speculative execution and the speculative state received by the speculative state signal line is a rejection of the condition. The system may also include a second command bus configured to communicate read commands or write commands to a main memory connected to the cache system. A read command or a write command may be received from the processor by the cache system. The system may also include a second address bus configured to communicate the memory address to the main memory. The memory address may be received from the processor by the cache system. The system may also include a second data bus configured to communicate data to the main memory for writing in the memory and to receive data from the main memory to communicate to the processor for reading by the processor. Memory access requests to the main memory from the cache system may be defined by the second command bus, the second address bus, and the second data bus. As mentioned, the caches in a cache system can be designed in a number of ways, and one of those ways is to partition a cache into cache sets via set associativity (which can include physical or logical set associativity). A benefit of a cache design with set associativity is that a single cache can contain multiple cache sets, and thus different parts of a single cache can be allocated for use by the processor without allocating the entire cache. Therefore, a single cache can be used more efficiently. This is especially true when the processor executes multiple types of threads or has multiple execution types.
For example, instead of using interchangeable caches, cache sets within a single cache can be used interchangeably for different execution types. Common examples of cache partitioning include having two, four, or eight cache sets within the cache. Furthermore, set associative cache designs outperform other common cache designs when the processor executes main and speculative threads. Since speculative execution can use less cache capacity than normal or non-speculative execution, the selection mechanism can be implemented at the cache set level and thus reserve less space than the entire cache (i.e., part of the cache) for speculative execution. A cache with set associativity may have multiple cache sets within it (e.g., a partition of two, four, or eight cache sets within a cache). For example, as shown in Figure 7A, there are at least four cache sets in the caches of the cache system (e.g., see cache sets 702, 704, and 706). Normal or non-speculative execution, which typically requires most of the cache capacity, may have a larger number of cache sets delegated to it. Also, speculative execution, which modifies non-speculative execution, may use one cache set or a smaller number of cache sets, since speculative execution typically involves fewer instructions than non-speculative execution. As shown in FIG. 6 or 10, a cache system may include multiple caches for a processor (e.g., caches 602a, 602b, and 602c depicted in FIG. 6), and the caches of the cache system may include cache sets, such as cache sets 610a, 610b, and 610c depicted in FIG. 6, to further divide the organization of the cache system. This instance constitutes a cache system with set associativity. At the cache set level of a cache, a first cache set (e.g., see cache set 702 depicted in Figures 7A, 8A, and 9A) may hold content for use with execution of a first type or a second type by the processor.
For example, the first cache set may hold content for use with non-speculative type or speculative type execution by the processor. Additionally, a second cache set (e.g., see cache sets 704 or 706 depicted in Figures 7A, 8A, and 9A) may hold content for use with execution of the first type or the second type by the processor. For example, in a first time instance, the first cache set is used for normal or non-speculative execution, and the second cache set is used for speculative execution. In a second time instance, the second cache set is used for normal or non-speculative execution, and the first cache set is used for speculative execution. The delegating/swapping of cache sets for non-speculative and speculative execution may use set associativity via a cache set index within or outside the memory address tag, or via a cache set indicator within a memory address tag other than the cache set index (e.g., see Figures 7A, 7B, 8A, 8B, 9A, and 9B). As shown at least in Figures 1B, 1C, 1D, 1E, 7A, 7B, 8A, 8B, 9A, and 9B, a cache set index or cache set indicator may be included in cache block addressing to implement cache set addressing and associativity. Cache block addressing may be stored in memory (e.g., SRAM, DRAM, etc., depending on the design of the computing device, i.e., the design of the processor registers, cache system, another intermediate memory, main memory, etc.). As shown in Figures 6, 7A, 7B, 8A, 8B, 9A, 9B, and 10, each cache set of a cache (e.g., a level 1, 2, or 3 cache) has a corresponding register (e.g., as shown in FIGS. 6 and 10, or registers 712, 714, or 716 shown in FIGS. 7A, 7B, 8A, 8B, 9A, and 9B) and a set index (e.g., see one of set indices 722, 724, 726, and 728 shown in FIGS. 7B, 8A, 8B, 9A, and 9B) that can be swapped between corresponding registers to implement non-speculative and speculative execution for the processor (or, in general, a swap of cache sets for the first and second types of execution of the processor).
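The register-based swapping described above might be modeled as follows. The register names and index values are hypothetical, loosely echoing the reference numerals in the figures; the real mechanism is a hardware exchange of register contents, not a software dictionary.

```python
# Toy model: each cache set has a register holding its set index; exchanging
# the contents of two registers swaps the roles of the corresponding
# physical cache sets between the two execution types.

def swap_registers(registers, reg_a, reg_b):
    """Exchange the set indices held by two cache-set registers."""
    registers = dict(registers)   # leave the input mapping untouched
    registers[reg_a], registers[reg_b] = registers[reg_b], registers[reg_a]
    return registers

# e.g., hypothetical registers initially holding set indices 0b00, 0b01, 0b10
regs = {"reg_712": 0b00, "reg_714": 0b01, "reg_716": 0b10}
swapped = swap_registers(regs, "reg_712", "reg_716")
```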
For example, with respect to Figures 7A and 7B, a first type of execution may use cache sets 702 and 704 and a second type of execution may use cache set 706 during a first time period. Then, for a second time period, the first type of execution may use cache sets 704 and 706 and the second type of execution may use cache set 702. It should be noted that this is only one example use of cache sets, and it should be understood that any of the cache sets, with no predetermined limit, can be used by the first or the second type of execution, depending on the time period and the set index or indicator stored in the register. In some embodiments, several cache sets may be initially allocated for execution of the first type (e.g., non-speculative execution). During execution of the second type (e.g., speculative execution), one of the cache sets originally allocated for execution of the first type (e.g., a reserved cache set) may be used in the execution of the second type. Essentially, the cache set allocated for execution of the second type may initially be a free cache set waiting to be used, or may be selected from the several cache sets used for execution of the first type (e.g., a cache set less likely to be used further in execution of the first type). Generally speaking, in some embodiments, a cache system includes multiple cache sets. The plurality of cache sets may include a first cache set and a second cache set, and a plurality of registers respectively associated with the plurality of cache sets. The plurality of registers may include a first register associated with the first cache set and a second register associated with the second cache set.
The cache system may also include a connection to a command bus coupled between the cache system and the processor, a connection to an address bus coupled between the cache system and the processor, and a connection to a data bus coupled between the cache system and the processor. The cache system may also include a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. In such an embodiment, the cache system may be configured to be coupled between the processor and a memory system. Also, when the connection to the address bus receives a memory address from the processor, the logic circuit may be configured to generate a set index from at least the memory address (e.g., see set index generation 730, 732, 830, 832, 930, and 932 shown in FIGS. 7A, 7B, 8A, 8B, 9A, and 9B, respectively). Furthermore, when the connection to the address bus receives the memory address from the processor, the logic circuit may be configured to determine whether the generated set index matches the content stored in the first register or the content stored in the second register. Further, the logic circuit may be configured to implement a command received via the connection to the command bus through the first cache set in response to the generated set index matching the content stored in the first register, and to implement the command through the second cache set in response to the generated set index matching the content stored in the second register. Furthermore, in response to determining that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit may be configured to allocate the first cache set for caching the data set and to store the generated set index in the first register.
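The match-or-allocate behavior just described can be sketched in a few lines. This is a simplified model under stated assumptions: registers are a Python list, `None` marks an unallocated set, and the return value is the index of the cache set that services the command.

```python
class SetAssociativeCacheSketch:
    """Toy model: each register holds the set index cached by its cache
    set; a command is serviced by the set whose register matches the
    generated set index, with a free set allocated on a miss."""

    def __init__(self, n_sets):
        self.registers = [None] * n_sets   # None = unallocated cache set

    def service(self, generated_index):
        for i, stored in enumerate(self.registers):
            if stored == generated_index:
                return i                   # hit: matching register found
        # data set not currently cached: allocate a set, store the index
        free = self.registers.index(None)
        self.registers[free] = generated_index
        return free
```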
The generated set index may include a predetermined segment of bits in the memory address. The cache system may also include a connection to an execution type signal line from the processor that identifies the execution type (e.g., see connection 604d depicted in Figures 6 and 10). In such an embodiment, the generated set index may be generated further based on the type identified by the execution type signal line. Additionally, the generated set index may include a predetermined segment of bits in the memory address and bits representing the type identified by the execution type signal line (e.g., the generated set index may include, or be derived from, a predetermined segment of bits in the memory address 102e and one or more bits representing the type identified by the execution type signal line, shown in FIG. 1E as execution type 110e). Furthermore, when the first and second registers are in a first state, the logic circuit may be configured to implement a command received from the command bus for accessing the memory system via the first cache set when the execution type is the first type, and to implement the command received from the command bus for accessing the memory system via the second cache set when the execution type is the second type. In addition, when the first and second registers are in a second state, the logic circuit may be configured to implement, when the execution type is the first type, a command received from the command bus for accessing the memory system via a cache set of the plurality of cache sets other than the first cache set, and to implement, when the execution type is the second type, the command received from the command bus for accessing the memory system via a cache set of the plurality of cache sets other than the second cache set.
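One way to combine a predetermined address-bit segment with the execution-type bit is sketched below. The bit positions (`lo`, `width`) and the placement of the type bit in the low position are assumptions for illustration; the disclosure only requires that the generated index be derived from both.

```python
def generate_set_index(memory_address, execution_type, lo=6, width=2):
    """Sketch: extract a predetermined segment of address bits (here bits
    lo..lo+width-1, an assumed choice) and append the execution-type bit."""
    segment = (memory_address >> lo) & ((1 << width) - 1)
    return (segment << 1) | execution_type
```

With this scheme, the same memory address yields different set indices for speculative and non-speculative execution, which is what lets the two execution types be steered to different cache sets.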
In such an instance, each of the plurality of registers may be configured to store a set index, and when the execution type changes from the second type to the first type, the logic circuit may be configured to exchange the content stored in the first register and the content stored in the second register. In some embodiments, the first type is configured to indicate non-speculative execution of instructions by the processor, and the second type is configured to indicate speculative execution of instructions by the processor. In such embodiments, the cache system may further include a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor (e.g., see connection 1002 shown in FIG. 10). The connection to the speculative state signal line may be configured to receive the state of the speculative execution, and the state of the speculative execution may indicate whether the results of the speculative execution will be accepted or rejected. Each of the plurality of registers may be configured to store a set index, and when the execution type changes from the second type to the first type, the logic circuit may be configured to exchange the content stored in the first register and the content stored in the second register when the state of the speculative execution indicates that the results of the speculative execution will be accepted (e.g., see the changes in the contents of the stored registers shown between Figures 7A and 7B, between Figures 8A and 8B, and between Figures 9A and 9B).
Also, when the execution type changes from the second type to the first type, the logic circuit may be configured to maintain the content stored in the first register and the content stored in the second register without change if the state of the speculative execution indicates that the results of the speculative execution will be rejected. Additionally, the cache systems described herein (e.g., cache systems 200, 400, 600, and 1000) may each include or be connected to background synchronization circuitry (e.g., see background synchronization circuitry 1102 shown in FIGS. 11A and 11B). The background synchronization circuitry may be configured to synchronize a cache or cache set prior to reconfiguring the shadow cache as the main cache and/or reconfiguring the main cache as the shadow cache. For example, the contents of a cache or cache set initially delegated for speculative execution (e.g., an additional or alternate cache or cache set delegated for speculative execution) may be synchronized with the corresponding cache or cache set used for normal execution (to cache the contents of normal execution), so that if the speculation is confirmed, the cache or cache set originally delegated for speculative execution can immediately join the group of caches or cache sets for main or non-speculative execution. Additionally, the initial cache or cache set corresponding to the cache or cache set originally delegated for speculative execution may be removed from the group of cache sets for main or non-speculative execution. In such an embodiment, circuitry (e.g., circuitry including the background synchronization circuitry) may be configured to synchronize the caches or cache sets in the background to reduce the impact of synchronization on the cache sets being used by the processor. Furthermore, synchronization of the cache or cache set may continue until the speculation is abandoned, or until the speculation is confirmed and synchronization is complete.
Synchronization can optionally include synchronization with memory (e.g., write back). In some embodiments, a cache system may include a first cache and a second cache, a connection to a command bus coupled between the cache system and the processor, a connection to an address bus coupled between the cache system and the processor, a connection to a data bus coupled between the cache system and the processor, and a connection to an execution type signal line from the processor that identifies the execution type (e.g., see cache systems 200 and 400). Such a cache system may also include a logic circuit coupled to control the first cache and the second cache depending on the execution type, and the cache system may be configured to be coupled between the processor and a memory system. Furthermore, when the execution type is a first type indicating non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit may be configured to copy a portion of the content cached in the first cache to the second cache (e.g., see operation 1202). Additionally, the logic circuit may be configured to copy the portion of the content cached in the first cache to the second cache independently of the current command received on the command bus. Additionally, when the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit may be configured to use the second cache to service subsequent commands from the command bus in response to the execution type changing from the first type to a second type indicating speculative execution of instructions by the processor (e.g., see operation 1208).
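The incremental copy described above (a portion of the first cache is mirrored into the second, independent of the current command) might look like the following. The dict-based caches and the `keys` portion parameter are modeling assumptions; in hardware this would be driven by the background synchronization circuitry, not by software calls.

```python
def background_sync_step(first_cache, second_cache, keys):
    """Copy a portion (the given keys) of the first cache into the second
    cache -- one step of the background synchronization described above."""
    for k in keys:
        if k in first_cache:
            second_cache[k] = first_cache[k]
    return second_cache
```

Repeated steps like this can run between commands until the second cache mirrors the relevant portion of the first, so that a switch to speculative execution finds the shadow copy already warm.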
In such an instance, the logic circuit may be configured to complete synchronizing the portion of the content from the first cache to the second cache before servicing subsequent commands after the execution type changes from the first type to the second type (e.g., see FIG. 12). The logic circuit may also be configured to continue synchronizing the portion of the content from the first cache to the second cache while servicing subsequent commands (e.g., see operation 1210). In such an embodiment, the cache system may also include a configurable data bit, wherein the logic circuit is further coupled to control the first cache and the second cache according to the configurable data bit. Furthermore, in such an embodiment, the cache system may further comprise a plurality of cache sets. For example, the first cache and the second cache may collectively include multiple cache sets, and the multiple cache sets may include a first cache set and a second cache set. The cache system may also include multiple registers, each associated with one of the multiple cache sets. The plurality of registers may include a first register associated with the first cache set and a second register associated with the second cache set. Also, in such an embodiment, the logic circuit may be further coupled to control the plurality of cache sets in accordance with the plurality of registers. In some embodiments, a cache system may include multiple cache sets including a first cache set and a second cache set. The cache system may also include a plurality of registers respectively associated with the plurality of cache sets, the plurality of registers including a first register associated with the first cache set and a second register associated with the second cache set.
In such an embodiment, the cache system may include a plurality of caches, the plurality of caches including a first cache and a second cache, and the first cache and the second cache may collectively include at least part of the plurality of cache sets. Such a cache system may also include a connection to a command bus coupled between the cache system and the processor, a connection to an address bus coupled between the cache system and the processor, a connection to a data bus coupled between the cache system and the processor, a connection to an execution type signal line from the processor identifying the execution type, and a logic circuit coupled to control the plurality of cache sets according to the execution type. In such an embodiment, the cache system may be configured to be coupled between the processor and a memory system. And, when the execution type is a first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit may be configured to copy a portion of the content cached in the first cache set to the second cache set. The logic circuit may also be configured to copy the portion of the content cached in the first cache set to the second cache set independently of the current command received on the command bus. Furthermore, when the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit may be configured to use the second cache set to service subsequent commands from the command bus in response to the execution type changing from the first type to a second type indicating speculative execution of instructions by the processor.
The logic circuit may also be configured to complete synchronizing the portion of the content from the first cache set to the second cache set before servicing subsequent commands after the execution type changes from the first type to the second type. The logic circuit may also be configured to continue synchronizing the portion of the content from the first cache set to the second cache set while servicing subsequent commands. Also, the logic circuit may be further coupled to control the plurality of cache sets in accordance with the plurality of registers. In addition to using a shadow cache to secure speculative execution and synchronizing content between the main and shadow caches to preserve content cached in the main cache in preparation for accepting the content in the shadow cache, a set of alternate cache sets can be used to speed up speculative execution. Additionally, alternate cache sets can be used to speed up speculative execution without using shadow caches. The use of alternate cache sets is suitable for shadow cache implementations because the data held in a cache set used as a shadow cache can be verified and therefore used for normal execution, while some cache sets used as the main cache may not be ready to be used as a shadow cache. Therefore, one or more cache sets may be used as alternate cache sets to avoid delays waiting for cache set availability. In other words, once speculation is confirmed, the contents of the cache set used as the shadow cache are confirmed to be valid and up to date; and thus, the cache set previously used as the shadow cache for speculative execution is used for normal execution. However, some of the cache sets initially used as the normal cache may not be ready for subsequent speculative execution.
Thus, one or more cache sets may be used as spares to avoid delays waiting for cache set availability and to speed up speculative execution. In some embodiments, a cache set in the normal cache cannot be immediately freed for use by the next speculative execution if the synchronization from that cache set in the normal cache to the corresponding cache set in the shadow cache has not been completed. In this case, if there is no alternate cache set, the next speculative execution must wait until synchronization is complete so that the corresponding cache set in the normal cache can be freed. This is just one example of when an alternate cache set is beneficial and can be added to an embodiment. Also, there are many other situations in which a cache set in the normal cache cannot be freed immediately, so a spare cache set can be useful. Furthermore, in some embodiments, speculative execution may reference memory regions that do not overlap with the memory regions cached in the cache sets used in the normal cache. When the results of the speculative execution are accepted, the cache sets in the shadow cache and the normal cache may then all be in the normal cache. This can also cause delays, as the cache system spends time freeing a cache set to support the next speculative execution. To free a cache set, the cache system may identify a cache set, such as the least used cache set, and synchronize the cache set with the memory system. If the cache has newer data than the memory system, the data can be written to the memory system. Additionally, systems using alternate cache sets may also use background synchronization circuitry, such as the background synchronization circuitry 1102 depicted in Figures 11A and 11B. In some embodiments, the background synchronization circuitry 1102 may be part of logic circuitry 606 or 1006. When the initial speculation is confirmed, the cache set used in the initial speculation may be swapped to join the group of cache sets for main execution.
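The freeing step described above (identify, e.g., the least used cache set and write back any data newer than the memory system) can be sketched as follows. The data structures here (use counts, a per-set dirty map, a dict-backed memory) are modeling assumptions, not the disclosed hardware.

```python
def choose_spare_set(use_counts, dirty_data, memory):
    """Free the least used cache set for use as a spare, first writing
    back any of its data that is newer than the memory system."""
    victim = min(use_counts, key=use_counts.get)   # least used cache set
    if victim in dirty_data:
        memory.update(dirty_data.pop(victim))      # write back newer data
    return victim
```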
Instead of using the cache set from the previous main execution, which would be used in the case of a speculation failure, the alternate cache set may be made immediately available for the next speculative execution. Additionally, the alternate cache set may be updated for the next speculative execution via the background synchronization circuitry. Also, due to background synchronization, the alternate cache set may be ready for use when the current cache set for speculative execution is ready to be accepted for normal execution. In this way, there is no delay waiting for the next cache set to become usable for the next speculative execution. To prepare for the next speculative execution, the alternate cache set may be synchronized to the normal cache set that is likely to be used for the next speculative execution, or to the least used cache set in the system. In addition to using shadow caches, synchronizing content between main and shadow caches, and using alternate cache sets, extended tags can be used to improve the use of interchangeable caches and cache sets for different types of execution by the processor (e.g., speculative and non-speculative execution). There are many different ways of using extended tags to address cache sets and cache blocks within a cache system. Two example approaches are shown in FIGS. 16 and 17. In general, cache sets and cache blocks can be selected via memory addresses. In some instances, the selection is via set associativity. The two examples in Figures 16 and 17 use set associativity. In Figure 16, set associativity is implicitly defined (e.g., defined by an algorithm that can be used to determine which tag should be in which cache set for a given execution type). In Figure 17, set associativity is implemented via the bits of the cache set index in the memory address.
Furthermore, portions of the functionality illustrated in Figures 16 and 17 may be implemented without the use of set associativity (but this is not depicted in Figures 16 and 17). In some embodiments, including the embodiments shown in Figures 16 and 17, a block index may be used as an address within an individual cache set to identify a particular cache block in the cache set. Also, the extended tag can be used as the address of the cache set. The block index of the memory address can be used with each cache set to obtain a cache block and the tag associated with the cache block. Additionally, as shown in Figures 16 and 17, a tag comparison circuit may compare extended tags generated from the cache sets with an extended cache tag generated from the memory address and the current execution type. The output of the comparison can be a cache hit or miss. The construction of the extended tag ensures that there is at most one hit among the cache sets. If there is a hit, the cache block from the selected cache set provides the output. Otherwise, the data associated with the memory address is not cached in, or output from, any of the cache sets. Briefly, the extended tags depicted in Figures 16 and 17 are used to select cache sets, and the block index is used to select cache blocks and their tags within a cache set. Furthermore, as shown in FIG. 17, the combination of tags and cache set indices in the system may provide functionality similar to that of using only tags (as shown in FIG. 16). However, in Figure 17, by separating the tag and the cache set index, the cache set does not have to store redundant copies of the cache set index, since the cache set can be associated with a cache set register that holds the cache set index. In Figure 16, however, the cache set does require a redundant copy of the cache set indicator to be stored in each of the blocks of the cache set.
However, since the tags have the same cache set indicator in the embodiment depicted in Figure 16, the indicator may be stored once in a register for the cache set (e.g., see the cache set registers shown in Figure 17). The benefit of using a cache set register is that the tags can be shorter compared to implementations that do not hold part of the tag in a cache set register. Both of the embodiments shown in Figures 16 and 17 have cache set registers configured to hold the execution type, such that the corresponding cache sets can be used to implement different execution types (e.g., speculative and non-speculative execution types). However, the embodiment shown in Figure 17 has registers that are further configured to hold both the execution type and the cache set index. When the execution type is combined with the cache set index to form an extended cache set index, the extended cache set index may be used to select one of the cache sets independently of addressing by the tags of the cache blocks. Furthermore, the two-step selection can be similar to conventional two-step selection using the cache set index, where the tags from the selected cache set are compared with the tag in the address to determine hits or misses, or it can be combined with extended tags to support the interchange of cache sets for different execution types. In addition to using extended tags and other techniques disclosed herein to improve the use of interchangeable caches and cache sets for different types of execution by the processor, circuitry included in a cache system or connected to a cache system may be used to map the physical outputs from the cache sets of the cache hardware to a logical main cache and a logical shadow cache for normal and speculative execution by the processor, respectively.
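The extended-tag comparison described above can be sketched as a simple concatenation-and-compare. The tag width and the placement of the execution-type bit are assumptions for illustration; the point is only that equality of extended tags implies both the address tag and the execution type match, so at most one cache set can hit.

```python
TAG_BITS = 16   # assumed tag width; the real width is implementation-specific

def extended_tag(execution_type, tag):
    """Concatenate the execution type with the address tag."""
    return (execution_type << TAG_BITS) | tag

def is_hit(stored_type, stored_tag, addr_tag, current_type):
    """Compare the extended tag from a cache set against the extended
    cache tag generated from the memory address and current execution type."""
    return extended_tag(stored_type, stored_tag) == extended_tag(current_type, addr_tag)
```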
The mapping may be according to at least one control register (e.g., a physical-to-logical-set-mapping (PLSM) register). Furthermore, disclosed herein are computing devices with cache systems having interchangeable cache sets that utilize mapping circuitry, such as mapping circuitry 1830 shown in FIG. 18, to map physical cache set outputs to logical cache set outputs. A processor coupled to the cache system may execute two types of threads, such as speculative and non-speculative execution threads. A speculative thread executes speculatively on a condition that has not yet been evaluated. Data for the speculative thread may be in the logical shadow cache. Data for a non-speculative thread may be in the logical main cache or normal cache. Subsequently, when the result of evaluating the condition becomes available, the system may retain the results of executing the speculative thread when the condition requires the execution of that thread, or discard them when the thread is to be removed. Through the mapping circuit, the hardware circuit for the shadow cache can be repurposed as the hardware circuit for the main cache by changing the contents of the control register. Thus, for example, if the speculative thread needs to be executed, the main cache does not need to be synchronized with the shadow cache. In a conventional cache, each cache set is statically associated with a particular value of "set index S"/"block index L". In the cache system disclosed herein, any cache set can be used for any index value S/L and for any purpose as main or shadow cache. The use of a cache set can be defined by the data in the cache set register associated with that cache set. Selection logic can then be used to select the appropriate result based on the S/L index value and the manner in which the cache set is used. For example, four cache sets (cache set 0 to cache set 3) may initially be used for the main caches of S/L=00, 01, 10, and 11, respectively.
Assuming that speculative execution does not change the cache sets defined by 01, 10, and 11, the fourth cache set can be used as a speculative cache with S/L=00. If the results of the speculative execution are required, the map data can be changed to indicate that the main caches of S/L=00, 01, 10, and 11 are provided by the fourth cache set, cache set 1, cache set 2, and cache set 3, respectively. Cache set 0 may then be freed or invalidated for subsequent use in speculative execution. If the next speculative execution needs to change the cache set with S/L=01, then cache set 0 can be used as a shadow cache (e.g., copied from cache set 1 and used for lookups of addresses with S/L equal to "01").

Furthermore, the cache system and processor do not merely swap back and forth between a predetermined main thread and a predetermined speculative thread. Consider the speculative execution of the following pseudo-program:

instruction A;
if condition = true, then instruction B;
end conditional loop;
instruction C; and
instruction D.

For this pseudo-program, the processor can run two threads.

Thread A:
instruction A;
instruction C; and
instruction D.

Thread B:
instruction A;
instruction B;
instruction C; and
instruction D.

The execution of instruction B is speculative because it depends on the test result of "condition=true" rather than "condition=false". Execution of instruction B is only required when condition=true. When the result of the test "condition=true" becomes available, execution of thread B may have reached instruction D, and execution of thread A may have reached instruction C. If the test result requires execution of instruction B, then thread B's cache contents are correct and thread A's cache contents are incorrect. Then, all changes made in the cache according to thread B should be maintained, the processor may continue execution at instruction C using the cache with the results of executing instruction B, and thread A is terminated.
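The accept-or-discard outcome for the two threads can be modeled with a minimal software sketch; the thread and cache names and the stand-in "result" strings are illustrative only, not the hardware design:

```python
# Thread A skips instruction B; thread B executes it speculatively. Each thread
# fills its own cache; once the condition is known, only one cache's contents
# survive (in the hardware scheme no data is copied -- the cache sets are
# simply remapped).

def run_thread(instructions, cache):
    for instr in instructions:
        cache[instr] = "result of " + instr  # stand-in for real memory side effects
    return cache

main_cache = run_thread(["A", "C", "D"], {})         # thread A (non-speculative)
shadow_cache = run_thread(["A", "B", "C", "D"], {})  # thread B (speculative)

condition = True  # test result becomes available after both threads started
if condition:
    main_cache = shadow_cache  # accept speculative results; thread A terminated
# else: the shadow cache contents would be discarded or invalidated
```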
Since the changes made according to thread B are in the shadow cache, the contents of the shadow cache should be accepted as the main cache. If the test result does not require execution of instruction B, then the result of thread B is discarded (e.g., the contents of the shadow cache are discarded or invalidated).

The cache sets used for shadow and normal caches may be swapped or changed according to the mapping circuit and a control register (e.g., a physical-to-logical-set-mapping (PLSM) register). In some embodiments, a cache system may include a plurality of cache sets having a first cache set configured to provide a first physical output upon a cache hit and a second cache set configured to provide a second physical output upon a cache hit. The cache system may also include a connection to a command bus coupled between the cache system and the processor, and a connection to an address bus coupled between the cache system and the processor. The cache system may also include a control register, and a mapping circuit coupled to the control register to map respective physical outputs of the plurality of cache sets to a first logical cache and a second logical cache according to the state of the control register.
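The mapping circuit just described can be modeled with a simplified software sketch; the one-bit control state and all names are illustrative assumptions, not the hardware implementation:

```python
# Simplified model of the mapping circuit: the control register state
# determines which physical cache set output feeds the first (e.g., main)
# logical cache and which feeds the second (e.g., shadow) logical cache.

def map_outputs(control_state, first_physical_out, second_physical_out):
    if control_state == 0:  # first state
        return {"first_logical": first_physical_out,
                "second_logical": second_physical_out}
    # second state: the physical outputs are swapped between the logical caches
    return {"first_logical": second_physical_out,
            "second_logical": first_physical_out}

# Flipping the control state swaps the roles without copying any cached data:
assert map_outputs(0, "set0-data", "set1-data")["first_logical"] == "set0-data"
assert map_outputs(1, "set0-data", "set1-data")["first_logical"] == "set1-data"
```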
The cache system may be configured to be coupled between the processor and the memory system. When the connection to the address bus receives a memory address from the processor and the control register is in a first state, the mapping circuit may be configured to: map the first physical output to the first logical cache for a first type of execution by the processor, to implement commands received from the command bus for accessing the memory system via the first cache set during execution of the first type; and map the second physical output to the second logical cache for a second type of execution by the processor, to implement commands received from the command bus for accessing the memory system via the second cache set during execution of the second type. And, when the connection to the address bus receives a memory address from the processor and the control register is in a second state, the mapping circuit may be configured to: map the first physical output to the second logical cache, to implement commands received from the command bus for accessing the memory system via the first cache set during execution of the second type; and map the second physical output to the first logical cache, to implement commands received from the command bus for accessing the memory system via the second cache set during execution of the first type.

In some embodiments, the first logical cache is a normal cache for non-speculative execution by the processor and the second logical cache is a shadow cache for speculative execution by the processor.

Additionally, in some embodiments, the cache system may further include a plurality of registers respectively associated with the plurality of cache sets, the plurality of registers including a first register associated with the first cache set and a second register associated with the second cache set. The cache system may also include a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. When the connection to the address bus receives a memory address from the processor, the logic circuit may be configured to generate a set index from at least the memory address, and to determine whether the generated set index matches the content stored in the first register or the content stored in the second register. Also, the logic circuit may be configured to implement the command received in the connection to the command bus via the first cache set in response to the generated set index matching the content stored in the first register, and to implement the command via the second cache set in response to the generated set index matching the content stored in the second register.

In some embodiments, the mapping circuit may be part of or connected to the logic circuit, and the state of the control register may control the state of a cache set of the plurality of cache sets. In some embodiments, the state of the control register may control the state of a cache set of the plurality of cache sets by changing the valid bits of each block of the cache set.

Additionally, in some examples, the cache system may further include a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor. The connection to the speculative state signal line may be configured to receive the state of the speculative execution, and the state of the speculative execution may indicate whether the results of the speculative execution will be accepted or rejected.
When the execution type is changed from speculative execution to non-speculative execution, the logic circuit may be configured to change the state of the first and second cache sets via the control register if the state of the speculative execution indicates that the results of the speculative execution will be accepted (e.g., when the speculative execution will become the main thread of execution). Also, when the execution type is changed from speculative execution to non-speculative execution, the logic circuit may be configured to maintain the state of the first and second cache sets without change via the control register if the state of the speculative execution indicates that the results of the speculative execution will be rejected.

In some embodiments, the mapping circuit is part of or connected to the logic circuit, and the state of the control register may control the state of a cache register of the plurality of cache registers via the mapping circuit. In such an example, the cache system may further include a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor. The connection to the speculative state signal line may be configured to receive the state of the speculative execution, and the state of the speculative execution indicates whether the results of the speculative execution will be accepted or rejected. When the execution type is changed from speculative execution to non-speculative execution, the logic circuit may be configured to change the state of the first and second registers via the control register if the state of the speculative execution indicates that the results of the speculative execution will be accepted.
Also, when the execution type is changed from speculative execution to non-speculative execution, the logic circuit may be configured to maintain the state of the first and second registers without change via the control register if the state of the speculative execution indicates that the results of the speculative execution will be rejected.

Additionally, the present disclosure includes techniques for speculative instruction execution using multiple interchangeable caches, each interchangeable as a shadow cache or a main cache. Speculative instruction execution may take place in a processor of a computing device. The processor can execute instructions on two different types of threads. One of the threads executes speculatively (e.g., under a condition that has not yet been evaluated). Data for speculative threads may be in a logical cache that acts as a shadow cache. The main thread's data may be in a logical cache that acts as the main cache. Subsequently, when the result of evaluating the condition becomes available, the processor may retain the results of executing the speculative thread when the condition requires execution of that thread, or remove the results otherwise. A cache hardware circuit acting as a shadow cache can be repurposed as a main cache hardware circuit by changing the contents of the registers. Therefore, the main cache does not need to be synchronized with the shadow cache when speculative threads need to be executed.

The techniques disclosed herein also relate to the use of a unified cache structure that can be used to implement a main cache and a shadow cache with improved performance. In a unified cache structure, the outputs of the cache sets can be dynamically remapped using a register set to swap cache sets in the main cache and in the shadow cache. When speculative execution succeeds, the cache set used for the shadow cache has the correct data and can be remapped as the corresponding cache set for the main cache.
This eliminates the need to copy data from the shadow cache to the main cache, as is done by other techniques using shadow and main caches.

In general, the cache can be configured as multiple sets of blocks. Each block set can have multiple blocks, and each block can hold a number of bytes. A memory address can be divided into three fragments for accessing the cache: a tag, a block index (which can be used to address a set within the multiple sets), and a block offset (which can be used to address a byte within a block of bytes). For each block in a set, the cache stores not only the data from memory, but also the tag of the address from which the data was loaded and a field indicating whether the contents of the block are valid. Data may be retrieved from the cache using the block index (e.g., set ID) and the block offset (e.g., byte ID). The tag in the retrieved data is compared to the tag portion of the address. A matching tag means that data is cached for that address. Otherwise, it means that data may be cached for another address mapped to the same location in the cache.

In the case of techniques using multiple swappable caches, the physical cache sets of the swappable caches are not hardwired to the main or shadow caches. A physical cache set can be used as a main cache set or a shadow cache set. Also, a register set can be used to specify whether a physical cache set is currently being used as a main cache set or a shadow cache set. In general, a map may be constructed to translate the output of a physical cache set into a logical output of a corresponding cache set represented by a block index (e.g., set ID) and a main or shadow status. Remapping allows any available physical cache to be used as a shadow cache.

In some embodiments, the unified cache architecture may remap shadow caches (e.g., speculative caches) to main caches, and may remap main caches to speculative caches.
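The three-fragment address division described above (tag, block index, block offset) can be sketched as follows; the field widths K, L, and M are illustrative assumptions, not values fixed by this disclosure:

```python
K, L, M = 20, 6, 6  # illustrative tag, block-index and block-offset widths (A = 32)

def split_address(addr):
    block_offset = addr & ((1 << M) - 1)        # addresses a byte within a block
    block_index = (addr >> M) & ((1 << L) - 1)  # addresses a set among the sets
    tag = addr >> (M + L)                       # identifies which address is cached
    return tag, block_index, block_offset

tag, index, offset = split_address(0x12345678)
# The three fields always reassemble into the original address:
assert (tag << (L + M)) | (index << M) | offset == 0x12345678
```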
It should be understood that a design may include any number of caches or cache sets interchangeable between a main cache or cache set and a speculative cache or cache set.

It should be understood that there is no physical distinction in the hardwiring of the main cache or cache set and the speculative cache or cache set. Also, in some embodiments, there is no physical distinction in the hardwiring of the logical units described herein. It should be understood that interchangeable caches or cache sets do not have different cache capacities and structures; otherwise, such caches or cache sets would not be interchangeable. Furthermore, a physical cache set can be dynamically configured to be main or speculative, e.g., without a priori determination.

Furthermore, it should be understood that interchangeability occurs at the cache level and not at the cache block level. Interchangeability at the cache block level may allow the main and shadow caches to have different capacities, and thus they would not be interchangeable.

Furthermore, in some embodiments, when the speculation by the processor is successful, and one cache is being used as the main cache and another cache is being used as a speculative or shadow cache, the valid bits associated with the cache blocks of the main cache are all set to indicate invalid (e.g., invalid by a "0" bit value). In such an embodiment, the initial state of all valid bits of the speculative cache indicates invalid, but then changes to indicate valid due to a successful speculation. In other words, the previous state of the main cache is invalidated, and the previous state of the speculative cache is set from invalid to valid and made accessible by the main thread.

In some embodiments, the PLSM register for the main cache may be changed from indicating the main cache to indicating the speculative cache.
Changing the indication from the main cache to the speculative cache via the PLSM register may be done by the PLSM register receiving a valid bit indicating an invalid main cache after a successful speculation. For example, after a successful speculation, and if the first cache is initially the main cache and the second cache is initially the speculative cache, an invalid indication of bit "0" may replace the least significant bit in the 3-bit PLSM register for the first cache, which changes "011" to "010" (or "3" to "2"). Also, for the 3-bit PLSM register for the second cache, a valid indication of bit "1" may replace the least significant bit in the PLSM register, which may change "010" to "011" (or "2" to "3"). Thus, as the example shows, a PLSM register that was originally used for the first cache (e.g., the main cache) and initially selected the first cache is changed to select the second cache (e.g., the speculative cache) after a successful speculation. And, as the example shows, a PLSM register that was originally used for the second cache (e.g., the speculative cache) and initially selected the second cache is changed to select the first cache (e.g., the main cache) after a successful speculation. With this design, the main thread of the processor may first access the cache initially designated as the main cache, and then, after a successful speculation by the processor, access the cache initially designated as the speculative cache. Also, a speculative thread of the processor may first access the cache initially designated as the speculative cache, and then, after a successful speculation by the processor, access the cache initially designated as the main cache.

Figure 1A shows a memory address 102a partitioned into a tag portion 104a, a block index portion 106a, and a block offset portion 108a.
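The least-significant-bit replacement described for the 3-bit PLSM registers can be sketched as a simple bit operation; this is only an illustration of the stated "011"/"010" example:

```python
def replace_lsb(plsm_value, valid_bit):
    # Replace the least significant bit of a 3-bit PLSM register value.
    return (plsm_value & 0b110) | valid_bit

# First cache (initially the main cache): an invalid indication of "0"
# changes "011" to "010" (i.e., 3 to 2).
assert replace_lsb(0b011, 0) == 0b010
# Second cache (initially the speculative cache): a valid indication of "1"
# changes "010" to "011" (i.e., 2 to 3).
assert replace_lsb(0b010, 1) == 0b011
```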
According to some embodiments of the present disclosure, an execution type 110a may be combined with a portion of a memory address to control cache operations. The total number of bits used to control addressing in a cache system according to some embodiments disclosed herein is A bits. Also, the sum of the bits for portions 104a, 106a, and 108a and execution type 110a equals A bits. The tag portion 104a is K bits, the block index portion 106a is L bits, the block offset portion 108a is M bits, and the execution type 110a is one or more T bits.

For example, for a given execution type, data for all memory addresses with the same block index portion 106a and block offset portion 108a may be stored in the same physical location in the cache. When data at memory address 102a is stored in the cache, the tag portion 104a is also stored for the block containing the memory address, to identify which of the addresses with the same block index portion 106a and block offset portion 108a is currently being cached at that location in the cache.

Data at a memory address may be cached in different locations in the unified cache structure for different types of execution. For example, data may be cached in the main cache during non-speculative execution, and subsequently cached in the shadow cache during speculative execution. The execution type 110a can be combined with the tag portion 104a to select from caches that can be dynamically configured for use in main and speculative execution without restriction. There may be many different ways of implementing the use of a combination of execution type 110a and tag portion 104a to make a selection.
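One illustrative possibility is to concatenate the execution type bits with the tag portion to form an extended tag, so that a lookup matches only when both the tag and the execution type agree; the field widths below are assumptions for the sketch:

```python
K = 20  # illustrative tag width in bits

def extended_tag(exec_type, tag):
    # Concatenate the execution type (in the high bits) with the tag.
    return (exec_type << K) | tag

NON_SPECULATIVE, SPECULATIVE = 0, 1
stored = extended_tag(NON_SPECULATIVE, 0xABCDE)  # cached during normal execution

assert extended_tag(NON_SPECULATIVE, 0xABCDE) == stored  # tag and type match: hit
assert extended_tag(SPECULATIVE, 0xABCDE) != stored      # same tag, wrong type: miss
```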
For example, the logic circuit 206 depicted in Figures 2 and 4 may use the execution type 110a and/or the tag portion 104a.

In a relatively simple implementation, execution type 110a may be combined with tag portion 104a to form an extended tag when determining whether a cache location contains data for memory address 102a for execution of the current type of instructions. For example, a cache system can use tag portion 104a to select cache locations without distinguishing execution types; and when tag portion 104a is combined with execution type 110a to form an extended tag, the extended tag can be used to select cache locations according to execution types (e.g., speculative execution and non-speculative execution), so that techniques for shadow caching can be implemented to enhance security. Furthermore, since information about the execution type associated with cached data is shared among many cache locations (e.g., in a cache set, or in a cache with multiple cache sets), it is not necessary to store the execution type at individual locations; and selection mechanisms (e.g., switches, filters, or multiplexers, such as data multiplexers) may be used to implement selections based on execution types. Alternatively, physical caches or physical cache sets used for different types of execution may be remapped to logical caches that are pre-associated with the different types of execution, respectively. Thus, the use of logical caches may be selected according to execution type 110a.

FIG. 1B shows another way of partitioning a memory address 102b to control cache operations. The memory address 102b is partitioned into a tag portion 104b, a cache set index portion 112b, a block index portion 106b, and a block offset portion 108b. The total bits of memory address 102b are A bits. And, the sum of the bits for the four parts is equal to the A bits of address 102b.
The tag portion 104b is K bits, the block index portion 106b is L bits, the block offset portion 108b is M bits, and the cache set index portion 112b is S bits. Thus, for address 102b, its A bits = K bits + L bits + M bits + S bits. The partitioning according to the memory address 102b of FIG. 1B allows the implementation of set associativity when caching data.

For example, multiple cache sets may be configured in a cache, where each cache set may be addressed using the cache set index 112b. Data associated with the same cache set index may be cached in the same cache set. The tag portion 104b of a data block cached in a cache set may be stored in the cache in association with the data block. When address 102b is used to retrieve data from the cache set identified by cache set index 112b, the tag portion of the data block stored in the cache set may be retrieved and compared to tag portion 104b, to determine whether there is a match between the tag 104b of the address 102b of the access request and the tag stored in the cache set identified by the cache set index 112b for the cache block identified by the block index 106b. If there is a match (e.g., a cache hit), the cache block stored in the cache set is for memory address 102b; otherwise, the cache block stored in the cache set is for another memory address having the same cache set index 112b and the same block index 106b, which causes a cache miss. In response to the cache miss, the cache system accesses main memory to retrieve the data block according to address 102b. To implement shadow caching techniques, cache set index 112b may be combined with execution type 110a to form an extended cache set index.
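The set lookup just described, extended with an execution-type bit in the cache set index, can be sketched as follows; all names, widths, and values are illustrative assumptions:

```python
S = 2  # illustrative cache-set-index width in bits

def extended_set_index(exec_type, set_index):
    # Combine the execution type with the cache set index.
    return (exec_type << S) | set_index

# extended set index -> (stored tag, cached data); one entry per cache set
cache_sets = {
    extended_set_index(0, 0b01): (0x1A2, "main-cache block"),
    extended_set_index(1, 0b01): (0x1A2, "shadow-cache block"),
}

def lookup(exec_type, set_index, tag):
    entry = cache_sets.get(extended_set_index(exec_type, set_index))
    if entry is not None and entry[0] == tag:
        return ("hit", entry[1])
    return ("miss", None)  # the cache system would then access main memory

assert lookup(0, 0b01, 0x1A2) == ("hit", "main-cache block")
assert lookup(1, 0b01, 0x1A2) == ("hit", "shadow-cache block")
assert lookup(0, 0b01, 0x1FF)[0] == "miss"  # same set, different tag
```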
Thus, cache sets for different types of execution and different cache set indexes may be addressed using an extended cache set index that identifies both the cache set index and the execution type.

In FIG. 1B, the cache set index portion 112b is extracted from a predetermined portion of the address 102b. Data stored at memory addresses with different set indexes may be cached in different cache sets of the cache to enforce set associativity when caching data. A cache set of the cache may be selected using the cache set index (e.g., portion 112b of address 102b). Alternatively, cache set associativity may be implemented using the partitioning scheme illustrated in FIG. 1C via a tag 104c that includes a cache set indicator. Optionally, a cache set indicator is calculated from tag 104c and used as a cache set index to address the cache set. Alternatively, set associativity may be implemented directly via tag 104c, such that a cache set storing tag 104c is selected for a cache hit; and when no cache set stores tag 104c, a cache miss is determined. Alternatively, the address 102d may be partitioned for cache operations as illustrated in FIG. 1D, where the tag portion 104d contains the cache set index 112d, and where the cache sets are not addressed explicitly and separately using the cache set index. For example, to implement shadow caching techniques, a combination of execution type 110e and a tag 104e with an embedded cache set indicator (depicted in FIG. 1E) may be used to select, for the correct execution type, the cache set storing the same tag 104e for a cache hit. A cache miss is determined when no cache set has a matching execution type and stores the same tag 104e.

Furthermore, FIG. 1C depicts another way of partitioning a memory address 102c to control cache operations. The memory address 102c is partitioned into a tag portion 104c with a cache set indicator, a block index portion 106c, and a block offset portion 108c.
The total bits of memory address 102c are A bits. And, the sum of the bits for the three parts is equal to the A bits of address 102c. The tag portion 104c is K bits, the block index portion 106c is L bits, and the block offset portion 108c is M bits. Thus, for address 102c, its A bits = K bits + L bits + M bits. As mentioned, the partitioning according to memory address 102c of FIG. 1C allows the implementation of set associativity when caching data.

Furthermore, FIG. 1D depicts another way of partitioning a memory address 102d to control cache operations. The memory address 102d is partitioned into a tag portion 104d with a cache set index 112d, a block index portion 106d, and a block offset portion 108d. The total bits of the memory address 102d are A bits. And, the sum of the bits for the three parts is equal to the A bits of address 102d. The tag portion 104d is K bits, the block index portion 106d is L bits, and the block offset portion 108d is M bits. Therefore, for address 102d, its A bits = K bits + L bits + M bits. As mentioned, the partitioning according to the memory address 102d of FIG. 1D allows the enforcement of set associativity when caching data.

Furthermore, FIG. 1E depicts another way of partitioning a memory address 102e to control cache operations. FIG. 1E shows a memory address 102e partitioned into a tag portion 104e with a cache set indicator, a block index portion 106e, and a block offset portion 108e. According to some embodiments of the present disclosure, execution type 110e may be combined with a portion of a memory address to control cache operations. The total number of bits used to control addressing in a cache system according to some embodiments disclosed herein is A bits. Also, the sum of the bits for portions 104e, 106e, and 108e and execution type 110e equals A bits.
The tag portion 104e is K bits, the block index portion 106e is L bits, the block offset portion 108e is M bits, and the execution type 110e is T bits.

FIGS. 2, 3A, and 3B show example aspects of example computing devices, each computing device including a cache system that can perform interchange for a first type of execution and a second type of execution (e.g., a cache system that implements the shadow cache technology for enhanced security), according to some embodiments of the present disclosure.

FIG. 2 particularly shows aspects of an example computing device including a cache system 200 having multiple caches (see, e.g., caches 202a, 202b, and 202c). The example computing device is also shown with a processor 201 and a memory system 203. Cache system 200 is configured to be coupled between processor 201 and memory system 203.

The cache system 200 is shown as including a connection 204a to a command bus 205a coupled between the cache system and the processor 201. The cache system 200 is shown as including a connection 204b to an address bus 205b coupled between the cache system and the processor 201. The addresses 102a, 102b, 102c, 102d, and 102e depicted in FIGS. 1A, 1B, 1C, 1D, and 1E, respectively, may each be communicated via the address bus 205b, depending on the implementation of the cache system 200. The cache system 200 is also shown to include a connection 204c to a data bus 205c coupled between the cache system and the processor 201. The cache system 200 is also shown to include a connection 204d to an execution type signal line 205d from the processor 201 that identifies the execution type.

Although not shown in FIG. 2, cache system 200 may include configurable data bits. The configurable data bits may be included in, or be, the data 312 in the first state shown in FIG. 3A, and may be included in, or be, the data 314 in the second state shown in FIG. 3B.
Memory access requests from the processor and memory usage by the processor can be controlled through the command bus 205a, the address bus 205b, and the data bus 205c.

In some embodiments, cache system 200 may include a first cache (e.g., see cache 202a) and a second cache (e.g., see cache 202b). In such an embodiment, as shown in FIG. 2, cache system 200 may include a logic circuit 206 coupled to processor 201. Furthermore, in such an embodiment, the logic circuit 206 may be configured to control the first cache (e.g., see cache 202a) and the second cache (e.g., see cache 202b) based on configurable data bits.

When the configurable data bits are in the first state (e.g., see data 312 depicted in FIG. 3A), the logic circuit 206 may be configured to implement, when the execution type is the first type, commands received from the command bus 205a for accessing the memory system 203 via the first cache. Furthermore, when the configurable data bits are in the first state (e.g., see data 312 depicted in FIG. 3A), the logic circuit 206 may be configured to implement, when the execution type is the second type, commands received from the command bus 205a for accessing the memory system 203 via the second cache.

When the configurable data bits are in the second state (e.g., see data 314 depicted in FIG. 3B), the logic circuit 206 may be configured to implement, when the execution type is the first type, commands received from the command bus 205a for accessing the memory system 203 via the second cache. Furthermore, when the configurable data bits are in the second state (e.g., see data 314 depicted in FIG. 3B), the logic circuit 206 may be configured to implement, when the execution type is the second type, commands received from the command bus 205a for accessing the memory system 203 via the first cache.

In some embodiments, the logic circuit 206 is configured to toggle the configurable data bits when the execution type is changed from the second type to the first type.

Furthermore, as shown in FIG. 2, the cache system 200 further includes a connection 208a to a second command bus 209a coupled between the cache system and the memory system 203. The cache system 200 also includes a connection 208b to a second address bus 209b coupled between the cache system and the memory system 203. The cache system 200 also includes a connection 208c to a second data bus 209c coupled between the cache system and the memory system 203. When the configurable data bits are in the first state and the execution type is the first type (e.g., a non-speculative type), the logic circuit 206 is configured to provide commands for accessing the memory system 203 via the first cache to the second command bus 209a. When the configurable data bits are in the first state and the execution type is the second type (e.g., a speculative type), the logic circuit 206 is further configured to provide commands for accessing the memory system via the second cache to the second command bus 209a.

When the configurable data bits are in the second state and the execution type is the first type, the logic circuit 206 is configured to provide commands for accessing the memory system 203 via the second cache to the second command bus 209a.
Furthermore, when the configurable data bits are in the second state and the execution type is the second type, the logic circuit 206 is configured to provide commands for accessing the memory system 203 via the first cache to the second command bus 209a.

In some embodiments, connection 204a to command bus 205a is configured to receive read commands or write commands from processor 201 for accessing memory system 203. Additionally, connection 204b to address bus 205b may be configured to receive memory addresses from processor 201 for accessing memory system 203 for the read commands or write commands. Additionally, connection 204c to data bus 205c may be configured to communicate data to processor 201 for the processor to read for a read command. Also, connection 204c to data bus 205c may be configured to receive data from processor 201 to write in memory system 203 for a write command. Additionally, connection 204d to execution type signal line 205d may be configured to receive an identification of the execution type from processor 201 (e.g., an identification of a non-speculative or speculative type of execution by the processor).

In some embodiments, the logic circuit 206 may be configured to select the first cache for a memory access request from the processor 201 (e.g., one of the commands received from the command bus for accessing the memory system) when the configurable data bit is in the first state and the connection 204d to the execution type signal line 205d receives an indication of the first type (e.g., a non-speculative type). In addition, the logic circuit 206 may be configured to select the second cache for a memory access request from the processor 201 when the configurable data bit is in the first state and the connection 204d to the execution type signal line 205d receives an indication of the second type (e.g., a speculative type).
In addition, the logic circuit 206 may be configured to select the second cache for the memory access request from the processor 201 when the configurable data bit is in the second state and the connection 204d to the execution type signal line 205d receives an indication of the first type. Also, the logic circuit 206 may be configured to select the first cache for the memory access request from the processor 201 when the configurable data bit is in the second state and the connection 204d to the execution type signal line 205d receives an indication of the second type. FIG. 3A particularly shows aspects of an example computing device including a cache system (e.g., cache system 200 ) having multiple caches (e.g., see caches 302 and 304 ). The example computing device is also shown having a register 306 that stores data 312 that may include the configurable bit. Register 306 may be connected to or part of logic circuit 206 . In FIG. 3A , during a first time instance ("time instance X"), register 306 is shown storing data 312 , which may be the configurable bit in a first state. Content 308a received from a first cache (e.g., cache 302 ) during the first time instance contains content for a first type of execution. Also, content 310a received from a second cache (e.g., cache 304 ) during the first time instance contains content for a second type of execution. FIG. 3B particularly shows aspects of an example computing device including a cache system (e.g., cache system 200 ) having multiple caches (e.g., see caches 302 and 304 ). The example computing device is also shown having a register 306 that stores data 314 that may include the configurable bit. In FIG. 3B , it is shown that during a second time instance ("time instance Y"), register 306 stores data 314 , which may be the configurable bit in a second state. Content 308b received from the first cache (e.g., cache 302 ) during the second time instance contains content for the second type of execution.
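The four-case cache selection described above (configurable bit state crossed with execution type) can be sketched in software. This is an illustrative model only, not the patent's circuit; the function and constant names are assumptions introduced for the sketch.

```python
# Hypothetical software model of logic circuit 206's cache selection.
# The names (select_cache, FIRST_TYPE, SECOND_TYPE) are illustrative.

FIRST_TYPE = "non-speculative"   # first type of execution (e.g., normal)
SECOND_TYPE = "speculative"      # second type of execution

def select_cache(config_bit: int, execution_type: str) -> str:
    """Return which cache services a memory access request.

    In the first state (bit == 0), the first cache serves first-type
    execution and the second cache serves second-type execution; in the
    second state (bit == 1), the roles of the two caches are swapped.
    """
    first_cache_serves_first_type = (config_bit == 0)
    if execution_type == FIRST_TYPE:
        return "first cache" if first_cache_serves_first_type else "second cache"
    return "second cache" if first_cache_serves_first_type else "first cache"
```

Toggling the bit thus swaps which physical cache plays the non-speculative role without moving any cached data.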
Also, content 310b received from the second cache (e.g., cache 304 ) during the second time instance contains content for the first type of execution. The illustrated lines 320 connecting register 306 to caches 302 and 304 may be part of logic circuit 206 . In some embodiments, instead of using a configurable bit to control the cache usage of the cache system 200 , another form of data may be used to control the cache usage of the cache system. For example, logic circuit 206 may be configured to control a first cache (e.g., see cache 202a ) and a second cache (e.g., see cache 202b ) based on different data stored in register 306 that is not a configurable bit. In such an instance, when the register 306 stores first data or is in a first state, the logic circuit may be configured to implement a command received from the command bus for accessing the memory system via the first cache when the execution type is the first type, and to implement a command received from the command bus for accessing the memory system via the second cache when the execution type is the second type. Also, when the register 306 stores second data or is in a second state, the logic circuit may be configured to implement a command received from the command bus for accessing the memory system via the second cache when the execution type is the first type, and to implement a command received from the command bus for accessing the memory system via the first cache when the execution type is the second type. FIGS. 4 , 5A, and 5B show example aspects of example computing devices, each computing device including a cache system that swaps caches between main-type or normal-type execution (e.g., non-speculative execution) and speculative execution, according to some embodiments of the present disclosure. FIG. 4 particularly shows aspects of an example computing device including a cache system 400 having multiple caches (e.g., see caches 202a , 202b , and 202c depicted in FIG. 4 ). In FIG. 4 , the example computing device is also shown with a processor 401 and a memory system 203 . As shown by FIG. 4 , cache system 400 is similar to cache system 200 , but cache system 400 also includes a connection 402 to a speculative state signal line 404 from processor 401 that identifies the state of speculative execution of instructions by processor 401 . Similarly, cache system 400 is shown including connection 204a to command bus 205a coupled between the cache system and processor 401 . System 400 also includes a connection 204b to an address bus 205b coupled between the cache system and processor 401 . The addresses 102a , 102b , 102c , 102d , and 102e depicted in FIGS. 1A , 1B, 1C, 1D, and 1E, respectively, may each be communicated via the address bus 205b depending on the implementation of the cache system 400 . System 400 also includes connection 204c to data bus 205c coupled between the cache system and processor 401 . It also contains a connection 204d to an execution type signal line 205d from the processor 401 identifying a non-speculative execution type or a speculative execution type. Although not shown in FIG. 4 , cache system 400 may also include configurable data bits. The configurable data bits may be included in or be the data 312 in the first state shown in FIG. 5A and may be included in or be the data 314 in the second state shown in FIG. 5B . In some embodiments, cache system 400 may include a first cache (e.g., see cache 202a ) and a second cache (e.g., see cache 202b ). In such an embodiment, as shown in FIG. 4 , cache system 400 may include logic circuitry 406 coupled to processor 401 .
Furthermore, in such an embodiment, the logic circuit 406 may be configured to control the first cache (e.g., see cache 202a ) and the second cache (e.g., see cache 202b ) based on configurable data bits. When the configurable data bits are in the first state (e.g., see data 312 depicted in FIG. 5A ), the logic circuit 406 may be configured to implement a command received from the command bus 205a for accessing the memory system 203 via the first cache when the execution type is a non-speculative type, and to implement a command received from the command bus 205a for accessing the memory system 203 via the second cache when the execution type is the speculative type. When the configurable data bits are in the second state (e.g., see data 314 depicted in FIG. 5B ), the logic circuit 406 may be configured to implement a command received from the command bus 205a for accessing the memory system 203 via the second cache when the execution type is the non-speculative type. Furthermore, when the configurable data bits are in the second state (e.g., see data 314 depicted in FIG. 5B ), logic circuit 406 may be configured to implement a command received from command bus 205a for accessing the memory system 203 via the first cache when the execution type is the speculative type. In some embodiments, such as shown in FIG. 4 , the first type may be configured to indicate non-speculative execution of instructions by the processor. In such an instance, the second type may be configured to indicate speculative execution of instructions by the processor. In such an embodiment, the cache system 400 may further include a connection 402 to a speculative state signal line 404 from the processor 401 that identifies the state of speculative execution of instructions by the processor.
The connection 402 to the speculative state signal line 404 may be configured to receive the state of the speculative execution, and the state of the speculative execution may indicate whether the results of the speculative execution will be accepted or rejected. In addition, when the execution type is changed from the second type or speculative type to the first type or non-speculative type, the logic circuit 406 of the system 400 may be configured to toggle the configurable data bits in a situation where the state of the speculative execution indicates that the results of the speculative execution will be accepted. Additionally, when the execution type is changed from the second or speculative type to the first or non-speculative type, the logic circuit 406 of the system 400 may be configured to maintain the configurable data bits unchanged where the state of the speculative execution indicates that the results of the speculative execution are to be rejected. FIG. 5A particularly shows aspects of an example computing device including a cache system (e.g., cache system 400 ) having multiple caches (e.g., see caches 302 and 304 ). The example computing device is also shown having a register 306 that stores data 312 that may include the configurable bit. In FIG. 5A , it is shown that during a first time instance ("time instance X"), register 306 stores data 312 , which may be the configurable bit in a first state. This is similar to FIG. 3A , except that the content 502a received from the first cache (e.g., cache 302 ) during the first time instance contains content for non-speculative execution. Also, content 504a received from the second cache (e.g., cache 304 ) during the first time instance contains content for speculative execution. FIG. 5B particularly shows aspects of an example computing device including a cache system (e.g., cache system 400 ) having multiple caches (e.g., see caches 302 and 304 ).
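The toggle-on-accept and hold-on-reject behavior of the configurable data bits described above can be modeled with a small sketch. This is an assumption-laden illustration, not the patent's circuit; the function name is invented for the sketch.

```python
# Illustrative model (not the patent's hardware) of how the configurable
# bit changes when execution returns from the speculative type to the
# non-speculative type.

def on_return_to_nonspeculative(config_bit: int, results_accepted: bool) -> int:
    """Toggle the bit when speculative results are accepted, so the cache
    holding the speculated content becomes the non-speculative cache;
    otherwise leave the bit unchanged."""
    return config_bit ^ 1 if results_accepted else config_bit
```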
The example computing device is also shown having a register 306 that stores data 314 that may include the configurable bit. In FIG. 5B , it is shown that during a second time instance ("time instance Y"), register 306 stores data 314 , which may be the configurable bit in a second state. This is similar to FIG. 3B , except that the content 502b received from the first cache (e.g., cache 302 ) during the second time instance contains content for speculative execution. Also, content 504b received from the second cache (e.g., cache 304 ) during the second time instance contains content for non-speculative execution. Also, similarly, in FIGS. 5A and 5B , the illustrated lines 320 connecting register 306 to caches 302 and 304 may be part of logic circuitry 406 of cache system 400 . In some embodiments, instead of using a configurable bit to control the cache usage of the cache system 400 , another form of data may be used to control the cache usage of the cache system 400 . For example, logic circuit 406 in system 400 may be configured to control a first cache (e.g., see cache 202a ) and a second cache (e.g., see cache 202b ) based on data stored in register 306 . In such an instance, when the register 306 stores first data or is in a first state, the logic circuit may be configured to implement a command received from the command bus for accessing the memory system via the first cache when the execution type is a non-speculative type, and to implement a command received from the command bus for accessing the memory system via the second cache when the execution type is the speculative type.
Also, when the register 306 stores second data or is in a second state, the logic circuit may be configured to implement a command received from the command bus for accessing the memory system via the second cache when the execution type is the non-speculative type, and to implement a command received from the command bus for accessing the memory system via the first cache when the execution type is the speculative type. Some embodiments may include a cache system, and the cache system may include multiple caches including a first cache and a second cache. The system may also include a connection to a command bus configured to receive read commands or write commands from a processor connected to the cache system for reading from or writing to the memory system. The system may also include a connection to an address bus configured to receive a memory address from the processor for accessing the memory system for a read command or a write command. The system may also include a connection to a data bus configured to: communicate data to the processor for the processor to read the data for a read command; and receive data from the processor to write in the memory system for a write command. In such an example, memory access requests from the processor, and memory usage by the processor, may be defined by the command bus, the address bus, and the data bus. The system may also include a connection to an execution type signal line configured to receive an identification of an execution type from the processor.
The execution type is a first execution type or a second execution type (e.g., normal or non-speculative execution, or speculative execution). The system may also include a configurable data bit configured to be set to a first state (e.g., "0") or a second state (e.g., "1") to control the selection of the first cache and the second cache for processor use. The system may also include a logic circuit configured to select the first cache for use by the processor when the configurable data bit is in the first state and the execution type signal line receives an indication of execution of the first type. The logic circuit may also be configured to select the second cache for use by the processor when the configurable data bit is in the first state and the execution type signal line receives an indication of execution of the second type. The logic circuit may also be configured to select the second cache for use by the processor when the configurable data bit is in the second state and the execution type signal line receives an indication of execution of the first type. The logic circuit may also be configured to select the first cache for use by the processor when the configurable data bit is in the second state and the execution type signal line receives an indication of execution of the second type. In some embodiments, the first type of execution is speculative execution of instructions by the processor, and the second type of execution is non-speculative execution of instructions by the processor (e.g., normal or main execution). In such an example, the system may further include a connection to a speculative state signal line configured to receive a speculative state from the processor.
The speculative state may be an acceptance or rejection of a condition associated with instructions that are initially executed by speculative execution of the processor and then executed by ordinary execution of the processor when the speculative state is an acceptance of the condition. In some embodiments, the logic circuit is configured to swap the configurable data bit from the first state to the second state when the speculative state received by the speculative state signal line is an acceptance of the condition. The logic circuit may also be configured to maintain the state of the configurable data bit when the speculative state received by the speculative state signal line is a rejection of the condition. In some embodiments, the logic circuit is configured to select the second cache, as identified by the first state of the configurable data bit, when a signal received by the execution type signal line changes from an indication of normal execution to an indication of speculative execution, and to limit the first cache to the use identified by the first state of the configurable data bit.
With this change, the speculative state can be ignored/bypassed by the logic circuit, since the processor in speculative execution does not know whether instructions performed under speculative execution should be executed by main execution. The logic circuit may also be configured to maintain the first state of the configurable data bit and select the first cache for the memory access request when the execution type signal line receives an indication of normal execution, when a signal received by the execution type signal line changes from an indication of speculative execution to an indication of normal execution, and when the speculative state received by the speculative state signal line is a rejection of the condition. In some embodiments, the logic circuit is configured to invalidate and discard the content of the second cache when a signal received by the execution type signal line changes from an indication of speculative execution to an indication of normal execution, and when the speculative state received by the speculative state signal line is a rejection of the condition. In some embodiments, the system further includes a connection to a second command bus, the connection configured to communicate read commands or write commands to a memory system (e.g., including main memory). A read command or a write command may be received from the processor by the cache system. The system may also include a connection to a second address bus, the connection configured to communicate the memory address to the memory system. The memory address may be received from the processor by the cache system. The system may also include a connection to a second data bus configured to: communicate data to the memory system for writing in the memory system; and receive data from the memory system for communication to the processor for reading by the processor.
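The speculation lifecycle described in the preceding paragraphs, toggling the bit when speculative results are accepted and holding the bit while invalidating the speculative cache when they are rejected, can be sketched as a small state model. The class and method names are assumptions for this illustration, not elements of the disclosure.

```python
# Illustrative model of the transition logic: on leaving speculative
# execution, accepted results promote the speculative cache by toggling
# the bit; rejected results keep the bit and invalidate the speculative
# cache's content.

class CacheSystemModel:
    def __init__(self):
        self.config_bit = 0                  # first state
        self.caches = {0: set(), 1: set()}   # contents of the two caches

    def speculative_cache(self) -> int:
        # With bit == 0 the second cache (index 1) holds speculative
        # content; with bit == 1 the roles are swapped.
        return 1 if self.config_bit == 0 else 0

    def end_speculation(self, accepted: bool) -> None:
        if accepted:
            self.config_bit ^= 1  # swap roles; speculated content is kept
        else:
            # invalidate and discard the speculative cache's content
            self.caches[self.speculative_cache()].clear()
```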
For example, a memory access request from the cache system to the memory system may be defined by the second command bus, the second address bus, and the second data bus. In some embodiments, when the configurable data bit is in the first state, the logic circuit is configured to provide a command for accessing the memory system via the first cache to the second command bus when the execution type is the first type, and to provide a command for accessing the memory system via the second cache to the second command bus when the execution type is the second type. And, when the configurable data bit is in the second state, the logic circuit may be configured to: provide a command for accessing the memory system via the second cache to the second command bus when the execution type is the first type; and provide a command for accessing the memory system via the first cache to the second command bus when the execution type is the second type. Some embodiments may include a system including a processor, a memory system, and a cache system coupled between the processor and the memory system. The cache system of the system may include a plurality of caches including a first cache and a second cache. The cache system of the system may also include a connection to a command bus coupled between the cache system and the processor, a connection to an address bus coupled between the cache system and the processor, a connection to a data bus coupled between the cache system and the processor, and a connection to an execution type signal line from the processor that identifies the execution type. The cache system of the system may also include configurable data bits, and a logic circuit coupled to the processor to control the first cache and the second cache based on the configurable data bits.
When the configurable data bits are in the first state, the logic circuit may be configured to: implement a command received from the command bus for accessing the memory system via the first cache when the execution type is the first type; and implement a command received from the command bus for accessing the memory system via the second cache when the execution type is the second type. And, when the configurable data bits are in the second state, the logic circuit may be configured to: implement a command received from the command bus for accessing the memory system via the second cache when the execution type is the first type; and implement a command received from the command bus for accessing the memory system via the first cache when the execution type is the second type. In such a system, the first type may be configured to indicate non-speculative execution of instructions by the processor, and the second type may be configured to indicate speculative execution of instructions by the processor. Additionally, the system's cache system may further include a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor. The connection to the speculative state signal line may be configured to receive the state of the speculative execution, and the state of the speculative execution may indicate whether the results of the speculative execution will be accepted or rejected. When the execution type is changed from the second type (speculative type) to the first type (non-speculative type), the logic circuit may be configured to toggle the configurable data bits if the state of the speculative execution indicates that the results of the speculative execution will be accepted.
Also, when the execution type is changed from the second type (speculative type) to the first type (non-speculative type), the logic circuit may be configured to maintain the configurable data bits unchanged if the state of the speculative execution indicates that the results of the speculative execution will be rejected. FIGS. 6 , 7A, 7B, 8A, 8B, 9A, and 9B show example aspects of example computing devices, each computing device including a cache system with enhanced security (e.g., a cache system implementing shadow caching techniques and/or interchangeable sets of caches for main-type and speculative-type execution). FIG. 6 particularly shows aspects of an example computing device including a cache system 600 having multiple caches (e.g., see caches 602a , 602b , and 602c ), where at least one of the caches is implemented with cache set associativity. The example computing device is also shown with a processor 601 and a memory system 603 . Cache system 600 is configured to be coupled between processor 601 and memory system 603 . The cache system 600 is shown as including a connection 604a to a command bus 605a coupled between the cache system and the processor 601 . The cache system 600 is shown as including a connection 604b to an address bus 605b coupled between the cache system and the processor 601 . The addresses 102a , 102b , 102c , 102d , and 102e depicted in FIGS. 1A , 1B, 1C, 1D, and 1E, respectively, may each be communicated via an address bus 605b depending on the implementation of the cache system 600 . The cache system 600 is shown as including a connection 604c to a data bus 605c coupled between the cache system and the processor 601 . The cache system 600 is also shown as including a connection 604d to an execution type signal line 605d from the processor 601 that identifies the execution type.
Connections 604a , 604b , 604c , and 604d may provide communicative coupling between buses 605a , 605b , 605c , and 605d and logic circuitry 606 of cache system 600 . Furthermore, as shown in FIG. 6 , the cache system 600 further includes a connection 608a to a second command bus 609a coupled between the cache system and the memory system 603 . The cache system 600 also includes a connection 608b to a second address bus 609b coupled between the cache system and the memory system 603 . The cache system 600 also includes a connection 608c to a second data bus 609c coupled between the cache system and the memory system 603 . Cache system 600 also includes a plurality of cache sets (e.g., see cache sets 610a , 610b , and 610c ). The cache sets may include a first cache set (e.g., see cache set 610a ) and a second cache set (e.g., see cache set 610b ). Furthermore, as shown in FIG. 6 , cache system 600 further includes a plurality of registers (e.g., see registers 612a , 612b , and 612c ) respectively associated with the plurality of cache sets. The registers (or cache set registers) may include a first register (e.g., see register 612a ) associated with the first cache set (e.g., see cache set 610a ) and a second register (e.g., see register 612b ) associated with the second cache set (e.g., see cache set 610b ). Each of the plurality of registers (e.g., see registers 612a , 612b , and 612c ) may be configured to store a set index. As shown in FIGS. 6 and 10 , cache 602a and caches 602b through 602c (caches 1 through N) are not fixed structures. It should be understood, however, that in some embodiments, the caches may be fixed structures. Each of the depicted caches may be viewed as a logical grouping of cache sets, and such logical groupings are shown by dashed lines representing each logical cache. Cache sets 610a through 610c (cache sets 1 through N) may be grouped based on the contents of registers 612a through 612c (registers 1 through N).
Cache sets 1 through N may be a set of cache sets within the cache system shared among cache 1 and cache 2 through cache N. Cache 1 may be a subset of the set; cache 2 may be another, non-overlapping subset. The member cache sets in each of the caches may change based on the contents of registers 1 through N. Depending on the embodiment, cache set 1 (in the conventional sense) may or may not communicate with its register 1 . Dashed lines are also shown in FIGS. 7A , 7B, 8A, 8B, 9A, and 9B to indicate the logical relationship between the cache sets and the corresponding registers in those figures. The contents of register 1 determine how cache set 1 is addressed (e.g., what cache set index will cause cache set 1 to be selected for outputting data). In some embodiments, there is no direct interaction between cache set 1 and its corresponding register 1 . Depending on the embodiment, the logic circuit 606 or 1006 interacts with both the cache sets and the corresponding registers. In some embodiments, logic circuit 606 may be coupled to processor 601 to control the multiple cache sets (e.g., cache sets 610a , 610b , and 610c ) according to the multiple registers (e.g., registers 612a , 612b , and 612c ). In such an embodiment, cache system 600 may be configured to be coupled between processor 601 and memory system 603 . Also, when connection 604b to address bus 605b receives a memory address from processor 601 , logic circuit 606 may be configured to generate a set index from at least the memory address and determine whether the generated set index matches the contents stored in a first register (e.g., register 612a ) or matches the contents stored in a second register (e.g., register 612b ). The logic circuit 606 may also be configured to implement, via the first cache set (e.g., cache set 610a ), the command received in connection 604a to command bus 605a in response to the generated set index matching the contents stored in the first register (e.g., register 612a ), and to implement the command via the second cache set (e.g., cache set 610b ) in response to the generated set index matching the contents stored in the second register (e.g., register 612b ). In some embodiments, cache system 600 may include a first cache (e.g., see cache 602a ) and a second cache (e.g., see cache 602b ). In such an embodiment, as shown in FIG. 6 , cache system 600 may include logic circuitry 606 coupled to processor 601 . Furthermore, in such an embodiment, the logic circuit 606 may be configured to control the first cache (e.g., see cache 602a ) and the second cache (e.g., see cache 602b ). In some embodiments, in response to determining that a data set of the memory system 603 associated with the memory address is not currently cached in the cache system 600 (e.g., not cached in the system's cache 602a ), the logic circuit 606 is configured to allocate a first cache set (e.g., cache set 610a ) for caching the data set and store the generated set index in a first register (e.g., register 612a ). In this and other embodiments, the cache system may include a connection to an execution type signal line from a processor that identifies the execution type (e.g., connection 604d to execution type signal line 605d from processor 601 ). Also, in this and other embodiments, the generated set index is generated further based on the type identified by the execution type signal line. Additionally, the generated set index may include a predetermined segment of bits in the memory address and a bit representing the type identified by the execution type signal line 605d . In addition, when the first and second registers (e.g., registers 612a and 612b ) are in the first state, the logic circuit 606 may be configured to implement a command received from the command bus 605a for accessing the memory system 603 via the first cache set (e.g., cache set 610a ) when the execution type is the first type.
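The register-matched lookup just described, generating a set index from a memory address plus an execution-type bit and comparing it against the cache set registers, can be sketched as follows. The bit widths and the particular address segment are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch of set index generation and register matching.
# The index combines a predetermined segment of address bits with a bit
# for the execution type; a matching register selects its cache set.

def generate_set_index(address: int, type_bit: int, index_bits: int = 3) -> int:
    """Combine a segment of address bits with the execution-type bit.

    Assumes (for illustration) a 6-bit block offset below the segment.
    """
    segment = (address >> 6) & ((1 << index_bits) - 1)
    return (type_bit << index_bits) | segment

def select_cache_set(registers: list[int], set_index: int):
    """Return the position of the cache set whose register holds the
    generated index, or None on a miss (prompting allocation of a set
    and storage of the index in that set's register)."""
    for position, stored in enumerate(registers):
        if stored == set_index:
            return position
    return None
```

A miss (None) corresponds to the allocation path described above: a cache set is allocated for the data set and the generated index is written into the associated register.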
In addition, when the first and second registers (e.g., registers 612a and 612b ) are in the first state, the logic circuit 606 may be configured to implement a command received from the command bus 605a for accessing the memory system 603 via the second cache set (e.g., cache set 610b ) when the execution type is the second type. In addition, when the first and second registers (e.g., registers 612a and 612b ) are in the second state, the logic circuit 606 may be configured to implement a command received from the command bus 605a for accessing the memory system 603 via another one of the cache sets other than the first cache set (e.g., cache set 610b or 610c ) when the execution type is the first type. In addition, when the first and second registers (e.g., registers 612a and 612b ) are in the second state, the logic circuit 606 may be configured to implement a command received from the command bus 605a for accessing the memory system 603 via another one of the cache sets other than the second cache set (e.g., cache set 610a or 610c , or another cache set not depicted in FIG. 6 ) when the execution type is the second type. In some embodiments, each of the plurality of registers (e.g., see registers 612a , 612b , and 612c ) may be configured to store a set index, and when the execution type changes from the second type to the first type (e.g., from a speculative type of execution to a non-speculative type of execution), the logic circuit 606 may be configured to change the contents stored in the first register (e.g., register 612a ) and the contents stored in the second register (e.g., register 612b ). Examples of changes in the contents stored in the first register (e.g., register 612a ) and the contents stored in the second register (e.g., register 612b ) are illustrated in FIGS. 7A and 7B , FIGS. 8A and 8B , and FIGS. 9A and 9B . Each of FIGS. 7A , 7B, 8A, 8B, 9A, and 9B specifically shows aspects of an example computing device that includes a cache system with multiple cache sets (e.g., see cache sets 702 , 704 , and 706 ), where the cache sets are implemented via cache set associativity. The respective cache system for each of these figures is also shown as having a plurality of registers respectively associated with the cache sets. The plurality of registers includes at least register 712 , register 714 , and register 716 . The plurality of registers includes at least one additional register not shown in the figures. Register 712 is shown associated with or connected to cache set 702 , register 714 is shown associated with or connected to cache set 704 , and register 716 is shown associated with or connected to cache set 706 . Although not shown in FIGS. 7A , 7B, 8A, 8B, 9A, and 9B, each of the respective cache systems may also include a connection to a command bus coupled between the cache system and the processor, a connection to an address bus coupled between the cache system and the processor, and a connection to a data bus coupled between the cache system and the processor. Each of the cache systems may also include a logic circuit coupled to the processor to control the multiple cache sets (e.g., cache sets 702 , 704 , and 706 ) according to the multiple registers (e.g., registers 712 , 714 , and 716 ). As illustrated by FIGS. 7A , 7B, 8A, 8B, 9A, and 9B, when a connection to the cache system's address bus receives a memory address (e.g., see memory address 102b , 102c , or 102d ) from the processor, the logic circuit of the cache system may be configured to generate a set index (e.g., see set index 722 , 724 , 726 , or 728 ) from the memory address (e.g., see set index generation 730 , 732 , 830 , 832 , 930 , or 932 ). In particular, as shown in FIG. 7A , at least registers 712 , 714 , and 716 are configured to be in a first state.
When the connection to the address bus of the cache system receives memory address 102b from the processor, the logic of the cache system generates set index 722, 724, or 726 via set index generation 730a, 730b, or 730c, respectively, based on at least an instance of cache set index 112b of address 102b. Set index generation 730a, 730b, or 730c may be used to store set index 722, 724, or 726 in register 712, 714, or 716, respectively. Set index generation 730a, 730b, or 730c may also be used to compare the most recently generated set index with what has been stored in register 712, 714, or 716, respectively. Set index generation 730a, 730b, and 730c occurs when the registers are configured to be in the first state. The first state may result from the generation of the set indices and their storage in the registers. In particular, as shown in FIG. 7B, at least registers 712, 714, and 716 are configured to be in the second state. When the connection to the address bus of the cache system receives memory address 102b from the processor, the logic of the cache system generates set index 726, 722, or 728 via set index generation 732a, 732b, or 732c, respectively, based on at least an instance of cache set index 112b of address 102b. Set index generation 732a, 732b, or 732c may be used to store set index 726, 722, or 728 in register 712, 714, or 716, respectively. Set index generation 732a, 732b, or 732c may also be used to compare the most recently generated set index with what has been stored in register 712, 714, or 716, respectively. Set index generation 732a, 732b, and 732c occurs when the registers are configured to be in the second state.
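The generate-store-compare cycle described for FIGS. 7A and 7B can be illustrated with a small Python sketch. This is a hypothetical model, not the disclosed hardware: the bit-field widths (`SET_INDEX_BITS`, `offset_bits`) are assumptions for illustration only.

```python
# Hypothetical sketch of set-index generation: a set index is derived from a
# segment of the memory address, stored in a per-set register, and later
# compared against the register's stored content.
SET_INDEX_BITS = 3  # assumed width of the set-index field

def generate_set_index(address, offset_bits=6):
    """Extract a set index from a predetermined segment of address bits."""
    return (address >> offset_bits) & ((1 << SET_INDEX_BITS) - 1)

class SetRegister:
    """Models one of the registers (e.g., 712, 714, 716) holding a set index."""
    def __init__(self):
        self.value = None  # empty until a set index is first generated/stored

    def store(self, index):
        self.value = index

    def matches(self, index):
        return self.value == index
```

For example, generating an index for an address, storing it, and then generating the index again for the same address yields a match against the register's content.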
The second state may result from the generation of the set indices and their storage in the registers. In particular, as shown in FIG. 8A, at least registers 712, 714, and 716 are configured to be in a first state. When the connection to the address bus of the cache system receives memory address 102c from the processor, the logic of the cache system generates set index 722, 724, or 726 via set index generation 830a, 830b, or 830c, respectively, based on at least an instance of the cache set indicator in tag 104c of address 102c. Set index generation 830a, 830b, or 830c may be used to store set index 722, 724, or 726 in register 712, 714, or 716, respectively. Set index generation 830a, 830b, or 830c may also be used to compare the most recently generated set index with what has been stored in register 712, 714, or 716, respectively. Set index generation 830a, 830b, and 830c occurs when the registers are configured to be in the first state. In particular, as shown in FIG. 8B, at least registers 712, 714, and 716 are configured to be in the second state. When the connection to the address bus of the cache system receives memory address 102c from the processor, the logic of the cache system generates set index 726, 722, or 728 via set index generation 832a, 832b, or 832c, respectively, based on at least an instance of the cache set indicator in tag 104c of address 102c. Set index generation 832a, 832b, or 832c may be used to store set index 726, 722, or 728 in register 712, 714, or 716, respectively. Set index generation 832a, 832b, or 832c may also be used to compare the most recently generated set index with what has been stored in register 712, 714, or 716, respectively.
Set index generation 832a, 832b, and 832c occurs when the registers are configured to be in the second state. In particular, as shown in FIG. 9A, at least registers 712, 714, and 716 are configured to be in a first state. When the connection to the address bus of the cache system receives memory address 102d from the processor, the logic of the cache system generates set index 722, 724, or 726 via set index generation 930a, 930b, or 930c, respectively, based on at least an instance of cache set index 112d in tag 104d of address 102d. Set index generation 930a, 930b, or 930c may be used to store set index 722, 724, or 726 in register 712, 714, or 716, respectively. Set index generation 930a, 930b, or 930c may also be used to compare the most recently generated set index with what has been stored in register 712, 714, or 716, respectively. Set index generation 930a, 930b, and 930c occurs when the registers are configured to be in the first state. In particular, as shown in FIG. 9B, at least registers 712, 714, and 716 are configured to be in the second state. When the connection to the address bus of the cache system receives memory address 102d from the processor, the logic of the cache system generates set index 726, 722, or 728 via set index generation 932a, 932b, or 932c, respectively, based on at least an instance of cache set index 112d in tag 104d of address 102d. Set index generation 932a, 932b, or 932c may be used to store set index 726, 722, or 728 in register 712, 714, or 716, respectively. Set index generation 932a, 932b, or 932c may also be used to compare the most recently generated set index with what has been stored in register 712, 714, or 716, respectively.
Set index generation 932a, 932b, and 932c occurs when the registers are configured to be in the second state. In some embodiments implemented with the cache systems illustrated in FIGS. 7A and 7B, 8A and 8B, or 9A and 9B, when a memory address is received from the processor via the connection to the address bus, the logic may be configured to determine whether the generated set index matches the content stored in one of the registers (e.g., registers 712, 714, and 716). The content stored in a register may come from a previous generation of a set index and the storage of that set index in the register. Furthermore, in some embodiments implemented by the cache systems illustrated in FIGS. 7A and 7B, 8A and 8B, or 9A and 9B, the logic circuit may be configured to implement a command received at the connection to the command bus via the first cache set in response to the generated set index matching the content stored in the associated first register, and to implement the command via the second cache set in response to the generated set index matching the content stored in the associated second register. Furthermore, in response to determining that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit may be configured to allocate the first cache set for caching the data set and to store the generated set index in the first register.
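The match-or-allocate behavior just described can be sketched as follows. This Python model is an illustration under stated assumptions (the registers are modeled as a list, `None` marks an empty register, and the eviction case is deliberately not modeled); the function names are hypothetical.

```python
# Illustrative sketch: comparing a generated set index against the contents
# of the per-set registers, and allocating a cache set (storing the index in
# its register) when the associated data set is not yet cached.
def find_cache_set(registers, generated_index):
    """Return the cache-set number whose register matches, or None on a miss."""
    for set_number, stored in enumerate(registers):
        if stored == generated_index:
            return set_number
    return None

def allocate_cache_set(registers, generated_index):
    """On a miss, allocate the first free cache set and store the index."""
    for set_number, stored in enumerate(registers):
        if stored is None:
            registers[set_number] = generated_index
            return set_number
    raise RuntimeError("no free cache set; eviction policy not modeled here")
```

After an allocation, a repeated lookup with the same generated set index finds the newly allocated set, mirroring how a subsequent access to the same address is serviced by the allocated cache set.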
The generated set index may include a predetermined segment of bits in the memory address. Furthermore, in such an embodiment, when the first and second registers are in the first state, the logic circuit may be configured to: when the execution type of the processor is the first type, implement a command received from the command bus for accessing the memory system via the first cache set; and when the execution type is the second type, implement a command received from the command bus for accessing the memory system via the second cache set. In addition, when the first and second registers are in the second state, the logic circuit may be configured to: when the execution type is the first type, implement a command received from the command bus for accessing the memory system via another cache set of the plurality of cache sets other than the first cache set; and when the execution type is the second type, implement a command received from the command bus for accessing the memory system via another cache set of the plurality of cache sets other than the second cache set. In such an instance, each of the plurality of registers may be configured to store a set index, and when the execution type is changed from the second type to the first type, the logic circuit may be configured to change the content stored in the first register and the content stored in the second register. FIG. 10 particularly shows aspects of an example computing device including a cache system 1000 having multiple caches (e.g., see caches 602a, 602b, and 602c depicted in FIG. 10), wherein at least one of the caches is implemented with cache set associativity (e.g., see cache sets 610a, 610b, and 610c). In FIG. 10, the example computing device is also shown with a processor 1001 and a memory system 603. As shown by FIG.
10, cache system 1000 is similar to cache system 600, but cache system 1000 also includes a connection 1002 to a speculative status signal line 1004 from processor 1001 that identifies the status of speculative execution of instructions by processor 1001. Similarly, cache system 1000 is shown including a connection 604a to a command bus 605a coupled between the cache system and processor 1001. System 1000 also includes a connection 604b to an address bus 605b coupled between the cache system and processor 1001. The addresses 102a, 102b, 102c, 102d, and 102e depicted in FIGS. 1A, 1B, 1C, 1D, and 1E, respectively, may each be communicated via the address bus 605b depending on the implementation of the cache system 1000. System 1000 also includes a connection 604c to a data bus 605c coupled between the cache system and processor 1001. It also includes a connection 604d to an execution type signal line 605d from the processor 1001 identifying a non-speculative execution type or a speculative execution type. Similarly, cache system 1000 is also shown as including logic circuit 1006, which may be similar to logic circuit 606, but whose circuitry is coupled to the connection 1002 to the speculative state signal line 1004. In some embodiments, logic circuit 1006 may be coupled to processor 1001 to control multiple cache sets (e.g., cache sets 610a, 610b, and 610c) according to multiple registers (e.g., registers 612a, 612b, and 612c). Each of the plurality of registers (e.g., see registers 612a, 612b, and 612c) may be configured to store a set index. In such an embodiment, cache system 1000 may be configured to be coupled between processor 1001 and memory system 603.
Also, when connection 604b to address bus 605b receives a memory address from processor 1001, logic circuit 1006 may be configured to generate a set index from at least the memory address and determine whether the generated set index matches the content stored in a first register (e.g., register 612a) or the content stored in a second register (e.g., register 612b). The logic circuit 1006 may also be configured to implement, via the first cache set (e.g., cache set 610a), a command received at connection 604a to command bus 605a in response to the generated set index matching the content stored in the first register (e.g., register 612a), and to implement the command via the second cache set (e.g., cache set 610b) in response to the generated set index matching the content stored in the second register (e.g., register 612b). Furthermore, cache system 1000 is shown to include connections 608a, 608b, and 608c, which are similar to the corresponding connections shown in FIG. 6. With respect to the connections 608a, 608b, and 608c depicted in FIGS. 6 and 10, when the first and second registers (e.g., registers 612a and 612b) are in the first state, the logic circuit 606 or 1006 may be configured to, when the execution type is a first type (e.g., a non-speculative type), provide commands for accessing memory system 603 via a first cache set (e.g., cache set 610a) to a second command bus 609a.
Additionally, when the first and second registers (e.g., registers 612a and 612b) are in the first state, the logic circuit 606 or 1006 may be configured to, when the execution type is the second type (e.g., the speculative type), provide commands for accessing the memory system via the second cache set (e.g., cache set 610b) to the second command bus 609a. Additionally, when the first and second registers (e.g., registers 612a and 612b) are in the second state, the logic circuit 606 or 1006 may be configured to, when the execution type is the first type, provide to the second command bus 609a commands for accessing memory system 603 via a cache set other than the first cache set (e.g., cache set 610b or 610c, or another cache set not depicted in FIG. 6 or 10). Furthermore, when the first and second registers (e.g., registers 612a and 612b) are in the second state, the logic circuit 606 or 1006 may be configured to, when the execution type is the second type, provide to the second command bus 609a commands for accessing memory system 603 via a cache set other than the second cache set (e.g., cache set 610a or 610c, or another cache set not depicted in FIG. 6 or 10). In some embodiments, such as shown in FIG. 10, the first type may be configured to indicate non-speculative execution of instructions by processor 1001, and the second type may be configured to indicate speculative execution of instructions by the processor. As shown in FIG. 10, cache system 1000 further includes a connection 1002 to a speculative state signal line 1004 from processor 1001 that identifies the state of speculative execution of instructions by the processor.
The connection 1002 to the speculative state signal line 1004 may be configured to receive the state of the speculative execution, and the state of the speculative execution may indicate whether the results of the speculative execution will be accepted or rejected. In such an embodiment, each of the plurality of registers (e.g., registers 612a, 612b, and 612c) may be configured to store a set index, and when the execution type is changed from the speculative type to the non-speculative type, the logic circuit 1006 may be configured to change the content stored in the first register (e.g., register 612a) and the content stored in the second register (e.g., register 612b) if the state of the speculative execution indicates that the results of the speculative execution will be accepted. Also, when the execution type is changed from the speculative type to the non-speculative type, the logic circuit 1006 may be configured to keep the content stored in the first register and the content stored in the second register unchanged if the state of the speculative execution indicates that the results of the speculative execution will be rejected. Some embodiments may include a cache system including a plurality of cache sets having at least a first cache set and a second cache set. The cache system may also include multiple registers respectively associated with the multiple cache sets. The plurality of registers may include at least a first register associated with the first cache set and configured to store a set index, and a second register associated with the second cache set and configured to store a set index. The cache system may also include a connection to a command bus coupled between the cache system and a processor, a connection to an address bus coupled between the cache system and the processor, and a connection to a data bus coupled between the cache system and the processor.
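The accept-or-keep behavior at the speculative-to-non-speculative transition can be sketched compactly. In this hypothetical Python model the "change" of the two registers' contents is represented as a swap, which is one possible change consistent with the description; the function name is an assumption.

```python
# Sketch of the accept/reject behavior described above: when execution changes
# from speculative back to non-speculative, the contents of the first and
# second registers change only if the speculation results are accepted.
def on_speculation_end(reg_first, reg_second, results_accepted):
    """Return the (first, second) register contents after speculation ends."""
    if results_accepted:
        # Accepting the results changes both registers' contents,
        # modeled here as swapping the roles of the two set indices.
        return reg_second, reg_first
    # Rejected speculation leaves both registers unchanged.
    return reg_first, reg_second
```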
The cache system may also include a connection to an execution type signal line from the processor identifying an execution type. The cache system may also include a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. Also, the cache system may be configured to be coupled between the processor and a memory system. When the first and second registers are in the first state, the logic circuit may be configured to: when the execution type is the first type, implement a command received from the command bus for accessing the memory system via the first cache set; and when the execution type is the second type, implement a command received from the command bus for accessing the memory system via the second cache set. In addition, when the first and second registers are in the second state, the logic circuit may be configured to: when the execution type is the first type, implement a command received from the command bus for accessing the memory system via a cache set of the plurality of cache sets other than the first cache set; and when the execution type is the second type, implement a command received from the command bus for accessing the memory system via a cache set of the plurality of cache sets other than the second cache set. The connection to the address bus can be configured to receive a memory address from the processor, and the memory address can include a set index. In some embodiments, when the first and second registers are in the first state, a first set index associated with the first cache set is stored in the first register, and a second set index associated with the second cache set is stored in the second register.
When the first and second registers are in the second state, the first set index may be stored in another register of the plurality of registers other than the first register, and the second set index may be stored in another register of the plurality of registers other than the second register. In such an instance, when the connection to the address bus receives a memory address from the processor, the logic circuit may be configured to: generate a set index from at least the memory address; and determine whether the generated set index matches the content stored in the first register or the content stored in the second register. And, the logic circuit may be further configured to implement a command received at the connection to the command bus via the first cache set in response to the generated set index matching the content stored in the first register, and to implement the command via the second cache set in response to the generated set index matching the content stored in the second register. In response to determining that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit may be configured to allocate the first cache set for caching the data set and store the generated set index in the first register. In some embodiments, the generated set index is generated further based on the execution type identified by the execution type signal line. In such an example, the generated set index may include a predetermined segment of bits in the memory address and bits representing the execution type identified by the execution type signal line. Some embodiments may include a system including a processor, a memory system, and a cache system.
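A set index that combines address bits with an execution-type bit, as described above, can be sketched as follows. The field positions and widths in this Python illustration are assumptions, not values from the disclosure.

```python
# Hypothetical sketch: a generated set index combining a predetermined segment
# of memory-address bits with a bit representing the execution type.
def generate_typed_set_index(address, speculative, offset_bits=6, index_bits=3):
    """Concatenate an execution-type bit with address-derived index bits."""
    base = (address >> offset_bits) & ((1 << index_bits) - 1)
    type_bit = 1 if speculative else 0
    # The type bit is placed above the address-derived bits, so the same
    # address maps to different set indices for the two execution types.
    return (type_bit << index_bits) | base
```

One consequence of this construction is that speculative and non-speculative accesses to the same memory address are steered to different cache sets.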
The cache system may include: a plurality of cache sets including a first cache set and a second cache set; and a plurality of registers respectively associated with the plurality of cache sets, the plurality of registers including a first register associated with the first cache set and a second register associated with the second cache set. The cache system may also include a connection to a command bus coupled between the cache system and the processor, a connection to an address bus coupled between the cache system and the processor, and a connection to a data bus coupled between the cache system and the processor. The cache system may also include a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. When the connection to the address bus receives a memory address from the processor, the logic circuit may be configured to: generate a set index from at least the memory address; and determine whether the generated set index matches the content stored in the first register or the content stored in the second register. Also, the logic circuit may be configured to implement a command received at the connection to the command bus via the first cache set in response to the generated set index matching the content stored in the first register, and to implement the command via the second cache set in response to the generated set index matching the content stored in the second register. The cache system may further include a connection to an execution type signal line from the processor identifying the execution type. The generated set index may be further generated based on the type identified by the execution type signal line.
The generated set index may contain a predetermined segment of bits in the memory address and bits representing the type identified by the execution type signal line. FIGS. 11A and 11B illustrate background synchronization circuitry for synchronizing content between a main cache and a shadow cache, to save content cached in the main cache in preparation for accepting content in the shadow cache, according to some embodiments of the present disclosure. The cache system in FIGS. 11A and 11B includes background synchronization circuitry 1102. For example, cache 1124 and cache 1126 may be caches 202a and 202b in FIG. 2 or 4, or caches 602a and 602b in FIG. 6 or 10. Background synchronization circuitry 1102 may be part of logic circuit 206, 406, 606, or 1006. FIG. 11A illustrates a scenario where cache 1124 is used as the main cache in non-speculative execution and cache 1126 is used as a shadow cache in speculative execution. Background synchronization circuitry 1102 is configured to synchronize 1130 the contents of the cache from cache 1124 to cache 1126 so that, if the conditional speculative execution is determined to be required, cache 1126 can be used as the main cache for subsequent non-speculative execution, and cache 1124 may be used as a shadow cache in additional instances of speculative execution. Synchronizing 1130 the contents of the cache from cache 1124 to cache 1126 copies previous execution results into cache 1126 so that execution results are not lost when cache 1124 is subsequently repurposed as a shadow cache. Cached content from cache 1124 may be cached in cache 1124 but not yet flushed to memory (e.g., memory 203 or 603). In addition, some of the memory content having the same copy cached in cache 1124 may also be copied from cache 1124 to cache 1126, so that when cache 1126 is subsequently used as the primary cache, content previously cached in cache 1124 is also available in cache 1126. This can speed up access to previously cached content.
Copying content between cache 1124 and cache 1126 is faster than retrieving data from memory into the cache system. In some embodiments, if a program references variables during normal execution, the variables may be cached. In such an instance, if a variable is referenced with a write-through cache during speculation, the value in main memory is valid and correct. If a variable is referenced with a write-back cache during speculation, the example features described for FIG. 11A may be used; and the valid value of the variable may be in cache 1124. In the scenario illustrated in FIG. 11A, a processor (e.g., processor 201, 401, 601, or 1001) may execute a first instruction set in a non-speculative execution mode. During execution of the first instruction set, the processor may access memory addresses to load data (e.g., instructions and operands) from memory and to store results of computations. Since cache 1124 is used as the main cache, the loaded data and/or computation results may be cached in cache 1124. For example, cache 1124 may store computation results that have not been written back to memory, and cache 1124 may store loaded data (e.g., instructions and operands) that may be used in subsequent executions of instructions. In preparing cache 1126 for use as a shadow cache in speculative execution of a second instruction set, background synchronization circuitry 1102 copies the cached contents from cache 1124 to cache 1126 in synchronization 1130. At least part of the copy operation may be performed in the background, independently of the processor accessing memory via the cache system. For example, when the processor is accessing a first memory address in non-speculative execution of the first instruction set, the background synchronization circuitry 1102 may copy the content cached in cache 1124 for a second memory address into cache 1126.
In some cases, the copy operation may occur in the background concurrently with accessing memory via the cache system. For example, when the processor is accessing a first memory address in non-speculative execution of the first instruction set to store a result of a computation, the background synchronization circuitry may copy the result of the computation into cache 1126 as cached content for the first memory address. In one embodiment, the background synchronization circuitry 1102 is configured to complete the synchronization operation before allowing cache 1126 to be used in speculative execution of the second instruction set. Thus, when cache 1126 is enabled for speculative execution of the second instruction set, the content in cache 1124 can also be found in cache 1126. However, the synchronization operation may delay the use of cache 1126 as a shadow cache. To reduce such delay, background synchronization circuitry 1102 may be configured to prioritize synchronization of dirty content from cache 1124 to cache 1126. Content is dirty when the data in the cache has been modified but the corresponding data in main memory has not. Dirty content cached in cache 1124 is newer than the content stored in memory at the corresponding one or more addresses. For example, when the processor stores a computation result at an address, cache 1124 may cache the computation result for the address without immediately writing the computation result to memory at the address. The cached content is no longer considered dirty once the computation result is written back to memory at the address. Cache 1124 stores data to keep track of the dirty content cached in cache 1124.
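The dirty-first synchronization just described can be illustrated with a short Python sketch. The dict-based cache representation and the `(value, dirty)` tuples are modeling assumptions, not the disclosed data structures.

```python
# Illustrative sketch of background synchronization: copy cached entries from
# the main cache to the shadow-candidate cache, dirty entries first.
def background_sync(main_cache, other_cache):
    """Copy entries from main_cache to other_cache, dirty entries first.

    Each cache maps an address to a (value, dirty) tuple. Returns the
    addresses in the order they were copied.
    """
    # Dirty entries sort first because `not dirty` is False (0) for them.
    order = sorted(main_cache, key=lambda addr: not main_cache[addr][1])
    for addr in order:
        other_cache[addr] = main_cache[addr]
    return order
```

In a real circuit this copying would proceed in the background, interleaved with the processor's accesses; the sketch only shows the prioritization.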
Background synchronization circuitry 1102 may automatically copy dirty content from cache 1124 to cache 1126 in preparation for cache 1126 acting as a shadow cache. Optionally, the background synchronization circuitry 1102 may allow cache 1126 to act as a shadow cache in conditional speculative execution of the second instruction set before the synchronization operation is completed. During the time period in which cache 1126 is used as a shadow cache in speculative execution, the background synchronization circuitry 1102 may continue the synchronization operation 1130 of copying the contents of the cache from cache 1124 to cache 1126. Background synchronization circuitry 1102 is configured to at least complete synchronization of dirty content from cache 1124 to cache 1126 before allowing cache 1126 to be accepted as the primary cache. For example, following an indication that execution of the second instruction set is required, background synchronization circuitry 1102 determines whether the dirty content in cache 1124 has been synchronized to cache 1126; if not, it defers use of cache 1126 as the primary cache until the synchronization is complete. In some implementations, background synchronization circuitry 1102 may continue its synchronization operations even after cache 1126 is accepted as the main cache, but before cache 1124 is used as a shadow cache in conditional speculative execution of a third instruction set. Before completing the synchronization operation 1130, the cache system may configure cache 1124 as a secondary cache between cache 1126 and memory during speculative execution, such that when the content of a memory address is not found in cache 1126, the cache system checks cache 1124 to determine whether the content is in cache 1124; and if so, copies the content from cache 1124 to cache 1126 (rather than loading it directly from memory).
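The secondary-cache fallback described in the last sentence can be sketched as a three-level lookup. This Python model is hypothetical; the dict-based caches and the function name are assumptions for illustration.

```python
# Sketch of using the old main cache as a secondary cache while background
# synchronization is still in progress: a miss in the new primary checks the
# secondary and copies the entry over rather than going to memory.
def lookup(primary, secondary, memory, address):
    """Resolve an address through primary, then secondary, then memory."""
    if address in primary:
        return primary[address]
    if address in secondary:
        # Copy from the secondary cache instead of loading from memory,
        # since cache-to-cache copies are faster than a memory access.
        primary[address] = secondary[address]
        return primary[address]
    primary[address] = memory[address]
    return primary[address]
```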
When the processor stores data at a memory address and the data is cached in cache 1126, the cache system checks and invalidates the corresponding content cached in cache 1124, which is the secondary cache. After cache 1126 is reconfigured as the primary cache upon accepting the results of the speculative execution of the second instruction set, the background synchronization circuitry 1102 may begin to synchronize 1132 the contents of the cache from cache 1126 to cache 1124, as illustrated in FIG. 11B. After speculative execution of the second instruction set, if the speculative state from the processor indicates that the results of execution of the second instruction set should be rejected, cache 1124 continues to act as the primary cache, and the contents of cache 1126 may be invalidated. Invalidation may involve cache 1126 marking all of its entries as empty; thus, any subsequent speculation begins with an empty speculative cache. Background synchronization circuitry 1102 may again synchronize 1130 the contents of the cache from cache 1124 to cache 1126 for speculative execution of a third instruction set. In some embodiments, cache 1124 and cache 1126 each have dedicated and fixed sets of cache sets, and configurable bits are used to control their use as main cache or shadow cache, as illustrated in FIGS. 3A, 3B, 5A, and 5B. In other embodiments, caches 1124 and 1126 may share a pool of cache sets, some of which may be dynamically allocated to caches 1124 and 1126, as illustrated in FIGS. 6 to 10. When cache 1124 is used as a main cache and cache 1126 is used as a shadow cache, cache 1126 may have a smaller number of cache sets than cache 1124.
Some of the cache sets in cache 1126 may be shadows of a portion of the cache sets in cache 1124, such that when a result of speculative execution is determined to be accepted, that portion of the cache sets in cache 1124 may be reconfigured for use as a shadow cache in the next speculative execution, and the remaining portion of the cache sets not affected by speculative execution may be reallocated from cache 1124 to cache 1126 so that the cached content can be further used in subsequent non-speculative executions. FIG. 12 shows an example operation of the background synchronization circuitry 1102 of FIGS. 11A and 11B in accordance with some embodiments of the present disclosure. As shown in FIG. 12, at operation 1202, the cache system configures the first cache as the primary cache and the second cache as the shadow cache. For example, when dedicated caches with fixed hardware structures are used as the first cache and the second cache, configurable bits can be used to configure the first cache as the main cache and the second cache as the shadow cache, as illustrated in FIGS. 2 to 5B. Alternatively, registers may be used to allocate cache sets from a pool of cache sets into and out of the first cache and the second cache, in the manner illustrated in FIGS. 6 to 10. At operation 1204, the cache system determines whether the current execution type has changed from non-speculative to speculative. For example, when the processor accesses memory via the cache system 200, the processor further provides an indication of whether the current memory access is associated with conditional speculative execution. For example, an indication may be provided in signal line 205d configured to specify the type of execution. If the current execution type has not changed from non-speculative to speculative, then at operation 1206 the cache system services memory access requests from the processor using the first cache as the primary cache.
When a memory access changes the cached contents in the first cache, the background synchronization circuitry 1102 may copy the contents cached in the first cache to the second cache in operation 1208. For example, background synchronization circuitry 1102 may be part of logic circuit 206 in FIG. 2, 406 in FIG. 4, 606 in FIG. 6, and/or 1006 in FIG. 10. Background synchronization circuitry 1102 may prioritize the copying of dirty content cached in the first cache. In FIG. 12, operations 1204 through 1208 are repeated until the cache system 200 determines that the current execution type has changed to speculative. Optionally, background synchronization circuitry 1102 is configured to continue copying content cached in the first cache to the second cache in operation 1210 to complete synchronization of at least the dirty content from the first cache to the second cache, before the cache system is allowed, in operation 1212, to service memory requests from the processor during speculative execution using the second cache. Optionally, background synchronization circuitry 1102 may continue synchronization operations while the cache system uses the second cache in operation 1212 to service memory requests from the processor during speculative execution. At operation 1214, the cache system determines whether the current execution type has changed to non-speculative. If the current execution type is still speculative, operations 1210 and 1212 may be repeated. In response to determining at operation 1214 that the current execution type has changed to non-speculative, the cache system determines whether the results of the speculative execution are to be accepted. The results of the speculative execution correspond to changes in the cached contents in the second cache. For example, processor 401 may provide an indication of whether the results of speculative execution should be accepted via speculative state signal line 404 illustrated in FIG. 4 or speculative state signal line 1004 in FIG. 10. If, in operation 1216, the cache system determines that the results of the speculative execution are to be rejected, the cache system may, in operation 1222, discard the contents currently cached in the second cache (e.g., by setting the invalid bits of the cache blocks in the second cache). Then, in operation 1224, the cache system may maintain the first cache as the main cache and the second cache as the shadow cache; and in operation 1208, the background synchronization circuitry 1102 may again copy the cached contents from the first cache to the second cache. While execution is still non-speculative, operations 1204-1208 may be repeated. If, in operation 1216, the cache system determines that the results of the speculative execution are to be accepted, then the background synchronization circuitry 1102 is configured to, in operation 1218, further copy the content cached in the first cache to the second cache to complete synchronization of at least the dirty content from the first cache to the second cache. In operation 1220, the cache system configures the first cache as the shadow cache and the second cache as the primary cache, in a manner somewhat similar to operation 1202. When the first cache is configured as the shadow cache, the cache system may invalidate its contents and then synchronize the cached contents of the second cache to the first cache, in a manner somewhat similar to operations 1222, 1224, 1208 and 1204. For example, when dedicated caches having fixed hardware structures are used as the first cache and the second cache, the configurable bits may be changed in operation 1220 to configure the first cache as the shadow cache and the second cache as the primary cache.
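The background copying in operations 1208, 1210, and 1218 prioritizes dirty content so that a switch into speculative execution can proceed as soon as at least the dirty entries have been synchronized. The following is a minimal software sketch of that ordering; the function and variable names are illustrative assumptions, not the claimed circuitry.

```python
# Hypothetical sketch of dirty-first background synchronization:
# dirty (modified) entries in the first cache are copied to the second
# cache before clean ones.

def background_sync_step(first, second, dirty):
    """Copy one out-of-date entry per step, preferring dirty addresses."""
    pending = [a for a in first if second.get(a) != first[a]]
    if not pending:
        return False                        # caches already synchronized
    pending.sort(key=lambda a: a not in dirty)  # dirty entries sort first
    addr = pending[0]
    second[addr] = first[addr]
    dirty.discard(addr)
    return True

first = {"a": 1, "b": 2, "c": 3}
second = {}
dirty = {"b"}
background_sync_step(first, second, dirty)
assert second == {"b": 2}                   # dirty entry copied first
while background_sync_step(first, second, dirty):
    pass
assert second == first                      # synchronization complete
```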
Alternatively, when cache sets may be allocated from a pool of cache sets to the first cache and the second cache using registers in the manner illustrated in FIGS. 6 to 10, a cache set that was initially in the first cache but not in the second cache and that is affected by the speculative execution may be reconfigured to join the second cache via its associated registers (e.g., registers 612a and 612b illustrated in FIGS. 6 and 10). The cache sets originally in the first cache (but now holding no valid data content in view of the content in the second cache) can be reconfigured as the new first cache. Optionally, additional cache sets can be allocated from the available pool of cache sets and added to the new first cache. Optionally, some of the cache sets with invalidated cache content may be put back into the available pool of cache sets for future allocation (e.g., for addition to the second cache as the primary cache or to the first cache as a shadow cache). In this specification, the present disclosure has been described with reference to specific exemplary embodiments of the present disclosure. It will, however, be evident that various modifications may be made therein without departing from the broader spirit and scope as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. For example, embodiments may include a cache system including: a first cache; a second cache; a connection to a command bus coupled between the cache system and a processor; a connection to an address bus coupled between the cache system and the processor; a connection to a data bus coupled between the cache system and the processor; a connection to an execution type signal line from the processor identifying an execution type; and logic coupled to control the first cache and the second cache according to the execution type. In such an embodiment, the cache system is configured to be coupled between the processor and a memory system.
Furthermore, when the execution type is a first type indicating non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit is configured to copy a portion of the content cached in the first cache to the second cache. In such an embodiment, the logic circuit may be configured to copy the portion of the content cached in the first cache to the second cache independently of the current command received in the command bus. Furthermore, when the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit may be configured to use the second cache to service subsequent commands from the command bus in response to a change in the execution type from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit may also be configured to complete synchronizing the portion of the content from the first cache to the second cache before servicing the subsequent commands after the execution type is changed from the first type to the second type. The logic circuit may also be configured to continue to synchronize the portion of the content from the first cache to the second cache when servicing the subsequent commands. In such an embodiment, the cache system may further include configurable data bits, and the logic circuit may be further coupled to control the first cache and the second cache according to the configurable data bits.
When the configurable data bits are in a first state, the logic circuit may be configured to: when the execution type is the first type, implement commands received from the command bus for accessing the memory system via the first cache; and when the execution type is the second type, implement commands received from the command bus for accessing the memory system via the second cache. And, when the configurable data bits are in a second state, the logic circuit may be configured to: when the execution type is the first type, implement commands received from the command bus for accessing the memory system via the second cache; and when the execution type is the second type, implement commands received from the command bus for accessing the memory system via the first cache. The logic circuit may also be configured to toggle the configurable data bits when the execution type is changed from the second type to the first type. In such an embodiment, the cache system may further include a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor. The connection to the speculative state signal line is configured to receive the state of the speculative execution. The state of the speculative execution indicates whether the results of the speculative execution will be accepted or rejected.
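The configurable-bit routing described above reduces to a simple selection function: the bit determines which physical cache serves each execution type, and toggling the bit swaps the roles. The sketch below is illustrative only (the cache names "A" and "B" and the function name are assumptions, not part of the disclosure).

```python
# Minimal sketch of configurable-bit cache selection.
# Execution type 1 = non-speculative, type 2 = speculative.

def serving_cache(config_bit, execution_type):
    # bit 0: cache A serves type 1 and cache B serves type 2;
    # bit 1 swaps the two roles.
    non_speculative = execution_type == 1
    return "A" if (non_speculative ^ config_bit) else "B"

config_bit = 0
assert serving_cache(config_bit, 1) == "A"   # normal execution via cache A
assert serving_cache(config_bit, 2) == "B"   # speculation via cache B

config_bit ^= 1                              # results accepted: toggle bit
assert serving_cache(config_bit, 1) == "B"   # the roles are now swapped
assert serving_cache(config_bit, 2) == "A"
```

On a rejected speculation the bit is simply left unchanged, which models the "maintain the configurable data bits" branch described above.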
When the execution type is changed from the second type to the first type, the logic circuit may be configured to: toggle the configurable data bits if the state of the speculative execution indicates that the results of the speculative execution will be accepted; and maintain the configurable data bits unchanged if the state of the speculative execution indicates that the results of the speculative execution will be rejected. Furthermore, in such an embodiment, the first cache and the second cache collectively comprise: a plurality of cache sets, including a first cache set and a second cache set; and a plurality of registers respectively associated with the plurality of cache sets, the plurality of registers including a first register associated with the first cache set and a second register associated with the second cache set. In such an example, the logic circuit may be further coupled to control the plurality of cache sets in accordance with the plurality of registers. Furthermore, when the connection to the address bus receives a memory address from the processor, the logic circuit may be configured to: generate a set index from at least the memory address; and determine whether the generated set index matches the content stored in the first register or the content stored in the second register. The logic circuit may also be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated set index matching the content stored in the first register, and to implement the command via the second cache set in response to the generated set index matching the content stored in the second register.
Furthermore, in response to determining that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit may be configured to allocate the first cache set for caching the data set and to store the generated set index in the first register. Additionally, in such an embodiment with cache sets, the cache system may also include a connection to an execution type signal line from the processor identifying the execution type, and the generated set index may be further generated based on the type identified by the execution type signal line. The generated set index may contain a predetermined segment of bits of the memory address and bits representing the type identified by the execution type signal line. Furthermore, when the first and second registers are in a first state, the logic circuit may be configured to: when the execution type is the first type, implement commands received from the command bus for accessing the memory system via the first cache set; and when the execution type is the second type, implement commands received from the command bus for accessing the memory system via the second cache set. And, when the first and second registers are in a second state, the logic circuit may be configured to: when the execution type is the first type, implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets other than the first cache set; and when the execution type is the second type, implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets other than the second cache set. In such an embodiment with cache sets, each of the plurality of registers may be configured to store a set index.
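The set-index lookup described above (a bit segment of the memory address combined with the execution-type bit, matched against per-cache-set registers, with allocation on a miss) can be sketched in a few lines. The field widths and names below are arbitrary illustrative choices, not the disclosure's actual bit layout.

```python
# Illustrative sketch of set-index generation, register matching, and
# allocation of a free cache set on a miss.

def make_set_index(address, speculative):
    # Hypothetical layout: two address bits plus one execution-type bit.
    return ((address >> 6) & 0x3) | (int(speculative) << 2)

registers = {0: None, 1: None}        # cache set id -> stored set index

def lookup_or_allocate(address, speculative):
    idx = make_set_index(address, speculative)
    for cache_set, stored in registers.items():
        if stored == idx:             # generated index matches a register
            return cache_set, "hit"
    for cache_set, stored in registers.items():
        if stored is None:            # allocate a free cache set
            registers[cache_set] = idx
            return cache_set, "allocated"
    raise RuntimeError("no free cache set")

s1 = lookup_or_allocate(0x80, speculative=False)
s2 = lookup_or_allocate(0x80, speculative=False)
assert s1 == (0, "allocated") and s2 == (0, "hit")
# The same address under speculation yields a different set index,
# so speculative accesses land in a different cache set.
s3 = lookup_or_allocate(0x80, speculative=True)
assert s3 == (1, "allocated")
```

Because the execution-type bit is part of the generated index, speculative and non-speculative accesses to the same address are automatically steered to different cache sets, which is the point of including that bit.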
Also, when the execution type is changed from the second type to the first type, the logic circuit may be configured to change the content stored in the first register and the content stored in the second register. Furthermore, the first type may be configured to indicate non-speculative execution of instructions by the processor, and the second type may be configured to indicate speculative execution of instructions by the processor. In such an example, the cache system may further include a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor. The connection to the speculative state signal line is configured to receive the state of the speculative execution, and the state of the speculative execution indicates whether the results of the speculative execution will be accepted or rejected. When the execution type is changed from the second type to the first type, the logic circuit may be configured to change the content stored in the first register and the content stored in the second register if the state of the speculative execution indicates that the results of the speculative execution will be accepted, and to maintain the content stored in the first register and the content stored in the second register unchanged if the state of the speculative execution indicates that the results of the speculative execution will be rejected. Furthermore, for example, embodiments may include a cache system that includes, in general, a plurality of cache sets and a plurality of registers respectively associated with the plurality of cache sets. The plurality of cache sets includes a first cache set and a second cache set, and the plurality of registers includes a first register associated with the first cache set and a second register associated with the second cache set.
Similarly, in such an embodiment, the cache system may include a connection to a command bus coupled between the cache system and the processor, a connection to an address bus coupled between the cache system and the processor, a connection to a data bus coupled between the cache system and the processor, a connection to an execution type signal line from the processor identifying the execution type, and logic coupled to control the plurality of cache sets according to the execution type. The cache system may also be configured to be coupled between the processor and the memory system. Also, when the execution type is a first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic may be configured to copy a portion of the content cached in the first cache set to the second cache set. In such an embodiment with cache sets, the logic circuit may be configured to copy the portion of the content cached in the first cache set to the second cache set independently of the current command received in the command bus. When the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit may be configured to use the second cache set to service subsequent commands from the command bus in response to the execution type being changed from the first type to a second type indicating speculative execution of instructions by the processor. The logic may also be configured to complete synchronizing the portion of the content from the first cache set to the second cache set before servicing the subsequent commands after the execution type is changed from the first type to the second type.
The logic may also be configured to continue to synchronize the portion of the content from the first cache set to the second cache set when servicing the subsequent commands. Furthermore, in such embodiments with cache sets, the logic circuit may be further coupled to control the plurality of cache sets in accordance with the plurality of registers. When the connection to the address bus receives a memory address from the processor, the logic circuit may be configured to: generate a set index from at least the memory address; and determine whether the generated set index matches the content stored in the first register or the content stored in the second register. The logic circuit may also be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated set index matching the content stored in the first register, and to implement the command via the second cache set in response to the generated set index matching the content stored in the second register. Furthermore, in response to determining that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit may be configured to allocate the first cache set for caching the data set and to store the generated set index in the first register. Additionally, in such embodiments with cache sets, the cache system may further include a connection to an execution type signal line from the processor identifying the execution type, and the generated set index may be further generated based on the type identified by the execution type signal line. The generated set index may contain a predetermined segment of bits of the memory address and bits representing the type identified by the execution type signal line.
When the first and second registers are in a first state, the logic circuit may be configured to: when the execution type is the first type, implement commands received from the command bus for accessing the memory system via the first cache set; and when the execution type is the second type, implement commands received from the command bus for accessing the memory system via the second cache set. And, when the first and second registers are in a second state, the logic circuit may be configured to: when the execution type is the first type, implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets other than the first cache set; and when the execution type is the second type, implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets other than the second cache set. In such an embodiment with cache sets, each of the plurality of registers may be configured to store a set index, and when the execution type is changed from the second type to the first type, the logic circuit may be configured to change the content stored in the first register and the content stored in the second register. Furthermore, the first type may be configured to indicate non-speculative execution of instructions by the processor, and the second type may be configured to indicate speculative execution of instructions by the processor. In such an embodiment with cache sets, the cache system may also include a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor.
The connection to the speculative state signal line is configured to receive the state of the speculative execution, and the state of the speculative execution indicates whether the results of the speculative execution will be accepted or rejected. When the execution type is changed from the second type to the first type, the logic circuit may be configured to change the content stored in the first register and the content stored in the second register if the state of the speculative execution indicates that the results of the speculative execution will be accepted, and to maintain the content stored in the first register and the content stored in the second register unchanged if the state of the speculative execution indicates that the results of the speculative execution will be rejected. Furthermore, in such an embodiment with cache sets, the cache sets may be divided among multiple caches within the cache system. For example, the cache sets may be divided among first and second caches of the plurality of caches. FIGS. 13, 14A, 14B, 14C, 15A, 15B, 15C, and 15D show example aspects of an example computing device having a cache system (e.g., see cache system 1000 shown in FIG. 13) with interchangeable cache sets (e.g., see cache sets 1310a, 1310b, 1310c, and 1310d), including a spare cache set (e.g., see spare cache set 1310d shown in FIGS. 14A and 15A) to accelerate speculative execution, in accordance with some embodiments of the present disclosure. In addition to using a shadow cache to secure speculative execution and synchronizing content between the main and shadow caches so that content cached in the main cache is preserved in preparation for accepting the content in the shadow cache, a spare cache set may also be used to accelerate speculative execution (see, e.g., spare cache set 1310d as depicted in FIGS. 14A and 15A, cache set 1310b as depicted in FIGS. 15B and 15C, and cache set 1310c as depicted in FIG. 15D). A spare cache set may also be used to speed up speculative execution without using shadow caches. Data held in a cache set used as a shadow cache may be validated and thus made available for normal execution (see, e.g., cache set 1310c as depicted in FIGS. 14A and 15A, cache set 1310d as depicted in FIGS. 15B and 15C, and cache set 1310b as depicted in FIG. 15D, each of which is a cache set available for speculative execution as a shadow cache and then available for normal execution after content validation). Also, some cache sets used as the main cache for normal or non-speculative execution (see, e.g., cache set 1310b as depicted in FIGS. 14A and 15A, cache set 1310c as depicted in FIGS. 15B and 15C, and cache set 1310d as depicted in FIG. 15D) may not be ready to be used as shadow caches for speculative execution. Accordingly, one or more cache sets may be used as spare cache sets to avoid delays waiting for cache set availability (see, e.g., cache set 1310d as depicted in FIGS. 14A and 15A, cache set 1310b as depicted in FIGS. 15B and 15C, and cache set 1310c as depicted in FIG. 15D). Once the speculation is confirmed, the contents of the cache set used as the shadow cache are confirmed to be valid and up-to-date; thus, the cache set previously used as the shadow cache for the speculative execution is used for normal execution. See, for example, cache set 1310c as depicted in FIGS. 14A and 15A, cache set 1310d as depicted in FIGS. 15B and 15C, and cache set 1310b as depicted in FIG. 15D, each of which is a cache set available for speculative execution as a shadow cache and then available for normal execution after content validation. However, some of the cache sets initially used as the normal cache may not be ready for subsequent speculative execution. See, for example, cache set 1310b as depicted in FIGS. 14A and 15A, cache set 1310c as depicted in FIGS. 15B and 15C, and cache set 1310d as depicted in FIG. 15D, each of which is used as part of the normal cache but may not be ready for subsequent speculative execution. Thus, one or more cache sets may be used as spare cache sets to avoid delays waiting for cache set availability and to speed up speculative execution. See, for example, cache set 1310d as depicted in FIGS. 14A and 15A, cache set 1310b as depicted in FIGS. 15B and 15C, and cache set 1310c as depicted in FIG. 15D, each of which is used as a spare cache set. In some embodiments in which the cache system has background synchronization circuitry (e.g., see background synchronization circuitry 1102), if synchronization of a cache set in the normal cache to the corresponding cache set in the shadow cache has not been completed (e.g., see synchronization 1130 shown in FIG. 11A), then the cache set in the normal cache cannot be immediately freed for use in the next speculative execution.
In this case, if there were no spare cache set, the next speculative execution would have to wait until the synchronization is complete so that the corresponding cache set in the normal cache can be freed. This is just one instance in which spare cache sets are beneficial. There are many other situations in which a cache set in the normal cache cannot be freed immediately. Also, for example, a speculative execution may reference a memory region in the memory system (see, e.g., memory system 603 in FIGS. 6, 10, and 13) that does not overlap with the memory regions cached in the cache sets used in the normal cache. Upon accepting the results of the speculative execution, the cache sets of the shadow cache and the normal cache are then all in the normal cache. This can also cause delays, as the cache system spends time freeing a cache set to support the next speculative execution. In order to free a cache set, the cache system needs to identify a cache set, such as the least used cache set, and synchronize the cache set with the memory system. If the cache has newer data than the memory system, then the data needs to be written to the memory system. Additionally, the use of spare cache sets (e.g., see cache set 1310d as depicted in FIGS. 14A and 15A, cache set 1310b as depicted in FIGS. 15B and 15C, and cache set 1310c as depicted in FIG. 15D) may also rely on the background synchronization circuitry (e.g., background synchronization circuitry 1102). When the initial speculation is confirmed, the cache set used in the initial speculation (e.g., see cache set 1310c as depicted in FIGS. 14A and 15A) may be swapped to join the set of cache sets used for main execution (e.g., see cache set 1310a as shown in FIGS. 14A-14C and 15A-15D, which is one of the set of cache sets used for main or non-speculative execution).
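The rotation described above, in which the validated shadow set joins the main sets and the pre-synchronized spare set immediately becomes the next shadow, can be sketched as follows. This is a toy model with hypothetical set names; the real system tracks sets through registers or mapping tables as described herein.

```python
# Hypothetical sketch of spare-cache-set rotation on speculation accept:
# the spare set is ready at once, so the next speculation never waits
# for an old main set to be synchronized and freed.

class SetRotation:
    def __init__(self):
        self.main = ["set_a", "set_b"]  # sets serving normal execution
        self.shadow = "set_c"           # set serving speculative execution
        self.spare = "set_d"            # pre-synchronized spare set

    def speculation_accepted(self):
        retired = self.main.pop()       # e.g., the least used main set
        self.main.append(self.shadow)   # validated shadow joins main
        self.shadow = self.spare        # spare becomes the next shadow
        self.spare = retired            # retired set is resynced in background

r = SetRotation()
r.speculation_accepted()
assert r.shadow == "set_d"              # next speculation can start at once
assert "set_c" in r.main and r.spare == "set_b"
```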
Instead of reusing the cache set from the previous main execution (which would be used again in the case of a speculation failure; see, e.g., cache set 1310b as depicted in FIGS. 14A and 15A, cache set 1310c as depicted in FIGS. 15B and 15C, and cache set 1310d as depicted in FIG. 15D), a spare cache set may be made immediately available for the next speculative execution (see, e.g., cache set 1310d as depicted in FIGS. 14A and 15A, cache set 1310b as depicted in FIGS. 15B and 15C, and cache set 1310c as depicted in FIG. 15D). The spare cache set may be updated for the next speculative execution, e.g., via background synchronization circuitry 1102. Also, due to background synchronization, when a cache set currently used for speculative execution (e.g., cache set 1310c as shown in FIGS. 14A and 15A) is ready to be accepted for normal execution, a spare cache set (e.g., the spare cache set 1310d shown in FIGS. 14A and 15A) may already be ready for use. In this way, there is no delay waiting for the next cache set to become usable for the next speculative execution. To prepare for the next speculative execution, a spare cache set (e.g., cache set 1310d as shown in FIGS. 14A and 15A) can be synchronized with the normal cache set that is likely to be used for the next speculative execution (e.g., cache set 1310b as shown in FIGS. 14A and 15A) or with the least used cache set in the system. FIG. 13 shows example aspects of an example computing device with a cache system 1000 having interchangeable cache sets (e.g., see cache sets 1310a, 1310b, 1310c, and 1310d), including spare cache sets to speed up speculative execution, according to some embodiments of the present disclosure. The computing device in FIG. 13 is similar to the computing device depicted in FIG. 10. For example, the device shown in FIG. 13 includes processor 1001, memory system 603, cache system 1000, connections 604a-604d and 609a-609c, and connection 1002. In FIG. 13, cache system 1000 is shown having cache sets (e.g., cache sets 1310a, 1310b, 1310c, and 1310d). The cache system 1000 is also shown with a connection 604d to an execution type signal line 605d from the processor 1001 that identifies the execution type, and a connection 1002 to a signal line 1004 from the processor 1001 that identifies the state of speculative execution. The cache system 1000 is also shown to include logic circuitry 1006 that can be configured to allocate a first subset of the cache sets (e.g., see cache 602a as shown in FIG. 13) for caching in a cache operation when the execution type is a first type indicating non-speculative execution of instructions by the processor 1001. The logic circuit 1006 may also be configured to allocate a second subset of the cache sets (e.g., see cache 602b as shown in FIG. 13) for caching in a cache operation when the execution type changes from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit 1006 may also be configured to retain at least one cache set or a third subset of the cache sets (e.g., see cache 602c as shown in FIG. 13) when the execution type is the second type. The logic circuit 1006 may also be configured to reconfigure the second subset for caching in a cache operation (e.g., see cache 602b as shown in FIG. 13) when the execution type is the first type, after the execution type has changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution will be accepted.
Also, the logic circuit 1006 may be configured to allocate the at least one cache set or the third subset for caching in a cache operation (e.g., see cache 602c as shown in FIG. 13) when the execution type is changed from the first type to the second type, and when the execution type is changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution will be accepted. The logic circuit 1006 may also be configured to retain the at least one cache set or the third subset (see, for example, cache 602c as shown in FIG. 13). In some embodiments, a cache system may include one or more mapping tables that may map the cache sets referred to herein. Also, in such embodiments, logic circuits such as those mentioned herein may be configured to allocate and reconfigure subsets of the cache sets, such as the caches in the cache system, according to the one or more mapping tables. The mapping tables may be used as an alternative to, or in addition to, the cache set registers described herein. In some embodiments, as shown at least in FIGS. 13, 14A-14C, and 15A-15D, cache system 1000 may include cache set registers (see, for example, cache set registers 1312a, 1312b, 1312c, and 1312d) respectively associated with the cache sets (e.g., see cache sets 1310a, 1310b, 1310c, and 1310d). In such an embodiment, the logic circuit 1006 may be configured to allocate and reconfigure the subsets of the cache sets (e.g., see caches 602a, 602b, and 602c as shown in FIG. 13) according to the cache set registers. Furthermore, in some embodiments, as shown in FIGS. 15A-15D, the first subset of the cache sets may include a first cache set, the second subset of the cache sets may include a second cache set, and the third subset may include a third cache set.
In such an embodiment, the cache set registers may include a first cache set register associated with the first cache set, the first cache set register being configured to initially store a first cache set index such that the first cache set is used for non-speculative execution (e.g., see cache set index 1504b held in cache set register 1312b as shown in FIG. 15A). The cache set registers may also include a second cache set register associated with the second cache set, the second cache set register being configured to initially store a second cache set index such that the second cache set is used for speculative execution (e.g., see cache set index 1504c held in cache set register 1312c as shown in FIG. 15A). The cache set registers may also include a third cache set register associated with the third cache set, the third cache set register being configured to initially store a third cache set index such that the third cache set is used as a spare cache set (see, e.g., cache set index 1504d held in cache set register 1312d as shown in FIG. 15A). Furthermore, in such an embodiment, the logic circuit 1006 may be configured to generate a set index (e.g., see set indexes 1504a, 1504b, 1504c, and 1504d) based on the memory address received from the address bus 605b of the processor 1001 and the identification of speculative or non-speculative execution received from the execution type signal line 605d from the processor that identifies the execution type.
Also, the logic circuit 1006 may be configured to determine whether the set index matches the content stored in the first cache set register, the second cache set register, or the third cache set register. Also, in such an embodiment, the logic circuit 1006 may be configured to store the first cache set index in the second cache set register, or in another cache set register associated with another cache set in the second subset of the plurality of cache sets, such that when the execution type is changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution will be accepted, the second cache set or the other cache set is used for non-speculative execution. See, for example, FIG. 15B depicting cache set index 1504b held in the second cache set register 1312c so that the second cache set 1310c is used for non-speculative execution. Additionally, the logic circuit 1006 may be configured to store the second cache set index in the third cache set register, or in another cache set register associated with another cache set of the at least one cache set, such that when the execution type is changed from the second type to the first type and the status of the speculative execution indicates that the results of the speculative execution will be accepted, the third cache set or the other cache set of the at least one cache set is used for speculative execution. For example, see FIG. 15B depicting cache set index 1504c held in the third cache set register 1312d, making the third cache set 1310d available for speculative execution.
The logic circuit 1006 may also be configured to store the third cache set index in the first cache set register, or in another cache set register associated with another cache set of the first subset of the plurality of cache sets, so that when the execution type is changed from the second type to the first type and the status of the speculative execution indicates that the results of the speculative execution will be accepted, the first cache set or the other cache set in the first subset is used as a spare cache set. For example, see FIG. 15B, which depicts the cache set index 1504d held in the first cache set register 1312b, such that the first cache set 1310b is used as a spare cache set. FIGS. 14A, 14B, and 14C show example aspects of an example computing device having cache system 1000 with interchangeable cache sets (e.g., see cache sets 1310a, 1310b, 1310c, and 1310d), including a spare cache set (e.g., see spare cache set 1310d as shown in FIGS. 14A and 14B and spare cache set 1310b as shown in FIG. 14C), to speed up speculative execution. In particular, FIG. 14A shows the cache sets in a first state in which cache sets 1310a and 1310b are available for non-speculative execution, cache set 1310c is available for speculative execution, and cache set 1310d serves as a spare cache set. FIG. 14B shows the cache sets in a second state in which cache sets 1310a and 1310b are available for non-speculative execution and cache set 1310c is used for speculative execution.
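The three stores described above amount to rotating the cache set indexes one register forward, so the roles of the physical cache sets are exchanged without moving any cached data. Below is a minimal sketch of that rotation; the dict-based model is an illustrative assumption, while the register and index numerals follow FIGS. 15A-15B.

```python
# Sketch of the index rotation among cache set registers when the results of
# speculative execution are accepted (the transition from FIG. 15A to FIG. 15B).

def accept_speculation(regs):
    """Rotate cache set indexes one register forward: the set that held the
    speculative results becomes non-speculative, the spare set becomes the
    new speculative set, and the old non-speculative set becomes the spare."""
    order = list(regs)                   # e.g., ["1312b", "1312c", "1312d"]
    values = [regs[r] for r in order]
    rotated = values[-1:] + values[:-1]  # each index moves to the next register
    return dict(zip(order, rotated))

# First state (FIG. 15A): 1504b non-speculative, 1504c speculative, 1504d spare.
state_a = {"1312b": "1504b", "1312c": "1504c", "1312d": "1504d"}
# Second state (FIG. 15B): 1504b now in 1312c, 1504c in 1312d, 1504d in 1312b.
state_b = accept_speculation(state_a)
```

Applying the same rotation again yields the third state of FIG. 15D, which illustrates the point made later in the text that the cache sets, including the spare, are fully interchangeable.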
FIG. 14C shows the cache sets in a third state in which cache sets 1310a and 1310c are available for non-speculative execution, cache set 1310d is available for speculative execution, and cache set 1310b is used as a spare cache set. FIGS. 15A, 15B, 15C, and 15D each show example aspects of an example computing device having cache system 1000 with interchangeable cache sets (e.g., see cache sets 1310a, 1310b, 1310c, and 1310d), including a spare cache set, to speed up speculative execution in accordance with some embodiments of the present disclosure. In particular, FIG. 15A shows the cache sets in a first state in which cache sets 1310a and 1310b are available for non-speculative execution (or a first type of execution), cache set 1310c is available for speculative execution (or a second type of execution), and cache set 1310d is used as a spare cache set. As shown in FIG. 15A, in this first state, the logic circuit 1006 may be configured to store the cache set index 1504b in the cache set register 1312b such that the contents 1502b in the cache set 1310b are used for non-speculative execution. Additionally, in this first state, the logic circuit 1006 may be configured to store the cache set index 1504c in the cache set register 1312c, making the cache set 1310c available for speculative execution. The logic circuit 1006 may also be configured to store the cache set index 1504d in the cache set register 1312d, such that the cache set 1310d serves as a spare cache set in this first state. FIG. 15B shows the cache sets in a second state in which cache sets 1310a and 1310c are available for non-speculative execution, cache set 1310d is available for speculative execution, and cache set 1310b is used as a spare cache set.
The second state depicted in FIG. 15B occurs when the execution type is changed from the second type to the first type and the state of speculative execution indicates that the results of the speculative execution will be accepted. As shown in FIG. 15B, in this second state, the logic circuit 1006 may be configured to store the cache set index 1504b in the cache set register 1312c such that the contents 1502b in the cache set 1310c are used for non-speculative execution. Additionally, in this second state, the logic circuit 1006 may be configured to store the cache set index 1504c in the cache set register 1312d, making the cache set 1310d available for speculative execution. The logic circuit 1006 may also be configured to store the cache set index 1504d in the cache set register 1312b such that the cache set 1310b serves as a spare cache set in this second state. FIG. 15C shows the cache sets mostly in the second state, where cache sets 1310a and 1310c are available for non-speculative execution, and cache set 1310b is used as a spare cache set. However, in FIG. 15C, cache set 1310d is shown being used for speculative execution rather than merely being available for it. As shown in FIG. 15C, in this second state, logic circuit 1006 may be configured to store cache set index 1504c in cache set register 1312d, such that content 1502c held in cache set 1310d is used for speculative execution. FIG. 15D shows the cache sets in a third state in which cache sets 1310a and 1310d are available for non-speculative execution, cache set 1310b is available for speculative execution, and cache set 1310c is used as a spare cache set. In a subsequent cycle following the second state, the third state depicted in FIG. 15D occurs when the execution type is again changed from the second type to the first type and the state of speculative execution indicates that the results of the speculative execution will be accepted.
As shown in FIG. 15D, in this third state, the logic circuit 1006 may be configured to store the cache set index 1504b in the cache set register 1312d such that the contents 1502b in the cache set 1310d are used for non-speculative execution. Additionally, in this third state, the logic circuit 1006 may be configured to store the cache set index 1504c in the cache set register 1312b, making the cache set 1310b available for speculative execution. The logic circuit 1006 may also be configured to store the cache set index 1504d in the cache set register 1312c such that the cache set 1310c serves as a spare cache set in this third state. As shown by FIGS. 15A-15D, the cache sets are interchangeable, and the cache sets used as spare cache sets are also interchangeable. In such an embodiment, when connection 604b to address bus 605b receives a memory address from processor 1001, logic circuit 1006 may be configured to generate a set index from at least the memory address (e.g., based on the cache set index 112b of memory address 102b; see, e.g., set index generations 1506a, 1506b, 1506c, and 1506d for generating set indexes 1504a, 1504b, 1504c, and 1504d, respectively). Additionally, when connection 604b to address bus 605b receives a memory address from processor 1001, logic circuit 1006 may be configured to determine whether the generated set index matches the content stored in one of the registers (which may be the stored set index 1504a, 1504b, 1504c, or 1504d). Further, the logic circuit 1006 may be configured to implement the command received at connection 604a to command bus 605a via the cache set in response to the generated set index matching the content stored in the corresponding register.
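The lookup just described can be sketched as follows. The bit-field widths here are assumptions chosen only to make the sketch concrete; the disclosure says only that the generated set index is a predetermined segment of bits of the memory address.

```python
# Illustrative sketch of generating a set index from a predetermined bit
# segment of the memory address and matching it against the cache set
# registers. Field widths are assumptions, not from the disclosure.

SET_INDEX_SHIFT = 6   # assumed block-index width in bits
SET_INDEX_MASK = 0x3  # assumed 2-bit set index segment

def generate_set_index(memory_address):
    # The generated set index is a predetermined segment of address bits.
    return (memory_address >> SET_INDEX_SHIFT) & SET_INDEX_MASK

def lookup(memory_address, registers):
    """Return the register whose stored set index matches the generated one,
    or None when the data set is not currently cached (in which case a cache
    set would be allocated and the generated index stored in its register)."""
    idx = generate_set_index(memory_address)
    for reg, stored in registers.items():
        if stored == idx:
            return reg
    return None
```

A match means the command on the command bus is serviced by the cache set whose register holds the generated index; a miss triggers allocation as described in the next paragraph.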
Furthermore, in response to determining that the data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit 1006 may be configured to allocate a cache set for caching the data set and to store the generated set index in the corresponding register. The generated set index may include a predetermined segment of bits in the memory address, as shown in FIGS. 15A-15B. Furthermore, in such an embodiment, the logic circuit 1006 may be configured to generate a set index (e.g., see set indexes 1504a, 1504b, 1504c, and 1504d) based on a memory address (e.g., memory address 102b) received from the address bus 605b of the processor 1001 and the identification of speculative or non-speculative execution received from the execution type signal line 605d from the processor that identifies the execution type. Also, the logic circuit 1006 may be configured to determine whether the set index matches the content stored in the cache set register 1312b, the cache set register 1312c, or the cache set register 1312d. In some embodiments, a cache system may include a plurality of cache sets, a connection to an execution type signal line from the processor that identifies the execution type, a connection to a signal line from the processor that identifies the state of speculative execution, and a logic circuit. The logic circuit may be configured to allocate a first subset of the plurality of cache sets for caching in a cache operation when the execution type is a first type indicating non-speculative execution of instructions by the processor, and to allocate a second subset of the plurality of cache sets for caching in a cache operation when the execution type changes from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit may also be configured to retain at least one cache set (or a third subset of the plurality of cache sets) when the execution type is the second type.
When the execution type is changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution are to be accepted, the logic circuit may be further configured to reconfigure the second subset so that, when the execution type is the first type, the second subset is used for caching in cache operations. And, after the execution type has been changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution will be accepted, the logic circuit may be further configured to allocate the at least one cache set (or the third subset of the plurality of cache sets) for caching in a cache operation when the execution type is changed from the first type to the second type. In such an embodiment, the logic circuit may be configured to retain the at least one cache set (or the third subset of the plurality of cache sets) when the execution type is the second type, and the at least one cache set (or the third subset) may contain the least used cache set of the plurality of cache sets. Furthermore, in such an embodiment, the cache system may include one or more mapping tables that map the plurality of cache sets. In such an instance, the logic circuit is configured to allocate and reconfigure subsets of the plurality of cache sets according to the one or more mapping tables. Furthermore, in such embodiments, the cache system may include a plurality of cache set registers associated with the plurality of cache sets, respectively. In such an instance, the logic circuit is configured to allocate and reconfigure subsets of the plurality of cache sets according to the plurality of cache set registers.
In such an example, the first subset of the plurality of cache sets may include a first cache set, the second subset of the plurality of cache sets may include a second cache set, and the at least one cache set (or the third subset of the plurality of cache sets) may include a third cache set. Additionally, the plurality of cache set registers may include a first cache set register associated with the first cache set, the first cache set register configured to initially store a first cache set index such that the first cache set is used for non-speculative execution. The plurality of cache set registers may also include a second cache set register associated with the second cache set, the second cache set register configured to initially store a second cache set index such that the second cache set is used for speculative execution. The plurality of cache set registers may also include a third cache set register associated with the third cache set, the third cache set register configured to initially store a third cache set index such that the third cache set is used as a spare cache set. In such an embodiment, the logic circuit may be configured to generate a set index based on a memory address received from an address bus from the processor and the identification of speculative or non-speculative execution received from an execution type signal line from the processor that identifies the execution type. Also, the logic circuit may be configured to determine whether the set index matches the content stored in the first cache set register, the second cache set register, or the third cache set register.
When the execution type is changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution will be accepted, the logic circuit may be further configured to store the first cache set index in the second cache set register, or in another cache set register associated with another cache set in the second subset of the plurality of cache sets, such that the second cache set or the other cache set in the second subset is used for non-speculative execution. When the execution type is changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution will be accepted, the logic circuit may be further configured to store the second cache set index in the third cache set register, or in another cache set register associated with another cache set of the at least one cache set (or the third subset of the plurality of cache sets), such that the third cache set or the other cache set of the at least one cache set (or the third subset of the plurality of cache sets) is used for speculative execution. When the execution type is changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution are to be accepted, the logic circuit may be further configured to store the third cache set index in the first cache set register, or in another cache set register associated with another cache set in the first subset of the plurality of cache sets, such that the first cache set or the other cache set in the first subset acts as a spare cache set. In some embodiments, a cache system may include a plurality of cache sets having a first subset of cache sets, a second subset of cache sets, and a third subset of cache sets.
The cache system may also include a connection to an execution type signal line from the processor that identifies the execution type, a connection to a signal line from the processor that identifies the state of speculative execution, and a logic circuit. The logic circuit may be configured to allocate the first subset of the plurality of cache sets for caching in a cache operation when the execution type is a first type indicating non-speculative execution of instructions by the processor, and to allocate the second subset of the plurality of cache sets for caching in a cache operation when the execution type changes from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit may also be configured to retain the third subset of the plurality of cache sets when the execution type is the second type. When the execution type is changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution are to be accepted, the logic circuit may be further configured to reconfigure the second subset so that, when the execution type is the first type, the second subset is used for caching in cache operations. After the execution type has been changed from the second type to the first type and the state of the speculative execution indicates that the results of the speculative execution will be accepted, the logic circuit may be further configured to allocate the third subset for caching in a cache operation when the execution type is changed from the first type to the second type. In some embodiments, the cache system may include multiple caches including a first cache, a second cache, and a third cache. The cache system may also include a connection to an execution type signal line from the processor that identifies the execution type, a connection to a signal line from the processor that identifies the state of speculative execution, and a logic circuit.
The logic circuit may be configured to allocate the first cache for caching in a cache operation when the execution type is a first type indicating non-speculative execution of instructions by the processor, and to allocate the second cache for caching in a cache operation when the execution type changes from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit may also be configured to retain the third cache when the execution type is the second type. When the execution type is changed from the second type to the first type and the status of the speculative execution indicates that the results of the speculative execution are to be accepted, the logic circuit may be further configured to reconfigure the second cache so that, when the execution type is the first type, the second cache is used for caching in cache operations. Also, the logic circuit may be further configured to allocate the third cache for caching in the cache operation when the execution type changes from the first type to the second type. FIGS. 16 and 17 show example aspects of an example computing device having a cache system with interchangeable cache sets (e.g., see cache sets 1610a, 1610b, 1710a, and 1710b) that utilize extended tags (e.g., see extended tags 1640a, 1640b, 1740a, and 1740b) for different types of execution (e.g., speculative and non-speculative execution) by the processor, in accordance with some embodiments of the present disclosure. Furthermore, FIGS. 16 and 17 illustrate different ways of addressing cache sets and cache blocks within a cache system such as cache systems 600 and 1000 depicted in FIGS. 6, 10, and 13, respectively. Furthermore, the manner in which cache sets and cache blocks may be selected via memory addresses, such as memory addresses 102e or 102b and memory addresses 102a, 102c or 102d (shown in FIG.
1), is shown. The two examples in FIGS. 16 and 17 use set associativity, and cache systems, such as cache systems 600 and 1000, may be implemented using set associativity. In FIG. 16, set associativity is implicitly defined (e.g., defined by an algorithm that can be used to determine which tag should be in which cache set for a given execution type). In FIG. 17, set associativity is implemented via the bits of the cache set index in the memory address. Furthermore, the functionality illustrated in FIGS. 16 and 17 may be implemented without the use of set associativity (although this is not depicted), such as by cache systems 200 and 400 shown in FIGS. 2 and 4, respectively. In FIGS. 16 and 17, block indexes (e.g., see block indexes 106e and 106b) may be used as addresses within individual cache sets (e.g., see cache sets 1610a, 1610b, 1710a, and 1710b) to identify specific cache blocks (see, for example, cache blocks 1624a, 1624b, 1628a, 1628b, 1724a, 1724b, 1728a, and 1728b). Also, extended tags (e.g., extended tags 1640a, 1640b, 1740a, 1740b, 1650, and 1750) may be used as addresses for the cache sets. A block index (e.g., see block indexes 106e and 106b) of a memory address (e.g., see memory addresses 102e and 102b) can be used within each cache set (e.g., see cache sets 1610a, 1610b, 1710a, and 1710b) to obtain a cache block (see, for example, cache blocks 1624a, 1624b, 1628a, 1628b, 1724a, 1724b, 1728a, and 1728b) and the tag associated with the cache block (see, for example, corresponding tags 1622a, 1622b, 1626a, 1626b, 1722a, 1722b, 1726a, and 1726b). In addition, as shown in FIGS. 16 and 17, tag comparison circuits (e.g., tag comparison circuits 1660a, 1660b, 1760a, and 1760b) may compare the extended tags generated from the cache sets (e.g., extended tags 1640a, 1640b, 1740a, and 1740b) to an extended tag (e.g., extended tag 1650) derived from a memory address (e.g., see memory addresses 102e and 102b) and the current execution type (e.g., see execution types 110e and 110b) to determine a cache hit or miss. The construction of the extended tag ensures that there is at most one hit among the cache sets (e.g., see cache sets 1610a, 1610b, 1710a, and 1710b). If there is a hit, a cache block from the selected cache set (see, e.g., cache blocks 1624a, 1624b, 1628a, 1628b, 1724a, 1724b, 1728a, and 1728b) provides the output. Otherwise, the data associated with the memory address (e.g., memory address 102e or 102b) is not cached in, or output from, any of the cache sets. Briefly, the extended tags depicted in FIGS. 16 and 17 are used to select cache sets, and the block index is used to select cache blocks and their tags within the cache set. Furthermore, as shown in FIGS. 16 and 17, the memory addresses are partitioned differently (see, e.g., addresses 102e and 102b); and thus, the control of cache operations also differs according to the address. However, there are some similarities. For example, the systems shown in FIGS. 16 and 17 control cache set usage via set associativity. Control of cache operations may include control of whether the cache set is used for a first or second type of execution by the processor (e.g., non-speculative and speculative execution), and such control may be partly or completely handled via set associativity. In FIG. 16, extended tag 1650 for memory address 102e has execution type 110e and tag 104e with a cache set indicator that enforces set associativity. In FIG. 17, extended tag 1750 for memory address 102b has execution type 110e, cache set index 112b, and tag 104b. In such an instance, the cache set index 112b implements set associativity rather than the cache set indicator in the tag.
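The extended-tag comparison described above can be modeled briefly. This is a hedged sketch: the tuple encoding of an extended tag and the dictionary layout of a cache set are assumptions made for illustration, not the disclosed circuit structure, though the at-most-one-hit property is stated in the text.

```python
# Minimal model of the extended tag comparison of FIGS. 16 and 17: the extended
# tag combines the execution type, cache set index, and tag. The tuple encoding
# is an illustrative assumption.

def make_extended_tag(execution_type, cache_set_index, tag):
    return (execution_type, cache_set_index, tag)

def probe(address_ext_tag, cache_sets, block_index):
    """Compare the extended tag derived from the address and current execution
    type against the extended tag of each cache set; return the name of the
    hit set, or None on a miss."""
    hits = []
    for cs in cache_sets:
        block_tag = cs["tags"][block_index]  # block index selects tag within the set
        set_ext_tag = make_extended_tag(cs["execution_type"],
                                        cs["set_index"], block_tag)
        if set_ext_tag == address_ext_tag:
            hits.append(cs["name"])
    assert len(hits) <= 1  # extended-tag construction guarantees at most one hit
    return hits[0] if hits else None
```

Note how two cache sets holding the same block tag still produce at most one hit, because their stored execution types differ.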
The different partitioning of memory addresses slightly changes the way in which the extended tags (e.g., extended tags 1640a, 1640b, 1650, 1740a, 1740b, and 1750) control cache operations via set associativity. In either case of memory address partitioning, in the example, the extended tag derived from the memory address and execution type (see, for example, extended tags 1650 and 1750) and the extended tags for the cache sets (see, for example, extended tags 1640a, 1640b, 1740a, and 1740b) are compared for controlling the cache operations implemented via the cache sets. The tag comparison circuits (e.g., tag comparison circuits 1660a, 1660b, 1760a, and 1760b) may output a hit or a miss, depending on whether the extended tags input into the comparison circuits match or do not match. The extended tag for a cache set (e.g., see extended tags 1640a, 1640b, 1740a, and 1740b) can be derived from the execution type (e.g., see execution types 1632a, 1632b, 1732a, and 1732b) and a block tag (e.g., see tags 1622a, 1622b, 1626a, 1626b, 1722a, 1722b, 1726a, and 1726b) from the cache set (e.g., see cache sets 1610a, 1610b, 1710a, and 1710b). Also, as shown in FIGS. 16 and 17, the execution type differs in each register of the cache sets. For the example shown, a first cache set (e.g., cache set 1610a or 1710a) may be used for a first type of execution (e.g., non-speculative execution), and a second cache set (e.g., cache set 1610b or 1710b) may be used for a second type of execution (e.g., speculative execution). In FIG. 17, the combination of tag 104b and cache set index 112b provides similar functionality to tag 104e shown in FIG. 16. However, in FIG. 17, by separating the tag 104b and the cache set index 112b, the cache sets do not have to store redundant copies of the cache set index 112b, because each cache set (e.g., see cache sets 1710a and 1710b) may be associated with a cache set register (e.g., see registers 1712a and 1712b) that holds the cache set index (e.g., see cache set indexes 1732a and 1732b).
Whereas in FIG. 16, the cache sets (see, e.g., cache sets 1610a and 1610b) do need to store redundant copies of the cache set indicator in each of their blocks (see, e.g., blocks 1624a, 1624b, 1628a, and 1628b), because the cache sets' associated registers are not configured to hold a cache set index. In other words, since tags 1622a, 1622b, etc. have the same cache set indicator, the indicator may be stored once in the register for the cache set (e.g., see cache set registers 1712a and 1712b). This is one of the benefits of the arrangement depicted in FIG. 17 over the arrangement depicted in FIG. 16. Furthermore, the tags 1722a, 1722b, 1726a, and 1726b in FIG. 17 are shorter compared to the embodiment of the tags shown in FIG. 16, because the depicted cache set registers (e.g., registers 1712a and 1712b) store both the cache set index and the execution type. The extended cache set index may be used to select one of the cache sets when the execution type is combined with the cache set index to form the extended cache set index. Next, the tags from the selected cache set are compared to the tag in the address to determine a hit or miss. This two-step selection can be similar to conventional two-step selection using cache set indexes, or can be used in combination with extended tags to support more efficient interchange of cache sets for different execution types (e.g., speculative and non-speculative types of execution). In some embodiments, a cache system (e.g., cache system 600 or 1000) may include multiple cache sets (e.g., cache sets 610a-610c, 1010a-1010c, 1310a-1310d, 1610a-1610b, or 1710a-1710b). The plurality of cache sets may include a first cache set and a second cache set (e.g., see cache sets 1610a-1610b and cache sets 1710a-1710b). The cache system may also include multiple registers (e.g., registers 612a-612c, 1012a-1012c, 1312a-1312d, 1612a-1612b, or 1712a-1712b) associated with the multiple cache sets, respectively.
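The FIG. 17 style of address partitioning, with the tag, cache set index, and block index as separate fields, can be sketched as below. The bit widths are illustrative assumptions; the disclosure does not specify field sizes.

```python
# Sketch of the FIG. 17 address partitioning: the cache set index is separated
# from the tag, so each cache set register stores the set index once instead of
# every block tag carrying a redundant cache set indicator.
# Bit widths are illustrative assumptions.

TAG_BITS, SET_BITS, BLOCK_BITS = 8, 2, 6

def partition_address(addr):
    """Split a memory address into (tag, cache set index, block index)."""
    block_index = addr & ((1 << BLOCK_BITS) - 1)
    set_index = (addr >> BLOCK_BITS) & ((1 << SET_BITS) - 1)
    tag = (addr >> (BLOCK_BITS + SET_BITS)) & ((1 << TAG_BITS) - 1)
    return tag, set_index, block_index
```

Under this partitioning, only the tag field needs to be stored per block, which is why the tags of FIG. 17 are shorter than those of FIG. 16, where the cache set indicator is folded into each stored tag.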
The plurality of registers may include a first register associated with the first cache set, and a second register associated with the second cache set (e.g., see registers 1612a-1612b and registers 1712a-1712b). The cache system may also include a connection (e.g., see connection 604a) to a command bus (e.g., see command bus 605a) coupled between the cache system and the processor (e.g., see processors 601 and 1001). The cache system may also include a connection (e.g., see connection 604b) to an address bus (e.g., see address bus 605b) coupled between the cache system and the processor. The cache system may also include a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers (e.g., see logic circuits 606 and 1006). When the connection to the address bus receives a memory address from the processor (see, e.g., memory addresses 102a-102e shown in FIG. 1 and addresses 102e and 102b shown in FIGS. 16 and 17, respectively), the logic circuit may be configured to generate an extended tag from at least the memory address (e.g., see extended tags 1650 and 1750).
Additionally, when the connection to the address bus receives a memory address from the processor, the logic circuit may be configured to determine whether the generated extended tag (e.g., see extended tags 1650 and 1750) matches a first extended tag (e.g., see extended tags 1640a and 1740a) of the first cache set (e.g., see cache sets 1610a and 1710a) or a second extended tag (e.g., see extended tags 1640b and 1740b) of the second cache set (e.g., see cache sets 1610b and 1710b). The logic circuit (e.g., see logic circuits 606 and 1006) may also be configured to implement a command received at the connection (e.g., see connection 604a) to the command bus (e.g., see command bus 605a) via the first cache set (e.g., see cache sets 1610a and 1710a) in response to the generated extended tag (e.g., see extended tags 1650 and 1750) matching the first extended tag (e.g., see extended tags 1640a and 1740a), and to implement the command via the second cache set (e.g., see cache sets 1610b and 1710b) in response to the generated extended tag matching the second extended tag (e.g., see extended tags 1640b and 1740b). The logic circuit (e.g., see logic circuits 606 and 1006) may also be configured to generate the first extended tag (e.g., see extended tags 1640a and 1740a) from a cache address of the first cache set (e.g., see the block marked 'tag' in extended tags 1640a and 1740a, as well as tags 1622a, 1622b, 1722a, 1722b, etc.) and the content stored in the first register (e.g., see registers 1612a and 1712a; see also the block marked 'execution type' in extended tags 1640a and 1740a and the block marked 'cache set index' in extended tag 1740a, along with execution type 1632a and cache set index 1732a). The logic circuit may also be configured to use a cache address of the second cache set (see, for example, the block labeled 'tag' in extended tags 1640b and 1740b, and tags 1626a, 1626b, 1726a, 1726b, etc.)
and the content stored in the second register (see, for example, registers 1612b and 1712b; see also the block labeled 'execution type' in extended tags 1640b and 1740b and the block marked 'cache set index' in extended tag 1740b, along with execution type 1632b and cache set index 1732b) to generate the second extended tag (e.g., see extended tags 1640b and 1740b). In some embodiments, a cache system (e.g., cache system 600 or 1000) may further include a connection (see, e.g., connection 604d) to an execution type signal line (e.g., see execution type signal line 605d) from the processor that identifies the execution type. In such an embodiment, the logic circuit (e.g., see logic circuits 606 and 1006) may be configured to generate the extended tag (e.g., see extended tags 1650 and 1750) from the memory address (e.g., see memory addresses 102e and 102b shown in FIGS. 16 and 17, respectively) and the execution type identified by the execution type signal line (e.g., see execution type 110e shown in FIGS. 16 and 17). Furthermore, in such embodiments, the content stored in each of the first register and the second register (e.g., see registers 1612a, 1612b, 1712a, and 1712b) may contain an execution type (e.g., see first execution type 1632a and second execution type 1632b). In some embodiments, in order to determine whether the generated extended tag (e.g., see extended tags 1650 and 1750) matches the first extended tag of the first cache set (e.g., see extended tags 1640a and 1740a) or the second extended tag of the second cache set (e.g., see extended tags 1640b and 1740b), the logic circuit (see, e.g., logic circuits 606 and 1006) can be configured to compare the first extended tag (e.g., see extended tags 1640a and 1740a) with the generated extended tag (e.g., see extended tags 1650 and 1750) to determine a cache hit or miss for the first cache set (e.g., see cache sets 1610a and 1710a).
In particular, as shown in FIGS. 16 and 17, a first tag comparison circuit (eg, see tag comparison circuits 1660a and 1760a) is configured to receive, as inputs, the first extension tag (eg, see extension tags 1640a and 1740a) and the generated extension tag (eg, see extension tags 1650 and 1750). The first tag comparison circuit (eg, see tag comparison circuits 1660a and 1760a) is also configured to compare the first extension tag with the generated extension tag to determine a cache hit or miss for the first cache set. The first tag comparison circuit (eg, see tag comparison circuits 1660a and 1760a) is also configured to output the determined cache hit or miss for the first cache set (eg, see outputs 1662a and 1762a). Furthermore, to determine whether the generated extension tag matches the first extension tag of the first cache set or the second extension tag of the second cache set, the logic circuits may be configured to compare the second extension tag (eg, see extension tags 1640b and 1740b) with the generated extension tag (eg, see extension tags 1650 and 1750) to determine a cache hit or miss for the second cache set (eg, see cache sets 1610b and 1710b). In particular, as shown in FIGS. 16 and 17, a second tag comparison circuit (eg, see tag comparison circuits 1660b and 1760b) is configured to receive, as inputs, the second extension tag (eg, see extension tags 1640b and 1740b) and the generated extension tag (eg, see extension tags 1650 and 1750). The second tag comparison circuit (eg, see tag comparison circuits 1660b and 1760b) is also configured to compare the second extension tag with the generated extension tag to determine a cache hit or miss for the second cache set.
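The extension-tag generation and matching described above can be sketched in software. The following Python sketch is purely illustrative (the `ExtTag` layout, field widths, helper names, and the encoding of execution types are assumptions, not part of the disclosure): it forms a generated extension tag from a memory address tag plus an execution type and set index, forms each cache set's stored extension tag from a block tag plus its register contents, and compares them to produce a per-set hit or miss, as the tag comparison circuits (eg, 1660a/1660b and 1760a/1760b) do in hardware.

```python
from dataclasses import dataclass

# Hypothetical extension-tag layout: a tag extended with an execution
# type and a cache set index (loosely following FIG. 17).
@dataclass(frozen=True)
class ExtTag:
    tag: int        # tag bits from the cache address / memory address
    exec_type: int  # 0 = speculative, 1 = non-speculative (assumed encoding)
    set_index: int  # cache set index

def generate_ext_tag(mem_tag, exec_type, set_index):
    """Build the generated extension tag from a memory address tag and
    the execution type identified by the execution type signal line."""
    return ExtTag(mem_tag, exec_type, set_index)

def stored_ext_tag(block_tag, register):
    """Build a cache set's extension tag from a cache block's tag and
    the content (execution type, set index) stored in the set's register."""
    return ExtTag(block_tag, register["exec_type"], register["set_index"])

def hit_or_miss(generated, stored):
    """Tag comparison circuit: hit when the extension tags match."""
    return generated == stored

# Two interchangeable cache sets with their associated registers.
reg_a = {"exec_type": 1, "set_index": 0}   # eg, like register 1712a
reg_b = {"exec_type": 0, "set_index": 1}   # eg, like register 1712b

gen = generate_ext_tag(mem_tag=0x2A, exec_type=1, set_index=0)
hit_a = hit_or_miss(gen, stored_ext_tag(0x2A, reg_a))  # matching set: hit
hit_b = hit_or_miss(gen, stored_ext_tag(0x2A, reg_b))  # other set: miss
```

Note that because the execution type and set index participate in the comparison, the same tag bits produce a hit in at most one of the two sets for a given execution type.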
The second tag comparison circuit (eg, see tag comparison circuits 1660b and 1760b) is also configured to output the determined cache hit or miss for the second cache set (eg, see outputs 1662b and 1762b).

In some embodiments, the logic circuits (eg, see logic circuits 606 and 1006) may be further configured to receive output from the first cache set (eg, see cache sets 1610a and 1710a) when the logic circuits determine that the generated extension tag (eg, see extension tags 1650 and 1750) matches the first extension tag of the first cache set (eg, see extension tags 1640a and 1740a). The logic circuits may be further configured to receive output from the second cache set (eg, see cache sets 1610b and 1710b) when the logic circuits determine that the generated extension tag matches the second extension tag of the second cache set (eg, see extension tags 1640b and 1740b).

In some embodiments, the cache address of the first cache set includes the tags (eg, see tags 1622a, 1622b, 1722a, and 1722b) of the cache blocks (eg, see cache blocks 1624a, 1624b, 1724a, and 1724b) in the first cache set (eg, see cache sets 1610a and 1710a). In such an embodiment, the cache address of the second cache set includes the tags (eg, see tags 1626a, 1626b, 1726a, and 1726b) of the cache blocks (eg, see cache blocks 1628a, 1628b, 1728a, and 1728b) in the second cache set (eg, see cache sets 1610b and 1710b). Also, in such embodiments, a block index is generally used as an address within an individual cache set. For example, in such an embodiment, the logic circuits (eg, see logic circuits 606 and 1006) may be configured to use a first block index from the memory address (eg, see block indexes 106e and 106b from memory addresses 102e and 102b shown in FIGS. 16 and 17, respectively) to obtain the first cache block in the first cache set and the tag associated with the first cache block (eg, see cache blocks 1624a, 1624b, 1724a, and 1724b and correspondingly associated tags 1622a, 1622b, 1722a, and 1722b). Additionally, the logic circuits (eg, see logic circuits 606 and 1006) may be configured to use a second block index from the memory address (eg, see block indexes 106e and 106b from memory addresses 102e and 102b shown in FIGS. 16 and 17, respectively) to obtain the second cache block in the second cache set and the tag associated with the second cache block (eg, see cache blocks 1628a, 1628b, 1728a, and 1728b and correspondingly associated tags 1626a, 1626b, 1726a, and 1726b).

In some embodiments (eg, the embodiment illustrated in FIG. 16), when the first and second cache sets (eg, see cache sets 1610a and 1610b) are in a first state, the cache address of the first cache set (eg, see tags 1622a, 1622b, etc.) contains a first cache set indicator associated with the first cache set. The first cache set indicator may be a first cache set index. In such an embodiment, when the first and second cache sets are in the first state, the cache address of the second cache set (eg, see tags 1626a, 1626b, etc.) contains a second cache set indicator associated with the second cache set. The second cache set indicator may be a second cache set index.

Furthermore, in the embodiment shown in FIG. 16, when the first and second cache sets (eg, see cache sets 1610a and 1610b) are in a second state (not depicted in FIG. 16), the cache address of the first cache set contains the second cache set indicator associated with the second cache set. Additionally, when the first and second cache sets are in the second state, the cache address of the second cache set contains the first cache set indicator associated with the first cache set.
This change in content within a cache address may enable interchangeability between cache sets.

In the embodiment shown in FIG. 16, the cache set indicator is repeated in the tag of each cache block in a cache set; thus, each cache block has a longer tag than the cache blocks in the cache sets depicted in FIG. 17. In FIG. 17, instead of repeating the cache set index in the tag of each cache block, the set index is stored in a cache set register (eg, see registers 1712a and 1712b) associated with the cache set.

In some embodiments (eg, the embodiment illustrated in FIG. 17), when the first and second cache sets (eg, see cache sets 1710a and 1710b) are in the first state, the cache address of the first cache set (eg, see tags 1722a, 1722b, etc.) may not contain a first cache set indicator associated with the first cache set. Instead, the first cache set indicator is shown stored in the first cache set register 1712a (eg, see first cache set index 1732a stored in cache set register 1712a). This may reduce the size of the tags of the cache blocks in the first cache set, since the cache set indicator is stored in a register associated with the first cache set. Furthermore, when the first and second cache sets are in the first state, the cache address of the second cache set (eg, see tags 1726a, 1726b, etc.) may not contain a second cache set indicator associated with the second cache set. Instead, the second cache set indicator is shown stored in the second cache set register 1712b (eg, see second cache set index 1732b stored in cache set register 1712b). This may reduce the size of the tags of the cache blocks in the second cache set, since the cache set indicator is stored in a register associated with the second cache set.

Furthermore, in the embodiment shown in FIG. 17, when the first and second cache sets (eg, see cache sets 1710a and 1710b) are in the second state (not depicted in FIG. 17), the cache address of the first cache set (eg, see tags 1722a, 1722b, etc.) may not contain the second cache set indicator associated with the second cache set. Instead, the second cache set indicator is stored in the first cache set register 1712a. Additionally, when the first and second cache sets are in the second state, the cache address of the second cache set (eg, see tags 1726a, 1726b, etc.) may not contain the first cache set indicator associated with the first cache set. Instead, the first cache set indicator is stored in the second cache set register 1712b. This change in the contents of the cache set registers may enable interchangeability between cache sets.

In some embodiments, as shown in FIG. 17, when the first and second registers (eg, see registers 1712a and 1712b) are in the first state, the content stored in the first register (eg, see register 1712a) may contain a first cache set index (eg, see cache set index 1732a) associated with the first cache set (eg, see cache set 1710a). Also, the content stored in the second register (eg, see register 1712b) may contain a second cache set index (eg, see cache set index 1732b) associated with the second cache set (eg, see cache set 1710b). In such an embodiment, although not depicted in FIG. 17, when the first and second registers are in the second state, the content stored in the first register may include the second cache set index associated with the second cache set, and the content stored in the second register may include the first cache set index associated with the first cache set.

In some embodiments (eg, the embodiment shown in FIG. 16 and embodiments with, for example, a connection to an execution type signal line identifying the execution type), the cache system (eg, see cache system 1000) may further include a connection (eg, see connection 1002) to a speculative state signal line (eg, see speculative state signal line 1004) from a processor (eg, see processor 1001) that identifies the state of speculative execution of instructions by the processor. In such an embodiment, the connection to the speculative state signal line may be configured to receive the state of the speculative execution. The state of the speculative execution may indicate whether the results of the speculative execution will be accepted or rejected. When the execution type changes from speculative execution to non-speculative execution and the results of the speculative execution are to be accepted, the logic circuits may be configured to change the state of the first and second cache sets (eg, see cache sets 1610a and 1610b). And, when the execution type changes from speculative execution to non-speculative execution and the results of the speculative execution are to be rejected, the logic circuits may be configured to maintain the state of the first and second cache sets (eg, see cache sets 1610a and 1610b) unchanged.

Somewhat similarly, in some embodiments (eg, the embodiment shown in FIG. 17 and embodiments with, for example, a connection to an execution type signal line identifying the execution type), the cache system may further include a connection to a speculative state signal line from the processor that identifies the state of speculative execution of instructions by the processor. In such an embodiment, the connection to the speculative state signal line may be configured to receive the state of the speculative execution. The state of the speculative execution may indicate whether the results of the speculative execution will be accepted or rejected.
When the execution type changes from speculative execution to non-speculative execution and the results of the speculative execution are to be accepted, the logic circuits may be configured to change the state of the first and second cache sets (eg, see cache sets 1710a and 1710b) by changing the state of the first and second registers (eg, see registers 1712a and 1712b). Also, when the execution type changes from speculative execution to non-speculative execution and the results of the speculative execution are to be rejected, the logic circuits may be configured to maintain the state of the first and second registers (eg, see registers 1712a and 1712b) unchanged.

In some embodiments, a cache system may include a plurality of cache sets including a first cache set and a second cache set. The cache system may also include a plurality of registers respectively associated with the plurality of cache sets, the plurality of registers including a first register associated with the first cache set and a second register associated with the second cache set. The cache system may further include a connection to a command bus coupled between the cache system and a processor, a connection to an address bus coupled between the cache system and the processor, and logic circuitry coupled to the processor to control the plurality of cache sets according to the plurality of registers. The logic circuitry may be configured to generate a first extension tag from a cache address of the first cache set and the content stored in the first register, and to generate a second extension tag from a cache address of the second cache set and the content stored in the second register. The logic circuitry may also be configured to determine whether the first extension tag of the first cache set or the second extension tag of the second cache set matches a generated extension tag generated from a memory address received from the processor.
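The accept/reject register behavior described above can be read as a conditional swap when speculation resolves. The following Python sketch is an illustration under assumed names (the function, flag, and register contents are not from the disclosure): accepting the speculative results swaps the contents of the two cache set registers, so the former shadow set becomes the main set; rejecting them leaves the registers, and thus the mapping, unchanged.

```python
def resolve_speculation(reg_a, reg_b, results_accepted):
    """On a change from speculative to non-speculative execution:
    if the speculative results are accepted, swap the contents of the
    two cache set registers (the shadow set becomes the main set);
    if rejected, maintain the registers unchanged."""
    if results_accepted:
        reg_a, reg_b = reg_b, reg_a
    return reg_a, reg_b

# First state: register A maps to the main set, register B to the shadow set.
reg_a = {"exec_type": 1, "set_index": 0}   # eg, like register 1712a
reg_b = {"exec_type": 0, "set_index": 1}   # eg, like register 1712b

# Speculation accepted: the registers change state (second state).
a2, b2 = resolve_speculation(reg_a, reg_b, results_accepted=True)

# Speculation rejected: the registers are maintained unchanged.
a3, b3 = resolve_speculation(reg_a, reg_b, results_accepted=False)
```

Only register contents move in this sketch; no cache block data is copied, which is the point of making the cache sets interchangeable.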
Also, the logic circuitry may be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated extension tag matching the first extension tag, and to implement the command via the second cache set in response to the generated extension tag matching the second extension tag.

In such an embodiment, the cache system may also include a connection to an address bus coupled between the cache system and the processor. When the connection to the address bus receives the memory address from the processor, the logic circuitry may be configured to generate the extension tag from at least the memory address. Additionally, the cache system may include a connection to an execution type signal line from the processor that identifies the execution type. In such an instance, the logic circuitry may be configured to generate the extension tag from the memory address and the execution type identified by the execution type signal line. Furthermore, the content stored in each of the first register and the second register may contain an execution type.

Additionally, to determine whether the generated extension tag matches the first extension tag of the first cache set or the second extension tag of the second cache set, the logic circuitry may be configured to: compare the first extension tag with the generated extension tag to determine a cache hit or miss for the first cache set; and compare the second extension tag with the generated extension tag to determine a cache hit or miss for the second cache set.

Further, the logic circuitry may be configured to: receive output from the first cache set when the logic circuitry determines that the generated extension tag matches the first extension tag of the first cache set; and receive output from the second cache set when the logic circuitry determines that the generated extension tag matches the second extension tag of the second cache set. In this and other embodiments, the cache address of the first cache set may contain a first tag of a cache block in the first cache set, and the cache address of the second cache set may contain a second tag of a cache block in the second cache set.

In some embodiments, a cache system may include a plurality of cache sets including a first cache set and a second cache set. The cache system may also include a plurality of registers respectively associated with the plurality of cache sets, the plurality of registers including a first register associated with the first cache set and a second register associated with the second cache set. Also, the cache system may include a connection to a command bus coupled between the cache system and a processor, a connection to an execution type signal line from the processor that identifies the execution type, a connection to an address bus coupled between the cache system and the processor, and logic circuitry coupled to the processor to control the plurality of cache sets according to the plurality of registers. When the connection to the address bus receives a memory address from the processor, the logic circuitry may be configured to: generate an extension tag from the memory address and the execution type identified by the execution type signal line; and determine whether the generated extension tag matches the first extension tag of the first cache set or the second extension tag of the second cache set.
Further, the logic circuitry may be configured to implement the command received in the connection to the command bus via the first cache set in response to the generated extension tag matching the first extension tag, and to implement the command via the second cache set in response to the generated extension tag matching the second extension tag.

FIG. 18 shows example aspects of an example computing device that includes a cache system (eg, see cache systems 600 and 1000 shown in FIGS. 6 and 10, respectively) having interchangeable cache sets (eg, see cache sets 1810a, 1810b, and 1810c), in which physical cache set outputs (eg, see physical outputs 1820a, 1820b, and 1820c) are mapped to logical cache set outputs (eg, see logical outputs 1840a, 1840b, and 1840c) using mapping circuitry 1830, in accordance with some embodiments of the present disclosure.

As shown, a cache system may include multiple cache sets (eg, see cache sets 1810a, 1810b, and 1810c). The plurality of cache sets includes a first cache set (eg, see cache set 1810a) configured to provide a first physical output (eg, see physical output 1820a) upon a cache hit, and a second cache set (eg, see cache set 1810b) configured to provide a second physical output (eg, see physical output 1820b) upon a cache hit. The cache system may also include a connection (eg, see connection 604a depicted in FIGS. 6 and 10) to a command bus (eg, see command bus 605a) coupled between the cache system and a processor (eg, see processors 601 and 1001). The cache system may also include a connection to an address bus (eg, see address bus 605b) coupled between the cache system and the processor.

As shown in FIG. 18, the cache system includes a control register 1832 (eg, a physical-to-logical-set-mapping (PLSM) register 1832) and mapping circuitry 1830 coupled to the control register to map the corresponding physical outputs (eg, see physical outputs 1820a, 1820b, and 1820c) of the multiple cache sets (eg, see cache sets 1810a, 1810b, and 1810c) to corresponding logical cache set outputs (eg, see logical outputs 1840a, 1840b, and 1840c) of a first logical cache (eg, a normal cache) and a second logical cache (eg, a shadow cache). The mapping of the physical outputs (eg, see physical outputs 1820a, 1820b, and 1820c) to the logical cache set outputs (eg, see logical outputs 1840a, 1840b, and 1840c) by the mapping circuitry 1830 is based on the state of the control register 1832. As shown in FIG. 18, at least logical outputs 1840a and 1840b are mapped to the first logical cache for a first type of execution, and at least logical output 1840c is mapped to the second logical cache for a second type of execution. Although not shown, the cache system may be configured to be coupled between the processor and a memory system (eg, see memory system 603).

When the connection to the address bus (eg, see address bus 605b) receives a memory address (eg, see memory address 102b) from the processor (eg, see processors 601 and 1001) and when the control register 1832 is in a first state (shown in FIG. 18), the mapping circuit 1830 may be configured to map the first physical output (eg, see physical output 1820a) to the first logical cache (eg, see logical output 1840a) for a first type of execution by the processor, to implement a command received from the command bus (eg, see command bus 605a) for accessing the memory system (eg, see memory system 603) via the first cache set (eg, cache set 1810a) during the first type of execution (eg, non-speculative execution).

Additionally, when the connection to the address bus (eg, see address bus 605b) receives a memory address (eg, see memory address 102b) from the processor and when the control register 1832 is in the first state (shown in FIG. 18), the mapping circuit 1830 may be configured to map the second physical output (eg, see physical output 1820b) to the second logical cache (eg, see logical output 1840b) for a second type of execution by the processor, to implement a command received from the command bus (eg, see command bus 605a) for accessing the memory system (eg, see memory system 603) via the second cache set (eg, cache set 1810b) during the second type of execution (eg, speculative execution).

When the connection to the address bus (eg, see address bus 605b) receives a memory address (eg, see memory address 102b) from the processor and when the control register 1832 is in a second state (not shown in FIG. 18), the mapping circuit 1830 is configured to map the first physical output (eg, see physical output 1820a) to the second logical cache (eg, see logical output 1840b), to implement a command received from the command bus (eg, see command bus 605a) for accessing the memory system (eg, see memory system 603) via the first cache set (eg, cache set 1810a) during the second type of execution (eg, speculative execution).

Additionally, when the connection to the address bus (eg, see address bus 605b) receives a memory address (eg, see memory address 102b) from the processor and when the control register 1832 is in the second state (not shown in FIG. 18), the mapping circuit 1830 is configured to map the second physical output (eg, see physical output 1820b) to the first logical cache (eg, see logical output 1840a), to implement a command received from the command bus (eg, see command bus 605a) for accessing the memory system (eg, see memory system 603) via the second cache set (eg, cache set 1810b) during the first type of execution (eg, non-speculative execution).

In some embodiments, the first logical cache is a normal cache for non-speculative execution by the processor, and the second logical cache is a shadow cache for speculative execution by the processor.

Mapping circuit 1830 addresses issues related to execution types. Mapping circuit 1830 provides a solution to how execution types relate to the mapping of physical cache sets to logical cache sets. If mapping circuitry 1830 is used, a memory address (eg, see address 102b) may be applied to each cache set (eg, see cache sets 1810a, 1810b, and 1810c) to produce a physical output (eg, see physical outputs 1820a, 1820b, and 1820c). The physical outputs (eg, see physical outputs 1820a, 1820b, and 1820c) contain the tag and cache block found using the block index (eg, see block index 106b) of the memory address.
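The state-dependent routing performed by mapping circuit 1830 can be sketched as a small lookup. This Python sketch is illustrative only (two cache sets, one control state, and the names are assumptions, not the disclosed implementation): the control register state selects which physical output feeds each logical cache.

```python
def map_outputs(physical, control_state):
    """Mapping circuit sketch: route physical cache set outputs to
    logical cache set outputs based on the control (PLSM) register.
    State 0: physical[0] -> normal cache, physical[1] -> shadow cache.
    State 1: physical[0] -> shadow cache, physical[1] -> normal cache."""
    if control_state == 0:
        return {"normal": physical[0], "shadow": physical[1]}
    return {"normal": physical[1], "shadow": physical[0]}

# Physical outputs of two interchangeable cache sets (eg, like 1820a, 1820b).
phys = ["data from set 1810a", "data from set 1810b"]

first_state = map_outputs(phys, control_state=0)   # set 1810a is the main set
second_state = map_outputs(phys, control_state=1)  # sets swap roles
```

Flipping the control state exchanges the roles of the two sets without moving any cached data, which is what makes the sets interchangeable.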
Mapping circuit 1830 may reroute each physical output (eg, see physical outputs 1820a, 1820b, and 1820c) to one of the logical outputs (eg, see logical outputs 1840a, 1840b, and 1840c). The cache system can perform tag comparisons at the physical outputs or at the logical outputs. If tag comparisons are made at the physical outputs, the tag hit or miss for each physical output is routed through the mapping circuit 1830 to produce a hit or miss for the corresponding logical output. Otherwise, the tags themselves are routed through the mapping circuit 1830, and tag comparisons are made at the logical outputs to produce the corresponding tag hit or miss results.

As illustrated in FIG. 18, the logical outputs are predefined for speculative and non-speculative execution. Thus, the current execution type (eg, see execution type 110e) can be used to select which portion of the logical outputs will be used. For example, since logical output 1840c is predefined for speculative execution in FIG. 18, its result may be discarded if the current execution type is normal execution. Otherwise, if the current execution type is speculative, the results from the first portion of the logical outputs in FIG. 18 (eg, outputs 1840a and 1840b) may be blocked.

In the embodiment shown in FIG. 18, if the current execution type is speculative, a hit or miss result from a logical output for non-speculative execution may be ANDed with '0' to force a cache 'miss', and a hit or miss result from a logical output for speculative execution may be ANDed with '1' to keep the result unchanged. Execution type 110e may be configured such that speculative execution = 0 and non-speculative execution = 1, and the tag hit or miss results from the non-speculative outputs 1840a-1840b may be ANDed with the execution type (eg, execution type 110e) to produce hits or misses that take both the matching tags and the execution type into consideration.
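This AND-gating can be sketched in a few lines of Python. The sketch is illustrative (one normal output and one shadow output; the names are assumptions), but it follows the encoding stated above: speculative execution = 0, non-speculative execution = 1.

```python
def gated_hits(hit_normal, hit_shadow, exec_type):
    """Gate raw tag hit/miss results by execution type.
    exec_type: 0 = speculative execution, 1 = non-speculative execution.
    Non-speculative (normal) logical outputs are ANDed with exec_type,
    so they are forced to miss during speculative execution; the
    speculative (shadow) output is ANDed with the inverse of exec_type."""
    normal = hit_normal & exec_type
    shadow = hit_shadow & (exec_type ^ 1)   # inverse of a 1-bit value
    return normal, shadow

# During speculative execution (exec_type = 0), a hit in the normal
# logical cache is suppressed and a hit in the shadow cache passes.
n, s = gated_hits(hit_normal=1, hit_shadow=1, exec_type=0)
```

The gated result thus reflects both the tag match and the current execution type, as described for outputs 1840a-1840c.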
Also, the tag hit or miss result from logical output 1840c may be ANDed with the inverse of execution type 110e to produce a hit or miss.

FIGS. 19 and 20 show example aspects of an example computing device that includes a cache system (eg, see cache systems 600 and 1000 shown in FIGS. 6 and 10, respectively) having interchangeable cache sets (eg, see the cache sets depicted in FIGS. 18-21), in which physical cache set outputs (eg, see physical outputs 1820a, 1820b, and 1820c depicted in FIG. 18 and physical output 1820a shown in FIG. 19) are mapped to logical cache set outputs (eg, see logical outputs 1840a, 1840b, and 1840c) using the circuit shown in FIG. 18 (mapping circuit 1830), in accordance with some embodiments of the present disclosure.

In particular, FIG. 19 shows the first cache set 1810a, the first cache set register 1812a, the tag 1815a of the first cache set (which contains the current tag and the cache set index), the tag and set index 1850 from address 102b (which contains the current tag 104b and the current cache set index 112b from memory address 102b), and the tag comparison circuit 1860a for the first cache set 1810a. In addition, FIG. 19 shows the first cache set 1810a with cache blocks and associated tags (eg, see cache blocks 1818a and 1818b, and tags 1816a and 1816b), and the first cache set register 1812a holding the cache set index 1813a for the first cache set. Additionally, FIG. 19 shows the tag comparison circuit 1860b for the second cache set 1810b. The figure shows the physical output 1820a being output from the first cache set 1810a to the mapping circuit 1830.
The second cache set 1810b and other cache sets of the system may also provide their corresponding physical outputs to the mapping circuit 1830 (although this is not depicted in FIG. 19).

FIG. 20 shows an example of the system's multiple cache sets providing physical outputs to the mapping circuit 1830 (eg, see physical outputs 1820a, 1820b, and 1820c provided by cache sets 1810a, 1810b, and 1810c, respectively, as shown in FIG. 20). FIG. 20 also depicts portions of the mapping circuit 1830 (eg, see multiplexers 2004a, 2004b, and 2004c and PLSM registers 2006a, 2006b, and 2006c). FIG. 20 also shows the first cache set 1810a having at least cache blocks 1818a and 1818b and associated tags 1816a and 1816b. Also, the second cache set 1810b is shown with at least cache blocks 1818c and 1818d and associated tags 1816c and 1816d.

FIG. 19 also shows multiplexers 1904a and 1904b and PLSM registers 1906a and 1906b, which may be part of the logic circuits (eg, see logic circuits 606 and 1006) and/or the mapping circuit (eg, see mapping circuit 1830). Each of multiplexers 1904a and 1904b receives at least hit or miss results 1862a and 1862b from tag comparison circuits 1860a and 1860b, which each compare the corresponding tag of a cache set (eg, see the tag 1815a of the first cache set) with the tag and set index from the memory address (eg, see tag and set index 1850). In some examples, an equivalent multiplexer may exist for each tag comparison for each cache set of the system. Each of the multiplexers (eg, see multiplexers 1904a and 1904b) may output the selected hit or miss result based on the state of the multiplexer's corresponding PLSM register (eg, see PLSM registers 1906a and 1906b). The PLSM registers that control the selection of the multiplexers for outputting cache hits or misses from cache set comparisons may be controlled by the main PLSM register (eg, control register 1832) when such a register is part of the mapping circuit 1830.

In some embodiments, each of the PLSM registers (eg, see PLSM registers 1906a and 1906b, and PLSM registers 2110a, 2110b, and 2110c depicted in FIG. 21) may be a one-bit, two-bit, or three-bit register, or a register of any bit length, depending on the particular implementation. Such a PLSM register may be used (eg, by a multiplexer) to select the appropriate physical tag comparison result, or to output the correct hit or miss result for one of the logical units.

In the case of the PLSM registers 2006a, 2006b, and 2006c depicted in FIG. 20, such registers may be used (eg, by a multiplexer) to select the appropriate physical outputs (eg, see physical outputs 1820a, 1820b, and 1820c shown in FIG. 20) of the cache sets (eg, see cache sets 1810a, 1810b, and 1810c as shown in FIG. 20). Such PLSM registers may also each be a one-bit, two-bit, or three-bit register, or a register of any bit length, depending on the particular implementation. Furthermore, the control register 1832 may be a one-bit, two-bit, or three-bit register, or a register of any bit length, depending on the particular implementation.

In some embodiments, selecting a physical output from a cache set, or selecting a cache hit or miss, is performed by multiplexers (eg, see multiplexers 1904a and 1904b shown in FIG. 19, multiplexers 2004a, 2004b, and 2004c shown in FIG. 20, and multiplexers 2110a, 2110b, and 2110c shown in FIG. 21), which are arranged in the system with at least one multiplexer per output type and per logical unit or per cache set. As shown in the figures, in some embodiments, where there are n cache sets or logical comparison units, there are n n-to-1 multiplexers.

As shown in FIG. 19, a computing device may include a first multiplexer (eg, multiplexer 1904a) configured to output, to the processor, the first hit or miss result or the second hit or miss result (eg, see hit or miss outputs 1862a and 1862b as shown in FIG. 19) in response to the content received by the first PLSM register (eg, PLSM register 1906a). The computing device may also include a second multiplexer (eg, multiplexer 1904b) configured to output, to the processor, the second hit or miss result or the first hit or miss result (eg, see hit or miss outputs 1862b and 1862a as shown in FIG. 19) in response to the content received by the second PLSM register (eg, PLSM register 1906b).

In some embodiments, the contents of the PLSM registers may be received from a control register, such as control register 1832 shown in FIG. 18. For example, in some embodiments, the first multiplexer outputs the first hit or miss result when the content received by the first PLSM register indicates a first state, and outputs the second hit or miss result when the content received by the first PLSM register indicates a second state. Additionally, the second multiplexer may output the second hit or miss result when the content received by the second PLSM register indicates the first state. Also, the second multiplexer may output the first hit or miss result when the content received by the second PLSM register indicates the second state.

As shown in FIG. 20, the computing device may include a first multiplexer (eg, multiplexer 2004a) configured to output, to the processor, the first physical output 1820a of the first cache set or the second physical output 1820b of the second cache set in response to the content received by the first PLSM register (eg, PLSM register 2006a).
The computing device may include a second multiplexer (eg, multiplexer 2004b) configured to output, according to the content received by a second PLSM register (eg, PLSM register 2006b), the first physical output 1820a of the first cache set or the second physical output 1820b of the second cache set to the processor. In some embodiments, the contents of the PLSM registers may be received from a control register, such as control register 1832 shown in FIG. 18. For example, in some embodiments, the first multiplexer outputs the first physical output 1820a when the content received by the first PLSM register indicates the first state, and outputs the second physical output 1820b when the content received by the first PLSM register indicates the second state. Additionally, the second multiplexer may output the second physical output 1820b when the content received by the second PLSM register indicates the first state, and may output the first physical output 1820a when the content received by the second PLSM register indicates the second state. In some embodiments, block selection may be based on a combination of the block index and a main or shadow setting; this parameter controls the PLSM registers. In some embodiments (eg, the examples shown in FIGS. 19 and 20), only one address (eg, tag and index) is fed into the interchangeable cache sets (eg, cache sets 1810a, 1810b, and 1810c). In such an embodiment, if a cache set produces a miss, a signal indicates which cache set is to be updated according to memory control. When the cache sets are in the first state, the multiplexer 1904a is controlled by the PLSM register 1906a to provide the hit or miss output of cache set 1810a, and thus the hit or miss status of the cache set for main or normal execution.
When the cache sets are in the first state, the multiplexer 1904b is controlled by the PLSM register 1906b to provide the hit or miss output of cache set 1810b, and thus the hit or miss status of the cache set for speculative execution. On the other hand, when the cache sets are in the second state, the multiplexer 1904a is controlled by the PLSM register 1906a to provide the hit or miss output of cache set 1810b, and thus the hit or miss status of the cache set for main or normal execution. When the cache sets are in the second state, the multiplexer 1904b is controlled by the PLSM register 1906b to provide the hit or miss output of cache set 1810a, and thus the hit or miss status of the cache set for speculative execution. Similar to the selection of a hit or miss signal, the data looked up from the interchangeable caches can be selected to produce a result for the processor (eg, in the presence of a hit); see, for example, physical outputs 1820a, 1820b, and 1820c shown in FIG. 20. For example, in the first state of the cache sets, when cache set 1810a is used as the main cache set and cache set 1810b is used as the shadow cache set, the multiplexer 2004a is controlled by the PLSM register 2006a to select the physical output 1820a of cache set 1810a of the main or normal logical cache for non-speculative execution. Also, for example, in the second state of the cache sets, when cache set 1810b is used as the main cache set and cache set 1810a is used as the shadow cache set, the multiplexer 2004a is controlled by the PLSM register 2006a to select the physical output 1820b of cache set 1810b of the main or normal logical cache for non-speculative execution.
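The state-dependent selection performed by multiplexers such as 1904a and 1904b can be sketched as follows. This is an illustrative software model only, not the patented circuit; the function and state names are hypothetical, and the model assumes exactly two interchangeable cache sets whose roles swap between the two states.

```python
# Hypothetical model of PLSM-register-controlled selection: in the first
# state, set A serves main/normal execution and set B serves speculative
# execution; in the second state the roles are swapped.
FIRST_STATE, SECOND_STATE = 0, 1

def mux_hit_miss(plsm_state, hit_a, hit_b, for_speculative):
    """Select which cache set's hit/miss output reaches the processor."""
    if plsm_state == FIRST_STATE:
        return hit_b if for_speculative else hit_a
    return hit_a if for_speculative else hit_b

# First state: main execution sees set A's result.
assert mux_hit_miss(FIRST_STATE, True, False, for_speculative=False) is True
# Second state: main execution now sees set B's result.
assert mux_hit_miss(SECOND_STATE, True, False, for_speculative=False) is False
```

The same selection pattern applies to the physical outputs (eg, outputs 1820a and 1820b selected by multiplexers 2004a and 2004b): only the selector state changes, not the cache contents.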
In this instance, in the first state of the cache sets, when cache set 1810a is used as the main cache set and cache set 1810b is used as the shadow cache set, the multiplexer 2004b is controlled by the PLSM register 2006b to select the physical output 1820b of cache set 1810b of the shadow logical cache for speculative execution. Also, for example, in the second state of the cache sets, when cache set 1810b is used as the main cache set and cache set 1810a is used as the shadow cache set, the multiplexer 2004b is controlled by the PLSM register 2006b to select the physical output 1820a of cache set 1810a of the shadow logical cache for speculative execution. In some embodiments, the cache system may further include multiple registers (eg, see register 1812a shown in FIG. 19) respectively associated with multiple cache sets (eg, see cache sets 1810a, 1810b, and 1810c shown in FIGS. 18-21). The registers may include a first register (eg, see register 1812a) associated with a first cache set (eg, see cache set 1810a), and a second register associated with a second cache set (eg, see cache set 1810b); the second register is not depicted in FIGS. 18 to 21 but is depicted in FIGS. 6 and 10. The cache system may also include logic circuits (eg, see logic circuits 606 and 1006) coupled to the processor (eg, see processors 601 and 1001) to control the multiple cache sets via the multiple registers. When a connection (eg, see connection 604b) to an address bus (eg, see address bus 605b) receives a memory address from the processor, the logic circuit may be configured to generate a set index from at least the memory address and determine whether the generated set index matches the content stored in the first register or the content stored in the second register.
Also, the logic circuit may be configured to implement, via the first cache set, a command received from a connection (eg, see connection 604a) to a command bus (eg, see command bus 605a) in response to the generated set index matching the content stored in the first register, and to implement the command via the second cache set in response to the generated set index matching the content stored in the second register. In some embodiments, a mapping circuit (eg, see mapping circuit 1830) may be part of, or connected to, the logic circuit, and the state of a control register (eg, see control register 1832) may control the state of a cache set of the plurality of cache sets. In some embodiments, the state of the control register may control the state of a cache set of the plurality of cache sets by changing the valid bits of each block of the cache set (eg, see FIGS. 21-23). Additionally, in some examples, the cache system may further include a connection (eg, see connection 1002) to a speculative state signal line (eg, see speculative state signal line 1004) from the processor that identifies the state of speculative execution of instructions by the processor. The connection to the speculative state signal line may be configured to receive the state of the speculative execution, and the state of the speculative execution may indicate whether the results of the speculative execution will be accepted or rejected. When the execution type is changed from speculative execution to non-speculative execution, the logic circuit (eg, see logic circuits 606 and 1006) may be configured to change, via the control register (eg, see control register 1832), the state of the first and second cache sets if the state of the speculative execution indicates that the results of the speculative execution will be accepted.
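The set-index matching just described can be sketched as a small software model. This is a hedged illustration, not the patented logic circuit: the function name, block size, and index width are all assumptions chosen for the example, and real hardware would derive the set index from address bit fields rather than arithmetic.

```python
# Illustrative model: derive a set index from a memory address, then
# dispatch to whichever cache set's register holds a matching index.
def select_cache_set(address, set_registers, block_size=64, num_indexes=4):
    """Return the position of the cache set whose associated register
    matches the set index generated from `address`, or None on no match."""
    set_index = (address // block_size) % num_indexes
    for position, stored_index in enumerate(set_registers):
        if stored_index == set_index:
            return position
    return None  # no register matches: command goes to neither mapped set

# Registers of two cache sets hold set indexes 1 and 3.
assert select_cache_set(0x40, [1, 3]) == 0   # (0x40 // 64) % 4 == 1
assert select_cache_set(0xC0, [1, 3]) == 1   # (0xC0 // 64) % 4 == 3
```

A command received on the command bus would then be implemented via the cache set at the returned position, mirroring the first-register/second-register dispatch described above.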
Also, when the execution type is changed from speculative execution to non-speculative execution, the logic circuit may be configured to maintain the state of the first and second cache sets unchanged, via the control register, if the state of the speculative execution indicates that the results of the speculative execution will be rejected. In some embodiments, a mapping circuit (eg, see mapping circuit 1830) is part of, or connected to, a logic circuit (eg, see logic circuits 606 and 1006), and the state of a control register (eg, see control register 1832) may control, via the mapping circuit, the state of a cache register of a plurality of cache registers (eg, see register 1812a shown in FIG. 19). In such an example, the cache system may further include a connection (eg, see connection 1002) to a speculative state signal line (eg, see speculative state signal line 1004) from the processor that identifies the state of speculative execution of instructions by the processor. The connection to the speculative state signal line may be configured to receive the state of the speculative execution, and the state of the speculative execution indicates whether the results of the speculative execution will be accepted or rejected. When the execution type is changed from speculative execution to non-speculative execution, the logic circuit may be configured to change the states of the first and second registers via the control register if the state of the speculative execution indicates that the results of the speculative execution will be accepted.
Also, when the execution type is changed from speculative execution to non-speculative execution, the logic circuit may be configured to maintain the states of the first and second registers unchanged, via the control register, if the state of the speculative execution indicates that the results of the speculative execution will be rejected. FIG. 21 shows example aspects of an example computing device having a cache system with interchangeable cache sets, such as the cache sets shown in FIG. 18, including cache sets 1810a, 1810b, and 1810c, in accordance with some embodiments of the present disclosure. The cache sets (eg, cache sets 1810a, 1810b, and 1810c) are shown utilizing the circuit shown in FIG. 18 (mapping circuit 1830) to map physical cache set outputs to logical cache set outputs. The portion depicted in FIG. 21 is part of a computing device comprising a memory (eg, main memory), a processor (eg, see processor 1001), and at least three interchangeable cache sets (see, eg, interchangeable cache sets 1810a, 1810b, and 1810c). The processor is configured to execute a main thread and a speculative thread. As shown in FIG. 21, a first cache set (eg, cache set 1810a) may be coupled between the memory and the processor, and may include, in a first state of the cache sets, a first plurality of blocks for the main thread (see, eg, blocks 2101a, 2101b, and 2101c shown in FIG. 21). Each block of the first plurality of blocks may include cached data, a first valid bit, and a block address including an index and a tag. Also, the processor (alone or in conjunction with a cache controller) may be configured to change each first valid bit from indicating valid to indicating invalid when the speculative thread's speculation is successful, such that, in a second state of the cache sets, the first plurality of blocks become accessible for the speculative thread and blocked for the main thread. As shown in FIG.
21, a second cache set (eg, cache set 1810b) may be coupled between the main memory and the processor, and may include, in the first state of the cache sets, a second plurality of blocks for the speculative thread (see, eg, blocks 2101d, 2101e, and 2101f shown in FIG. 21). Each block of the second plurality of blocks may include cached data, a second valid bit, and a block address including an index and a tag. Also, the processor (alone or in conjunction with the cache controller) may be configured to change each second valid bit from indicating invalid to indicating valid when the speculative thread's speculation is successful, such that, in the second state of the cache sets, the second plurality of blocks become accessible for the main thread and blocked for the speculative thread. In some embodiments, as shown in FIG. 21, blocks in the first plurality of blocks may correspond to corresponding blocks in the second plurality of blocks. Also, blocks in the first plurality of blocks may correspond to corresponding blocks in the second plurality of blocks by having the same block addresses as the corresponding blocks in the second plurality of blocks. Furthermore, as shown in FIG. 21, the computing device may include a first physical-to-logical set mapping (PLSM) register (eg, PLSM register 2108a) configured to receive a first valid bit of a block of the first plurality of blocks. The first valid bit may indicate the validity of the cached data of the block of the first plurality of blocks. It may also indicate whether a block of the first plurality of blocks or a corresponding block of the second plurality of blocks is used in the main thread. Furthermore, as shown in FIG. 21, the computing device may include a second PLSM register (eg, PLSM register 2108b) configured to receive a second valid bit of a block of the second plurality of blocks.
The second valid bit may indicate the validity of the cached data of the block of the second plurality of blocks. It may also indicate whether a block of the second plurality of blocks or a corresponding block of the first plurality of blocks is used in the main thread. Furthermore, as shown in FIG. 21, the computing device may include a logic unit 2104a for the first cache set, the logic unit 2104a being configured to determine whether a block in the first plurality of blocks is a hit or a miss. Logic unit 2104a is shown including a comparator 2106a and an AND gate 2107a. Comparator 2106a may determine whether there is a match between the tag of the block and the corresponding tag of the address in memory. If the tags match and the valid bit of the block indicates valid, AND gate 2107a outputs an indication of a block hit; otherwise, AND gate 2107a outputs an indication of a block miss. In other words, the logic unit 2104a for the first cache set is configured to output a first hit or miss result according to the determination at the logic unit. Furthermore, as shown in FIG. 21, the computing device may include a logic unit 2104b for the second cache set, the logic unit 2104b being configured to determine whether a block in the second plurality of blocks is a hit or a miss. Logic unit 2104b is shown including a comparator 2106b and an AND gate 2107b. Comparator 2106b may determine whether there is a match between the tag of the block and the corresponding tag of the address in memory. If the tags match and the valid bit of the block indicates valid, AND gate 2107b outputs an indication of a block hit; otherwise, AND gate 2107b outputs an indication of a block miss. In other words, the logic unit 2104b for the second cache set is configured to output a second hit or miss result according to the determination at the logic unit. Furthermore, as shown in FIG.
21, the computing device may include a first multiplexer (eg, multiplexer 2110a) configured to output, according to the first valid bit received by the first PLSM register, the first hit or miss result or the second hit or miss result to the processor. The computing device may also include a second multiplexer (eg, multiplexer 2110b) configured to output, according to the second valid bit received by the second PLSM register, the second hit or miss result or the first hit or miss result to the processor. In some embodiments, the first multiplexer outputs the first hit or miss result when the first valid bit received by the first PLSM register indicates valid, and outputs the second hit or miss result when the first valid bit received by the first PLSM register indicates invalid. Furthermore, the second multiplexer outputs the second hit or miss result when the second valid bit received by the second PLSM register indicates valid, and outputs the first hit or miss result when the second valid bit received by the second PLSM register indicates invalid. In some embodiments, block selection may be based on a combination of the block index and a main or shadow setting. In some embodiments, only one address (eg, tag and index) is fed into the interchangeable cache sets (eg, cache sets 1810a, 1810b, and 1810c). In such an embodiment, if a cache set produces a miss, a signal indicates which cache set is to be updated according to memory control. Similar to the selection of a hit or miss signal, the data found in the interchangeable caches may be selected to produce a result for the processor (eg, in the presence of a hit).
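The per-set logic unit (comparator plus AND gate) and the valid-bit-driven multiplexer selection described above can be sketched together. This is an illustrative software model under stated assumptions, not the hardware of FIG. 21; the function names are invented for the example, and a boolean stands in for each one-bit signal.

```python
# Model of logic units 2104a/2104b: a comparator feeding an AND gate.
def logic_unit(block_tag, block_valid, addr_tag):
    """Hit only when the tags match AND the block's valid bit is set."""
    return (block_tag == addr_tag) and block_valid

# Model of multiplexers 2110a/2110b: forward the own set's result when
# the received valid bit indicates valid, the other set's otherwise.
def mux_by_valid_bit(valid_bit, own_result, other_result):
    return own_result if valid_bit else other_result

hit_a = logic_unit(block_tag=0x2A, block_valid=True, addr_tag=0x2A)
hit_b = logic_unit(block_tag=0x2A, block_valid=False, addr_tag=0x2A)
assert hit_a is True and hit_b is False
# With set A's valid bit set, the main-thread mux forwards set A's hit.
assert mux_by_valid_bit(True, hit_a, hit_b) is True
```

Note how the valid bit plays a double role here, exactly as in the description: it gates the AND gate inside each logic unit and simultaneously steers the multiplexer between the two sets.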
For example, in the first state of the cache sets, if cache set 1810a is used as the main cache set and cache set 1810b is used as the shadow cache set, the multiplexer 2110a is controlled by the PLSM register 2108a to select the hit or miss output of cache set 1810a, and thus the hit or miss status of the main cache set. Also, multiplexer 2110b is controlled by PLSM register 2108b to provide the hit or miss output of cache set 1810b, and thus the hit or miss status of the shadow cache set. In such an embodiment, when the cache sets are in the second state, with cache set 1810a used as the shadow cache and cache set 1810b used as the main cache, the multiplexer 2110a may be controlled by the PLSM register 2108a to select the hit or miss output of cache set 1810b, and thus the hit or miss status of the main cache. Also, the multiplexer 2110b may be controlled by the PLSM register 2108b to provide the hit or miss output of cache set 1810a, and thus the hit or miss status of the shadow cache. Thus, multiplexer 2110a may output whether the main cache has a cache hit or miss for the address, and multiplexer 2110b may output whether the shadow cache has a cache hit or miss for the same address. Then, depending on whether the access is speculative or not, one of the outputs can be selected. When there is a cache miss, the address is used in memory to load the data into the corresponding cache. The PLSM registers may similarly implement updates to the corresponding cache set 1810a or cache set 1810b. In some embodiments, in the first state of the cache sets, during speculative execution of a first instruction by the speculative thread, the effects of the speculative execution are stored in the second cache set (eg, cache set 1810b). During speculative execution of the first instruction, the processor may be configured to assert a signal indicative of speculative execution, which is configured to block changes to the first cache set (eg, cache set 1810a).
When the signal is asserted by the processor, the processor may be further configured to block the second cache set (eg, cache set 1810b) from updating the memory. When the state of the cache sets changes to the second state, the second cache set (rather than the first cache set) is used with the first instruction in response to determining that execution of the first instruction is to be performed by the main thread. In response to determining that execution of the first instruction will not be performed by the main thread, the first cache set is used with the first instruction. In some embodiments, in the first state, during speculative execution of the first instruction, the processor accesses memory via the second cache set (eg, cache set 1810b). Also, during speculative execution of one or more instructions, access to the contents of the second cache set is limited to speculative execution of the first instruction by the processor. During speculative execution of the first instruction, the processor may be inhibited from altering the first cache set (eg, cache set 1810a). In some embodiments, the contents of the first cache set (eg, cache set 1810a) and/or the second cache set (eg, cache set 1810b) may be accessible via a cache coherence protocol. FIGS. 22 and 23 show methods 2200 and 2300, respectively, of using interchangeable cache sets for speculative and non-speculative execution by a processor, according to some embodiments of the present disclosure. In particular, methods 2200 and 2300 may be performed by the computing device illustrated in FIG. 21. Furthermore, somewhat similar methods may be performed by any of the computing devices illustrated in FIGS. 18-20 and the computing devices disclosed herein; however, such computing devices would control the cache state, cache set state, or cache set register state via another parameter.
For example, in FIG. 16, the state of the cache sets is controlled via a cache set indicator within the tag of a block of the cache set. Also, for example, in FIG. 17, the state of the cache sets is controlled via the state of a cache set register associated with the cache set; in that instance, the state is controlled via a cache set index stored in the cache set register. On the other hand, for the embodiments disclosed in FIGS. 21 to 23, the state of the cache sets is controlled via the valid bits of the block addresses within the cache sets. The method 2200 includes, at block 2202, executing, by a processor (eg, processor 1001), a main thread and a speculative thread. The method 2200 includes, at block 2204, providing, in a first cache set (eg, cache set 1810a shown in FIG. 21) of a cache system coupled between the memory system and the processor, a first plurality of blocks (eg, blocks 2101a, 2101b, and 2101c depicted in FIG. 21) for the main thread. Each block of the first plurality of blocks may contain cached data, a first valid bit, and a block address with an index and a tag. The method 2200 includes, at block 2206, providing, in a second cache set (eg, cache set 1810b) of the cache system coupled between the memory system and the processor, a second plurality of blocks (eg, blocks 2101d, 2101e, and 2101f) for the speculative thread. Each block of the second plurality of blocks may contain cached data, a second valid bit, and a block address with an index and a tag. At block 2207, the method 2200 continues with the processor identifying whether the speculative thread's speculation was successful and, if so, making the first plurality of blocks accessible for the speculative thread and blocked for the main thread, and making the second plurality of blocks accessible for the main thread and blocked for the speculative thread.
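The valid-bit swap performed at block 2207 and the subsequent per-bit changes can be sketched as a minimal model. This is a hedged illustration of the control flow only, assuming two cache sets represented as lists of boolean valid bits; the function name is hypothetical and nothing here models the actual cache storage.

```python
# Minimal model of the success/failure paths of method 2200: on successful
# speculation, every first valid bit flips valid -> invalid and every
# second valid bit flips invalid -> valid, swapping the sets' roles.
def commit_speculation(first_valid_bits, second_valid_bits, success):
    if not success:  # failure path: bits retain their prior validity values
        return first_valid_bits, second_valid_bits
    return ([False] * len(first_valid_bits),   # first set now blocked for main
            [True] * len(second_valid_bits))   # second set now serves main

main_bits, shadow_bits = [True, True], [False, False]
# Failed speculation leaves both sets unchanged (cache sets stay in the
# first state).
assert commit_speculation(main_bits, shadow_bits, False) == (main_bits, shadow_bits)
# Successful speculation swaps the roles (second state).
assert commit_speculation(main_bits, shadow_bits, True) == ([False, False], [True, True])
```

Because only one-bit flags change, the speculative results become the main thread's cache contents without copying any cached data between the sets.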
As shown in FIG. 22, if the speculative thread's speculation fails, the processor does not change the valid bits of the first and second pluralities of blocks, which retain the same validity values as before the determination at block 2207 of whether the speculation succeeded. Therefore, the state of the cache sets does not change from the first state to the second state. At block 2208, the method 2200 continues with each first valid bit being changed by the processor (alone or in conjunction with the cache controller) from indicating valid to indicating invalid when the speculative thread's speculation is successful, such that the first plurality of blocks become accessible for the speculative thread and blocked for the main thread. Furthermore, at block 2210, the method 2200 continues with each second valid bit being changed by the processor (alone or in conjunction with the cache controller) from indicating invalid to indicating valid when the speculative thread's speculation is successful, such that the second plurality of blocks become accessible for the main thread and blocked for the speculative thread. Thus, the state of the cache sets changes from the first state to the second state. In some embodiments, during speculative execution of the first instruction by the speculative thread, the effects of the speculative execution are stored within the second cache set. In such an embodiment, during speculative execution of the first instruction, the processor may assert a signal indicative of speculative execution that is configured to block changes to the first cache set. Additionally, the processor may block the second cache set from updating the memory when the signal is asserted by the processor.
This occurs when the cache sets are in the first state. Furthermore, in such an embodiment, in response to determining that the main thread is to be used for execution of the first instruction, the second cache set (rather than the first cache set) is used with the first instruction. The first cache set is used with the first instruction in response to determining that execution of the first instruction will not be performed by the main thread. This occurs when the cache sets are in the second state. In some embodiments, during speculative execution of the first instruction, the processor accesses the memory via the second cache set. Also, during speculative execution of one or more instructions, access to the contents of the second cache set is limited to speculative execution of the first instruction by the processor. In such an embodiment, the processor is prohibited from altering the first cache set during speculative execution of the first instruction. In some embodiments, the contents of the first cache set may be accessed via a cache coherence protocol. In FIG. 23, method 2300 includes the operations at blocks 2202, 2204, 2206, 2207, 2208, and 2210 of method 2200. The method 2300 includes, at block 2302, receiving, by a first physical-to-logical set mapping (PLSM) register (eg, the PLSM register 2108a shown in FIG. 21), a first valid bit of a block of the first plurality of blocks. The first valid bit may indicate the validity of the cached data of the block of the first plurality of blocks. Furthermore, method 2300 includes, at block 2304, receiving, by a second PLSM register (eg, PLSM register 2108b), a second valid bit of a block of the second plurality of blocks. The second valid bit may indicate the validity of the cached data of the block of the second plurality of blocks. At block 2306, method 2300 includes determining, by a first logic unit (eg, logic unit 2104a depicted in FIG.
21) for the first cache set, whether a block in the first plurality of blocks is a hit or a miss. At block 2307, the method 2300 continues by outputting a first hit or miss result according to the determination by the first logic unit. Further, at block 2308, method 2300 includes determining, by a second logic unit (eg, logic unit 2104b) for the second cache set, whether a block in the second plurality of blocks is a hit or a miss. At block 2309, the method 2300 continues by outputting a second hit or miss result according to the determination by the second logic unit. At block 2310, the method 2300 continues by outputting, by a first multiplexer (eg, multiplexer 2110a depicted in FIG. 21) and according to the first valid bit received by the first PLSM register, the first hit or miss result or the second hit or miss result to the processor. In some embodiments, the first multiplexer outputs the first hit or miss result when the first valid bit received by the first PLSM register indicates valid, and outputs the second hit or miss result when the first valid bit received by the first PLSM register indicates invalid. And, at block 2312, the second hit or miss result or the first hit or miss result is output to the processor by a second multiplexer (eg, multiplexer 2110b) according to the second valid bit received by the second PLSM register. In some embodiments, the second multiplexer outputs the second hit or miss result when the second valid bit received by the second PLSM register indicates valid, and outputs the first hit or miss result when the second valid bit received by the second PLSM register indicates invalid. Some embodiments may include a central processing unit having processing circuitry configured to execute a main thread and a speculative thread.
The central processing unit may also include, or be connected to, a first cache set of a cache system configured to be coupled between the main memory and the processing circuitry, the first cache set having a first plurality of blocks for the main thread. Each block of the first plurality of blocks may include cached data, a first valid bit, and a block address including an index and a tag. The processing circuitry (alone or in conjunction with a cache controller) may be configured to change each first valid bit from indicating valid to indicating invalid when the speculative thread's speculation succeeds, such that the first plurality of blocks become accessible for the speculative thread and blocked for the main thread. The central processing unit may also include, or be connected to, a second cache set of the cache system configured to be coupled between the main memory and the processing circuitry, the second cache set including a second plurality of blocks for the speculative thread. Each block of the second plurality of blocks may contain cached data, a second valid bit, and a block address with an index and a tag. The processing circuitry (alone or in conjunction with the cache controller) may be configured to change each second valid bit from indicating invalid to indicating valid when the speculative thread's speculation is successful, such that the second plurality of blocks become accessible for the main thread and blocked for the speculative thread. And, blocks in the first plurality of blocks correspond to corresponding blocks in the second plurality of blocks by having the same block addresses as the corresponding blocks in the second plurality of blocks. The techniques disclosed herein are applicable at least to computer systems in which the processor is separate from the memory and the processor communicates with the memory and storage via a communication bus and/or a computer network.
Additionally, the techniques disclosed herein may be applied to computer systems in which processing capabilities are integrated within a memory/storage device. For example, processing circuitry, including the execution units and/or registers of a typical processor, may be implemented within an integrated circuit and/or an integrated circuit package of a memory medium for processing within a memory device. Thus, the processors discussed above and illustrated in the figures (eg, see processors 201, 401, 601, and 1001) are not necessarily central processing units in a von Neumann architecture. The processor may be a unit integrated within the memory to overcome the von Neumann bottleneck, in which computing performance is limited by the throughput constraints that result from data moving between a central processing unit and memory configured separately according to the von Neumann architecture. The description and drawings of the present disclosure are illustrative and should not be construed in a limiting sense. Numerous specific details are described to provide a thorough understanding. However, in some instances, well-known or conventional details have not been described in order to avoid obscuring the description. References in this disclosure to one embodiment or an embodiment are not necessarily to the same embodiment, and such references mean at least one embodiment. In the foregoing specification, the present disclosure has been described with reference to specific exemplary embodiments thereof. It will be apparent that various modifications may be made therein without departing from the broader spirit and scope as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The application discloses link width scaling across multiple retimer devices. Differing widths of retimers are developed using differing numbers of individual retimer elements combined together. To maintain synchronous operation, various signals are provided between the individual retimer elements to allow synchronization of the various operations. A first signal is a wired-OR signal that is used for event and operation synchronization. A second set of signals forms a serial bus used to transfer proper state information and operation correction data from a master retimer element to slave retimer elements. The combination of the wired-OR signal and the serial bus allows the various state machines and operations inside each retimer element to be synchronized, so that the entire width of the link is properly synchronized.
1. A processing device comprising:
a processing element having a given width and comprising a plurality of state machines, the plurality of state machines controlling operation of the processing element;
a calibration input/output for connection to a wired-OR calibration line; and
a serial bus endpoint for connection to a serial bus, the serial bus including a serial bus clock and a serial data line for providing status data,
wherein the calibration input/output receives indications from the plurality of state machines and provides indications to the plurality of state machines, and the serial bus endpoint provides indications to the plurality of state machines,
wherein at least some of the plurality of state machines provide an indication of operation of the state machine in a specified state to the calibration input/output, and
wherein at least one of the plurality of state machines transitions based on an indication from the calibration input/output, and at least one of the plurality of state machines transitions based on status data provided on the serial bus.

2. The processing device of claim 1, wherein said processing element is a master element, and wherein said serial bus endpoint provides said serial bus clock and said status data on said serial data line.

3. The processing device of claim 1, wherein the processing element is a slave element, and wherein the serial bus endpoint receives the serial bus clock and the status data on the serial data line.

4. The processing device of claim 1, wherein said processing device is a link retimer.

5. The processing device of claim 4, wherein said serial data line further provides condition data, and wherein the processing element uses the condition data to perform an operation.

6. The processing device of claim 1, wherein the serial data line further provides operational correction data, and wherein the processing element uses the operational correction data to correct an operation.

7. The processing device of claim 1, wherein said processing element comprises a clock driver that provides a clock signal, and wherein said calibration input/output synchronizes said clock signal provided by said clock driver.

8. A processing device comprising:
a first processing element having a first given width and comprising a plurality of first state machines, the plurality of first state machines controlling operation of the first processing element, the first processing element comprising:
a first calibration input/output for connection to a wired-OR calibration line; and
a first serial bus endpoint for connection to a serial bus, the serial bus including a serial bus clock and a serial data line for providing status data,
wherein the first calibration input/output receives indications from the plurality of first state machines and provides indications to the plurality of first state machines, and the first serial bus endpoint provides indications to the plurality of first state machines,
wherein at least one of the plurality of first state machines provides an indication of operation of the first state machine in a specified state to the first calibration input/output, and
wherein at least one of the plurality of first state machines transitions based on an indication from the first calibration input/output, and at least one of the plurality of first state machines transitions based on status data provided on the serial bus;
a second processing element having a second given width and comprising a plurality of second state machines, the plurality of second state machines controlling operation of the second processing element, the second processing element comprising:
a second calibration input/output for connection to a wired-OR calibration line; and
a second serial bus endpoint for connection to a serial bus, the serial bus including a serial bus clock and a serial data line for providing status data,
wherein the second calibration input/output receives indications from the plurality of second state machines and provides indications to the plurality of second state machines, and the second serial bus endpoint provides indications to the plurality of second state machines,
wherein at least one of the plurality of second state machines provides an indication of operation of the second state machine in a specified state to the second calibration input/output, and
wherein at least one of the plurality of second state machines transitions based on an indication from the second calibration input/output, and at least one of the plurality of second state machines transitions based on status data provided on the serial bus;
a wired-OR calibration line connected to the first calibration input/output and the second calibration input/output; and
a serial bus including a serial bus clock and a serial data line, the serial bus being coupled to the first serial bus endpoint and the second serial bus endpoint,
wherein the plurality of first state machines and the plurality of second state machines are instances of the same state machines in the respective first processing element and second processing element,
wherein the at least one of the plurality of first state machines and the at least one of the plurality of second state machines are instances of the same state machine of the respective first processing element and second processing element, and
wherein the at least one of the plurality of first state machines and the at least one of the plurality of second state machines transition together.

9. The processing device of claim 8, wherein said first processing element is a master element, wherein the first serial bus endpoint provides the serial bus clock and the status data on the serial data line, wherein the second processing element is a slave element, and wherein the second serial bus endpoint receives the serial bus clock and the status data on the serial data line.

10. The processing device of claim 8, wherein said first processing element and said second processing element are link retimers.

11. The processing device of claim 10, wherein said serial data line further provides condition data, and wherein one of the first processing element and the second processing element uses the condition data to perform an operation.

12. The processing device of claim 8, wherein said serial data line further provides operational correction data, and wherein one of said first processing element and said second processing element uses said operational correction data to correct an operation.

13. The processing device of claim 8, wherein said first processing element comprises a first clock driver that provides a clock signal, wherein the first calibration input/output synchronizes the clock signal provided by the first clock driver, wherein the second processing element comprises a second clock driver that provides a clock signal, and wherein the second calibration input/output synchronizes the clock signal provided by the second clock driver.

14. A method of operating a processing device, the processing device comprising a processing element having a given width and comprising a plurality of state machines, the plurality of state machines controlling operation of the processing element, the processing element comprising a calibration input/output for connection to a wired-OR calibration line and a serial bus endpoint for connection to a serial bus, the serial bus including a serial bus clock and a serial data line for providing status data, the method comprising:
the calibration input/output receiving indications from the plurality of state machines and providing indications to the plurality of state machines;
the serial bus endpoint providing indications to the plurality of state machines;
at least one of the plurality of state machines providing an indication of operation of the state machine in a specified state to the calibration input/output;
at least one of the plurality of state machines transitioning based on an indication from the calibration input/output; and
at least one of the plurality of state machines transitioning based on status data provided on the serial bus.

15. The method of claim 14, wherein said processing element is a master element, and wherein said serial bus endpoint provides said serial bus clock and said status data on said serial data line.

16. The method of claim 14, wherein said processing element is a slave element, and wherein said serial bus endpoint receives said serial bus clock and said status data on said serial data line.

17. The method of claim 14, wherein said processing device is a link retimer.

18. The method of claim 17, wherein said serial data line further provides condition data, and wherein the processing element uses the condition data to perform an operation.

19. The method of claim 14, wherein the serial data line further provides operational correction data, and wherein the processing element uses the operational correction data to correct an operation.

20. The method of claim 14, wherein said processing element comprises a clock driver that provides a clock signal, and wherein said calibration input/output synchronizes said clock signal provided by said clock driver.
Link width scaling across multiple retimer devices

Technical field

The field relates to high speed communication devices.

Background

Today, Peripheral Component Interconnect Express (PCIe) links are used to interconnect many different devices and computer systems. One feature of PCIe links is that they can have different widths, such as one channel, two channels, four channels, eight channels, or 16 channels. By using additional channels, the total data throughput of the communication increases proportionally. One disadvantage of PCIe links is that they have a relatively limited length because of their high speed and the characteristics of the materials over which they are transmitted. To achieve longer distances, retimers have been developed that allow the signals on the various channels of a PCIe link to be retimed or resynchronized and then re-driven. This effectively increases the allowable length of a PCIe link.

Because PCIe links can have a variety of widths, a dedicated device of the matching width is typically used. For example, a 16-channel PCIe link uses a 16-channel retimer device, while a 4-channel PCIe link uses a 4-channel retimer device. This requires a different component for each PCIe link width, which leads to inventory issues and the like. Moreover, for very wide retimer devices, such as 16 channels, the heat generated by the retimer device becomes very large, requiring heat sinks and other dissipation measures. In addition, the large space required by wider retimer devices is problematic in space constrained applications.
In addition, because so many high speed signals must be routed to and from the retimer device, such as 64 signals for a 16-channel retimer (differential transmit and receive signals for each of the 16 channels), the printed circuit board becomes more complicated.

Summary

The different widths of retimers are developed using different numbers of individual retimer elements combined together. To maintain synchronous operation, various signals are provided between the individual retimer elements to allow synchronization of the various operations in the retimer elements. The first signal is a wired-OR signal that is used for synchronization of events and operations. The second set of signals forms a serial bus for transferring appropriate status information and operational correction data from the master retimer element to the slave retimer elements. The combination of the wired-OR signal and the serial bus allows the various state machines and operations within each retimer element to be synchronized, so that the entire width of the link is properly synchronized, as if a single-chip retimer device were being used. By utilizing the retimer elements, fewer components need to be inventoried.
In addition, each retimer element has lower power consumption and is more easily distributed on the board, simplifying heat dissipation and signal routing issues.

DRAWINGS

For a detailed description of various examples, reference will now be made to the drawings, in which:

FIG. 1 is a block diagram of an example use of a retimer in a computer system.
FIG. 2 is a block diagram of a single-chip retimer.
FIG. 3 is a block diagram of an example of using narrower width retimer elements to develop a wider retimer.
FIG. 4 is a block diagram illustrating the development of various width retimers using narrower retimer elements.
FIG. 5 is a more detailed block diagram of the plurality of retimer elements of FIG. 3.
FIG. 6 is a detailed block diagram of communication signals and circuitry between two retimer elements.
FIG. 7 is a timing diagram illustrating the operation of the CAL_IN_OUT communication signal of FIG. 6.
FIG. 8 is a timing diagram of the CAL_IN_OUT communication signal in conjunction with a state machine utilizing the CAL_IN_OUT communication signal of FIG. 6.
FIG. 9 is a timing diagram of the SCLK and SDIO communication signals of FIG. 6.
FIG. 10 is a block diagram of a clock circuit using the CAL_IN_OUT communication signal of FIG. 6.
FIG. 11 is a block diagram of two retimer elements and various state machines used to control the retimer elements.
FIG. 12 is a main state machine used in a retimer element.
FIG. 13 is a de-skew state machine used in a retimer element.
FIG. 14 is a serial bus frame.
FIG. 15 is an RTSM state machine used in a retimer element.

Detailed description

Referring now to FIG. 1, an example use of a retimer is shown. The CPU board 102 is connected to the I/O board 106 through the backplane 104. The CPU 108 on the CPU board 102 uses 16 channels of PCIe as a 16-channel PCIe link 110 connected to the retimer device 112. The retimer device 112 is coupled to the connector 114 on the backplane 104.
The 16 channels of the PCIe link 116 traverse the backplane 104 from the connector 114 of the retimer device 112 to the connector 118 that is connected to the I/O board 106. The 16 channels of PCIe link 116 are connected to retimer 120. Retimer 120 is coupled to I/O CPU 122 via a 16-channel PCIe link 124. Due to the distance between the CPU 108 and the I/O CPU 122 and the need to pass through the two connectors 114 and 118, a PCIe link directly from the CPU 108 to the I/O CPU 122 may not be feasible. By utilizing retimer devices 112 and 120, the distance of each individual PCIe link is reduced, and the retimers address any reflection problems and other signal issues caused by connectors 114 and 118. This allows high speed PCIe links to be used in a wider range of environments where the various components are distributed beyond the distances that PCIe links can typically traverse.

As mentioned above, a single-chip retimer is typically used. Referring now to FIG. 2, a 16-channel wide single-chip retimer 202 is illustrated. The single-chip retimer 202 connects system element 1 204 and system element 2 206. As can be seen in FIG. 2, the traces on the printed circuit boards of system element 1 204 and system element 2 206 must be compressed and narrowed to work with the monolithic retimer 202.

FIG. 3 illustrates the use of narrower retimer elements that can be stacked or linked together to provide a wider PCIe link, with retimer elements 302A-302D each processing four channels between system element 1 204 and system element 2 206. Retimer elements 302A-302D are interconnected and operate as described below. Due to the use of smaller retimer elements, wiring is simplified and heat dissipation is also spread out and simplified.

FIG. 4 illustrates the use of different numbers of retimer elements to form PCIe links of different widths. A single retimer element 302A forms a four-channel PCIe link between system element 1 204 and system element 2 206.
Two retimer elements 302A, 302B are used to form an eight-channel PCIe link, and four retimer elements 302A-302D are used together to form a 16-channel link. Therefore, PCIe links of different widths can be developed using only a single component type, thereby reducing inventory issues.

Referring now to FIG. 5, more details of retimer elements 302A-302D are provided. Since PCIe links are bidirectional, each retimer element 302A-302D needs to include provisions for transmitting and receiving signals in each direction. Using retimer element 302A as an example, retimer element 302A includes four inputs for the A-side receivers and four outputs for the A-side transmitters. In addition, retimer element 302A includes four inputs for the B-side receivers and four outputs for the B-side transmitters. A clock signal is provided to each of the retimer elements 302A-302D to provide a base clock signal for the retimer elements 302A-302D. Details regarding the internal clocks of the retimer elements 302A-302D are provided below.

Each retimer element 302A-302D includes a main pin. If the main pin is tied high, the particular retimer element operates as a master, and if the main pin is tied to ground, the particular retimer element operates as a slave. In FIG. 5, the retimer element 302A is connected as a master element, and the retimer elements 302B-302D are connected as slave elements. Since each retimer element 302A-302D includes various state machines and operations that must be synchronized to operate as a proper 16-channel PCIe link, various signals are coupled between the four retimer elements 302A-302D. The first signal, CAL_IN_OUT, the calibration input and output signal, is used to indicate when each particular retimer element 302A-302D has completed a particular operation and to synchronize the beginning of the next state or operation.
The CAL_IN_OUT pins are connected in a wired-OR manner so that each retimer element 302A-302D can indicate its timing information. Each retimer element 302A-302D includes an SCLK or serial bus clock pin and an SDIO or serial data input and output pin. The SCLK and SDIO pins of each of the retimer elements 302A-302D are connected together, with the SDIO pins connected in a wired-OR manner so that the various retimer elements 302A-302D can indicate conditions and the like. This forms the serial bus 504 used by the master device to provide more detailed state information to the slave retimer elements to maintain synchronous operation of the devices, and allows the retimer elements to indicate error conditions and the like as described below. The CAL_IN_OUT wired-OR calibration line, the SCLK serial bus clock, and the SDIO serial data line together form an inter-chip communication (ICC) link 502.

FIG. 6 illustrates the CAL_IN_OUT signal and the SCLK and SDIO signals and associated circuitry in more detail. For the sake of simplicity, only two retimer elements, 302A and 302B, are shown. Each retimer element 302A, 302B includes a driver 602A, 602B for driving the CAL_IN_OUT wired-OR calibration line, essentially producing a CAL_OUT signal. Drivers 602A, 602B receive their inputs from various command generators 604A, 604B, such as state machines, some of which are described in more detail below. When the retimer element 302B is operating in the slave mode, the command generator 604B is disabled. To provide the desired timing information, status indicators 606A, 606B are provided as an input to OR gates 608A, 608B. The status indicators indicate the operational status of the various state machines in the retimer elements 302A, 302B. The second inputs of OR gates 608A, 608B are provided by command generators 604A, 604B, representing the outputs of the various state machines in retimer elements 302A, 302B.
The outputs of the OR gates 608A, 608B control the output enables of the drivers 602A, 602B.

Referring to retimer element 302B, because command generator 604B is disabled, a command value of zero is provided to the input of driver 602B. In this manner, when status indicator 606B indicates that a particular operation has not been completed, a high signal is provided to OR gate 608B, which in turn enables driver 602B to drive a zero value onto the CAL_IN_OUT line to indicate that the retimer element has not completed the operation. When the operation is complete, status indicator 606B goes low, causing driver 602B to be disabled so that the CAL_IN_OUT line is no longer driven by driver 602B; if no other retimer element is driving the CAL_IN_OUT line, the line then goes high. The operation of driver 602A is similar, except that command generator 604A is active, so driver 602A is controlled by signals from both command generator 604A and status indicator 606A.

Receivers 610A, 610B are connected to the CAL_IN_OUT line to receive the CAL_IN signal. The outputs of the receivers 610A, 610B are provided to various interpretation circuits 612A, 612B, representing inputs to the various state machines described below, and the like.

FIG. 7 illustrates the operation of the CAL_IN_OUT line. In the example timing diagram of FIG. 7, four retimer elements are connected together. Initially, each of the retimer elements is not ready, so each enables the output enable of its driver 602 and drives a zero onto the CAL_IN_OUT line. This is indicated by the rectangle in the CAL_IN_OUT line shown at the bottom of FIG. 7, indicating that all chips or retimer elements are driving zero. This continues until time T1. At time T1, the second and fourth retimer elements complete their operation in the desired state and are now ready.
The second and fourth retimer elements then provide a low signal to the enable inputs of their drivers 602, so that those drivers stop driving the CAL_IN_OUT line. This is indicated by the rectangle shown in the CAL_IN_OUT line, which is now driven to zero only by the first and third retimer elements. At time T2, the first retimer element is ready, causing the first retimer element to set its output enable to zero and stop driving the CAL_IN_OUT line. This leaves only the third retimer element driving the CAL_IN_OUT line low. Finally, at time T3, the third retimer element completes its operation and disables the output enable of its driver 602. This causes the CAL_IN_OUT line to go high, since it is a wired-OR line and no retimer element is driving it. This rising edge is used to indicate that all retimer elements are ready, and each retimer element then proceeds to the next step in the operation based on the rising edge.

An alternate view is shown in FIG. 8. As mentioned above, in part the CAL_IN_OUT line is used to synchronize state machines. In FIG. 8, four retimer elements are shown in step A 802 of a given state machine. During step A 802, each of the state machines indicates to the retimer element that the output driver 602 is enabled to drive the CAL_IN_OUT line to a low state, indicating that the state operation is not complete. This causes the state machines to remain looping in the wait state 804. Over time, each of the state machines in the various retimer elements completes and stops driving the CAL_IN_OUT line, indicating completion of the state operation. In the illustration, the third state machine is the last to complete step A 802 and stop driving the CAL_IN_OUT line. This causes the rising edge of the CAL_IN_OUT line shown in FIGs. 7 and 8, indicating the transition of the state machines from wait state 804 to step B state 806.
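In software terms, the wired-OR behavior of the CAL_IN_OUT line can be sketched as the following minimal simulation, which reproduces the FIG. 7 sequence (the class and method names are illustrative, not part of the design):

```python
class WiredOrLine:
    """Model of the wired-OR CAL_IN_OUT line: the line reads high (1)
    only when no retimer element is actively driving it low (0)."""
    def __init__(self, num_elements):
        self.driving_low = [False] * num_elements

    def drive(self, element, busy):
        # An element drives the line low while its current step is busy
        # and releases the line (stops driving) when the step completes.
        self.driving_low[element] = busy

    def level(self):
        return 0 if any(self.driving_low) else 1

# Reproduce the FIG. 7 sequence: all four elements busy, then elements
# 2 and 4 finish at time T1, element 1 at T2, and element 3 at T3.
line = WiredOrLine(4)
for e in range(4):
    line.drive(e, busy=True)
assert line.level() == 0          # all chips driving zero
for e in (1, 3):                  # second and fourth elements finish (T1)
    line.drive(e, busy=False)
assert line.level() == 0          # first and third still drive low
line.drive(0, busy=False)         # first element finishes (T2)
assert line.level() == 0          # only the third still drives low
line.drive(2, busy=False)         # third element finishes (T3)
assert line.level() == 1          # rising edge: all elements ready
```

The rising edge corresponds to the last element releasing the line, which is exactly the synchronization event the state machines wait for.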
As shown, all of the state machines in each retimer element then advance simultaneously to step B state 806.

Returning now to FIG. 6, the serial bus interface is also shown in more detail. Retimer elements 302A, 302B include SCLK drivers 640A, 640B and SDIO drivers 642A, 642B. Each retimer element 302A, 302B also includes SCLK receivers 644A, 644B and SDIO receivers 646A, 646B. The SCLK drivers 640A, 640B, SDIO drivers 642A, 642B, SCLK receivers 644A, 644B, and SDIO receivers 646A, 646B form serial bus endpoints 647A, 647B. The retimer element 302A, as the master device, includes a main logic block 648 that always drives the SCLK signal through the SCLK driver 640A and drives the SDIO signal through the SDIO driver 642A during a write operation. Retimer element 302A includes an input block 650 that receives the SCLK signal from SCLK receiver 644A and uses that clock and the SDIO signal received from SDIO receiver 646A to collect data. In this manner, retimer element 302A synchronizes its own operations based on the input received on the SDIO signal.

The retimer element 302B, as a slave device, includes a slave logic block 652 that drives the output enable of the SDIO driver 642B during a read or condition indication operation. The retimer element 302B includes an input and output block 654 that uses the SCLK signal to clock the value on the SDIO line to obtain an input value on a write operation, and that drives the SDIO value to SDIO driver 642B on a read or condition indication operation. The timing of the SDIO read or condition indication is based on the SCLK signal.

FIG. 9 illustrates the operation of serial bus 504. The master device (such as retimer element 302A) drives the SCLK signal as illustrated to provide a conventional clock pattern. In the illustrated frame, the SDIO signal is driven by the master device, as is the rule for condition vectors or commands, such as status indications for various state machines.
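The wired-OR resolution of a single SDIO bit time can be sketched as follows (a simplified model; the function name and arguments are illustrative, and the bus electrical details are abstracted away):

```python
def sdio_bit(master_bit, slave_pulls_low):
    """Resolve one bit time on the wired-OR SDIO line: the line reads
    low if the master drives a zero or if any slave pulls the bit low."""
    return 0 if master_bit == 0 or any(slave_pulls_low) else 1

# Conditional status bit: the master releases the bit (logical one);
# any retimer element with an active internal condition pulls it low.
assert sdio_bit(1, [False, False, False]) == 1  # no condition reported
assert sdio_bit(1, [False, True, False]) == 0   # one element reports a condition
assert sdio_bit(0, [False, False, False]) == 0  # master drives a command zero
```

This is what allows a slave to signal an internal condition during its bit time without a separate read operation, as described below.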
Main logic block 648 provides a particular bit of serial information based on the rising edge of the SCLK signal. The retimer elements acting as slave devices read the value on the SDIO line on the next rising edge of the SCLK signal. This continues in the desired manner until the necessary number of bits have been sent. If a read operation is desired, the data value is driven onto the SDIO line by the slave retimer element after the number of command bits indicating the read operation and address and after a turnaround time. If an internal condition of a retimer element needs to be indicated, as described below, the retimer element drives the SDIO line low during the appropriate bit time. The wired-OR characteristic of the SDIO line allows a retimer element to indicate internal conditions as needed without the need for a read operation.

The actual values provided in the condition vector or command are a design choice for a particular application. In some cases, the condition vector may be relatively short if only a limited number of states are needed to operate a particular device of interest, while in other cases, longer values may be required and provided. In some other cases, some bits are used for an encoded condition vector, while others are used as conditional status bit positions, allowing any retimer element to drive the SDIO line low during the appropriate bit time to indicate the presence of the specified condition, with the bit remaining high indicating that the condition has ended or has been met.

An example of using the CAL_IN_OUT signal is the synchronization of the clocks used in the various devices. In FIG. 10, the clock driver logic for two different retimer elements 302A, 302B is illustrated. Each retimer element 302A, 302B includes a phase locked loop (PLL) 1002A, 1002B that is based on an input CLK signal and is divided as needed to produce a desired internal clock signal.
The PLL output is provided as a clock to frequency dividers 1004A, 1004B, which provide the internal clock signals. The frequency dividers 1004A, 1004B have enable inputs provided by flip-flops 1006A, 1006B. The flip-flops 1006A, 1006B are cleared at reset so that the dividers are not enabled and no internal clock is provided. The D inputs of flip-flops 1006A, 1006B are tied high, and the clock inputs are provided by the CAL_IN_OUT signal. In this way, when the CAL_IN_OUT signal has a rising edge, the enables of the two frequency dividers 1004A, 1004B are simultaneously set to one. This allows the clocks from the dividers 1004A, 1004B in the retimer elements 302A, 302B to be synchronized. However, this is not sufficient because each retimer element 302A, 302B will vary slightly based on the temperature of the integrated circuit, process parameters, and the like. Thus, the outputs of frequency dividers 1004A, 1004B are provided as reference clocks to the inputs of delay locked loops (DLLs) 1008A, 1008B. Delay locked loops 1008A, 1008B are used to provide phase correction to compensate for the various delays in the particular clock tree of each element. Each delay locked loop 1008A, 1008B receives a feedback position clock signal from a desired clock position to be synchronized and is phase locked to the reference clock. Based on the feedback position clock signal, the delay locked loops 1008A, 1008B adjust the phase of the clock signal such that the feedback position clock is synchronized with respect to the reference clock. The outputs of DLLs 1008A, 1008B are provided through conventional clock delay trees 1010A, 1010B to provide the desired clocks, which will be correct when received at the feedback locations. The clock signal at the feedback location will be in phase with the clock signals from dividers 1004A, 1004B.

Referring now to FIG. 11, a simplified block diagram of two exemplary retimer elements 302A, 302B is provided.
Illustrated are the various blocks of the retimer elements associated with particular state machines that are used to control the operation of portions of those blocks. Each retimer element 302A, 302B includes the various necessary PHYs, such as A-side receiver PHYs 1102A, 1102B, A-side transmitter PHYs 1104A, 1104B, B-side receiver PHYs 1106A, 1106B, and B-side transmitter PHYs 1108A, 1108B. The retimer elements 302A, 302B include parts per million (PPM) compensation blocks 1110A, 1110B, 1112A, 1112B on the A-side and B-side receivers. Transmit FIFOs 1114A, 1114B are provided on the A-side transmitters, and transmit FIFOs 1116A, 1116B are provided on the B-side transmitters. MAC blocks 1118A, 1118B are provided in each retimer element 302A, 302B to perform the necessary functions of the MAC element. ICC blocks 1119A, 1119B interconnect retimer elements 302A, 302B.

Various state machines interact with these particular blocks. Main state machines 1120A, 1120B provide initial control until a clock is started on the retimer elements 302A, 302B. De-skew state machines 1121A, 1121B are provided on the A-side receivers, and de-skew state machines 1122A, 1122B are provided on the B-side receivers. De-skew state machines 1121A, 1121B, 1122A, 1122B are used to ensure that de-skew in a particular direction is synchronized between retimer elements. Stack state machines 1124A, 1124B are used to further ensure that the various retimer elements are synchronized. The PPM compensation logic is used to ensure that the buffers do not underrun or overrun. Retimer state machines (RTSMs) 1126A, 1126B are used to manage channel equalization and control channel shutdown. The various state machines and the PPM logic are described in more detail below. In the descriptions of the state machines, the A, B, C, or D suffix is removed to describe the state machine more generally without it being associated with a particular retimer element.

FIG. 12 illustrates the main state machine 1120.
The initial state 1202 is a power-on reset state. Once the power-on reset is completed, the next state is the NVM (non-volatile memory) initialization state 1204. In state 1204, the retimer element drives the CAL_IN_OUT signal low. When the NVM initialization is complete, the primary state machine 1120 proceeds to state 1206. In state 1206, the retimer element stops driving the CAL_IN_OUT line, and the main state machine 1120A waits for the CAL_IN_OUT signal to be non-zero, indicating that all other retimer elements have completed non-volatile memory initialization. When the CAL_IN_OUT line goes high, it indicates that all retimer elements have completed NVM initialization, and the next state is EEPROM load state 1208. In state 1208, the retimer element drives the CAL_IN_OUT signal low. When the EEPROM load is complete, operation proceeds to state 1210 where the retimer component stops driving the CAL_IN_OUT signal and waits for the CAL_IN_OUT line to be high, indicating that all other retimer components have also completed the EEPROM load. When all retimer elements have completed EEPROM loading, the phase locked loop (PLL) is enabled in state 1212 and the retimer element drives the CAL_IN_OUT signal to zero to indicate the operation being performed. When the PLL is locked, control proceeds to state 1214 where the retimer element stops driving the CAL_IN_OUT line and waits until the CAL_IN_OUT signal equals one, indicating that all retimer elements have completed booting the PLL. When all of the retimer elements have completed PLL startup, operation proceeds to state 1216 where the internal clock is initiated in the retimer element. This is the completion of the main state machine 1120 for the purposes of this specification. In practice, the main state machine 1120 performs other items related to restarting the retimer element, but those operations are omitted for simplicity.FIG. 
13 illustrates the operation of the stack state machine 1124 in combination with the de-skew state machine 1121 and the de-skew state machine 1122. Operation begins in state 1302, where the stack state machine 1124 is waiting to receive a high speed signal. Once the high speed signal is received, operation proceeds based on which side, the A side or the B side, is initialized first, as determined in state 1304. In one case, operation proceeds to the A-side clock and data recovery (CDR) lock state 1306. In the A-side CDR lock state 1306, the retimer element waits for the A-side receive PLL of each of the PCIe channels to synchronize and lock. While the channel PLLs are not locked, the retimer element drives the CAL_IN_OUT signal low. Operation proceeds to state 1308 when all retimer elements have locked all PLLs on their channels. Entering state 1308 triggers the start of the de-skew state machine 1121. The de-skew state machine 1121 begins at state 1310, where alignment of the physical coding sublayer (PCS) begins. The retimer element drives the CAL_IN_OUT signal low to indicate PCS misalignment, as do all other retimer elements. When the PCS is aligned, the retimer element stops driving the CAL_IN_OUT signal low, and when all of the retimer elements are aligned, operation proceeds to state 1312. After state 1312, operation proceeds to state 1314, where de-skew begins. In state 1314, the retimer element drives the CAL_IN_OUT signal low. Operation proceeds to state 1316, where the de-skew state machine 1121 waits until the retimer element has de-skewed and has stopped driving the CAL_IN_OUT signal low, and all remaining retimer elements have similarly stopped driving the CAL_IN_OUT signal, thus indicating that the retimer elements have all been de-skewed. Operation proceeds to state 1318 to indicate that the de-skew is complete. Operation then proceeds to state 1320, where the transmit buffer is loaded.
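The main, stack, and de-skew state machines above all reuse one synchronization pattern on the open-drain CAL_IN_OUT line: each retimer element drives the line low while its current step is busy, releases it when done, and advances only when the line reads high (meaning every element has released it). A minimal Python sketch of that handshake follows; the step names are illustrative shorthand, not terms from the specification:

```python
def cal_in_out(drivers):
    """Sampled value of the open-drain line: 1 only when no element drives it low."""
    return 0 if any(drivers) else 1

def barrier_reached(busy_flags):
    """Each element drives CAL_IN_OUT low while its step (NVM init, EEPROM load,
    PLL start, de-skew, ...) is busy; all elements advance only on a high line."""
    return cal_in_out([1 if busy else 0 for busy in busy_flags]) == 1

# Illustrative step sequence modeled on the main state machine description.
MAIN_SM_STEPS = ["NVM_INIT", "EEPROM_LOAD", "PLL_START", "CLOCK_START"]

def advance(step, busy_flags):
    """Move to the next step only when every retimer element reports done."""
    if barrier_reached(busy_flags) and step in MAIN_SM_STEPS[:-1]:
        return MAIN_SM_STEPS[MAIN_SM_STEPS.index(step) + 1]
    return step
```

Because the line is wired this way, a single still-busy element is enough to hold every element in the current step, which is exactly the lock-step behavior the description relies on.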
When sufficient data is in the transmit buffer, operation proceeds to state 1322 for transmission. In this state, the master issues a rising edge on the CAL_IN_OUT line to start the clock used to read the TX buffers on all chips. The start of the clock is described above. Entering state 1322 provides an indication to state 1308 of the stack state machine 1124 that transmission has begun. If the B side has completed the de-skew operation and the A side is now transmitting, then operation proceeds from state 1308 to state 1310, where the stack is indicated as complete. If the B side has not started the de-skew operation at state 1308, then operation proceeds to state 1336 to wait for the B-side CDR lock. When all B sides are CDR locked, as indicated by the CAL_IN_OUT signal being high, operation proceeds to state 1338 to wait for transmission by the B side. Entering state 1338 triggers the de-skew state machine 1122, which operates similarly to the de-skew state machine 1121. Finally, in state 1352, when transmission begins on the B-side transmitter, state 1338 is provided with an indication that the B side is transmitting. If the A side has completed and the B side is now transmitting, operation proceeds to state 1310. If the A side has not begun to de-skew, then operation returns to state 1306.

FIG. 14 illustrates a frame 1402 on the serial bus 504. The beginning of the frame is indicated by the SDIO signal going low while the SCLK signal is low. Two preamble bits start the actual frame 1402. Next, a first instance of two PPM (parts per million) compensation data bits is provided. PPM compensation must be performed to prevent buffer underflow or overflow. The PPM compensation data bits indicate which direction, A or B, requires compensation or correction, and whether a SKP symbol should be inserted into or removed from the SKP ordered set.
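The PPM compensation decision just described can be sketched as a threshold check on transmit-FIFO occupancy. The thresholds, function name, and return values here are illustrative assumptions, not values from any specification:

```python
def ppm_action(fifo_fill, low=4, high=12):
    """Choose the SKP adjustment to signal in the PPM compensation bits.
    A draining FIFO asks for a SKP symbol to be inserted in the next SKP
    ordered set; a filling FIFO asks for one to be removed.
    (Thresholds are illustrative assumptions.)"""
    if fifo_fill <= low:
        return "insert_skp"
    if fifo_fill >= high:
        return "remove_skp"
    return "no_change"
```

Broadcasting this decision over the serial bus, rather than deciding per channel, is what lets every channel lengthen or shorten the same SKP ordered set together.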
The next five bits are the first part of the RTSM condition vector, followed by a second instance of the two PPM compensation data bits. Next are the remaining six bits of the RTSM condition vector. The frame data ends with a third group of two PPM compensation data bits. The condition vector has two parts: in one part, the bits indicate state machine encodings driven by the main retimer element, and in the second part, each condition bit represents an internal condition of the retimer elements. Since the SDIO line is a wired-AND connection, any retimer element with the indicated condition can drive the associated condition vector bit low. The retimer elements monitor these bits to determine whether a state machine transition should be performed. This operation is described below. The end of the frame is signaled by the SDIO signal rising while the SCLK signal is low. The illustrated frame 1402 is followed by another frame, as the SCLK and SDIO signals run continuously during operation.

The PPM compensation information is sent over the serial bus 504 because each channel must lengthen or shorten the SKP ordered set together. Using the serial bus 504 allows this to occur efficiently. When the next SKP ordered set is received at a given channel, the actual SKP ordered set addition or reduction is performed. The PCIe specification requires periodic transmission of SKP ordered sets to allow this PPM compensation by downstream components. By transmitting the PPM compensation bits three times per frame, based on the speed of the SCLK signal and the conventional SKP ordered set issue rate, the PPM compensation value is present each time a SKP ordered set is received.

FIG. 15 illustrates an RTSM or retimer state machine 1126. Once the de-skew is completed in both the A and B directions, the RTSM 1126 begins at state 1504, with each retimer element forwarding data in both directions.
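The frame layout described above (two preamble bits, then PPM compensation bits interleaved with the two condition-vector parts) can be modeled as a fixed bit layout. This is a sketch; the field names are assumptions, and only the widths come from the description:

```python
# (field name, width in bits), in transmission order per the description above.
FRAME_LAYOUT = [("preamble", 2), ("ppm1", 2), ("cv_hi", 5),
                ("ppm2", 2), ("cv_lo", 6), ("ppm3", 2)]

def pack_frame(fields):
    """Serialize field values into a list of bits, MSB first per field."""
    bits = []
    for name, width in FRAME_LAYOUT:
        value = fields[name]
        bits += [(value >> (width - 1 - i)) & 1 for i in range(width)]
    return bits

def unpack_frame(bits):
    """Recover the field values from the serialized bit list."""
    fields, pos = {}, 0
    for name, width in FRAME_LAYOUT:
        value = 0
        for b in bits[pos:pos + width]:
            value = (value << 1) | b
        fields[name] = value
        pos += width
    return fields
```

Note the 11 condition-vector bits (5 + 6) and the three separate PPM groups, matching the three-per-frame repetition discussed above.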
The CAL_IN_OUT signal is not used here; state machine synchronization depends on the serial bus 504. Each retimer element actively checks the ordered sets being received, with each retimer element updating or driving a corresponding bit in the condition vector, as described above. When all channels in one of the retimer elements receive EC=2, which is a field in the training sequence ordered set, the retimer element stops driving the SDIO line for the corresponding bit in the condition vector. Because of the open-drain signaling, if the channels in all other retimer elements also see EC=2, then this bit is '1', and the RTSM 1126 of each retimer element transitions to the link EQ state 1502. Otherwise, the bit remains driven to '0'. In other words, all retimer elements contribute to the condition vector bits, such that the RTSMs 1126 of all retimer elements respond to the combined result on the condition vector. The RTSM 1126 remains in the link EQ state 1502 until all channels on each retimer element have completed EQ operations, as indicated by another bit in the condition vector using the open-drain signaling described above. All of the completion conditions are carried by the serial bus frame condition vector bits, such that the RTSMs 1126 of all retimer elements return to the forwarding state 1504 together.

From the forwarding state 1504, it is also possible that one of the channels in any of the retimer elements suddenly loses the high speed signal. This event is often referred to as inferred electrical idle. If such an event occurs on any of the retimer elements, that retimer element signals the event in the condition vector by driving the corresponding bit to '0' instead of leaving it undriven. In other words, if all channels are receiving high speed signals, then no retimer element drives the electrical idle bit to '0', and the bit in the condition vector remains '1'.
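The open-drain condition-vector behavior above amounts to a wired-AND across retimer elements: a bit reads '1' only when every element has released it, so every RTSM transitions on the same frame. A sketch of that logic (the state names are hypothetical shorthand for states 1504 and 1502):

```python
def condition_bit(released):
    """Wired-AND of an open-drain condition-vector bit: reads '1' only when
    every retimer element has stopped driving the bit low."""
    return 1 if all(released) else 0

def rtsm_next(state, ec2_seen_per_element):
    """All RTSMs move from forwarding to link EQ together, and only when
    every element has seen EC=2 on all of its channels."""
    if state == "FORWARDING" and condition_bit(ec2_seen_per_element) == 1:
        return "LINK_EQ"
    return state
```

The electrical-idle bit works in the opposite sense: an element actively drives it to '0' to report a lost signal, so a single element is enough to pull every RTSM into the idle state.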
When the condition vector shows '0' on the corresponding bit, the RTSM 1126 in every retimer element knows that at least one of the channels has lost the high speed signal, causing the RTSMs 1126 to all enter the electrical idle state 1506, where the transmitters for that direction are gracefully shut down.

A less abrupt condition for shutting down the channel is signaled through the disable bit in the TS1 training sequence. When the RTSM 1126 is in state 1504 and any of the channels detects a disable command in TS1, the retimer element indicates the change by driving the corresponding bit in the condition vector to '0'. Because of the open-drain signaling, the corresponding bit in the condition vector is '0' as long as any one of the retimer elements drives it to '0'. All of the RTSMs 1126 see that the condition vector has changed and transition to state 1508, where the transmitters are gracefully turned off. From state 1506 or state 1508, when the high speed signal is detected again, the RTSM 1126 in each retimer element exits the state, which triggers the stack state machine 1124 and the de-skew state machines 1121, 1122. After the stack state machine 1124 and the de-skew state machines 1121, 1122 have completed, the RTSM 1126 eventually returns to the forwarding state 1504.

The next option from the forwarding state 1504 is to proceed to a compliance state 1510, which is triggered similarly to the disabled state 1508, except that the compliance bit is set in TS1. All channels in the retimer elements enter compliance and generate or forward the desired pattern. After completing the compliance operation, the RTSM 1126 proceeds from the compliance state 1510 to the electrical idle state 1506.

As described above, the CAL_IN_OUT or calibration line is used to synchronize operation between the retimer elements such that the retimer elements transition states consistently. The serial bus is used to provide next-state information when the next transition may be to one of several different states.
The serial bus is also used to distribute the data required by all retimer elements for consistent operation. The serial bus is further used to return status information from the retimer elements to all retimer elements, such that even if only one retimer element sees a trigger condition, all of the retimer elements can act together. The use of the calibration line and the serial bus allows a single-chip retimer to be divided into more than one slice or component while maintaining the required synchronization of all of the various state machines and other parameters, allowing the combination of components to perform the role of a single-chip retimer.

It should be understood that only example state machines, states, and data have been discussed above to provide an explanation. Many more state machines, states, and data can be used in an actual retimer design, but the techniques and designs discussed herein can be used to perform the required synchronization and the like.

While it has been shown that each retimer element has the same width, different elements can be of different widths if desired.

It should also be understood that the retimer is only one example of a processing device that uses state machines to control operations and can be divided into more than one processing slice or processing element, with the combination performing the same operation as a wider single device. A related example is the transmitter and receiver PHY and MAC modules used at the ends of a PCIe link and in a PCIe switch. It should also be understood that the operation is not limited to PCIe; it can be applied to other serial link formats, such as InfiniBand and the like. It should be further understood that the operation is not limited to serial devices, but can be applied to many devices that can use multiple widths and must synchronize various items in operation.

The above description is intended to be illustrative, and not restrictive.
For example, the above examples can be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein".
A low cost, low power consumption, scalable architecture is provided to allow a computer system to be managed remotely during all system power states. In the lowest power state, power is applied only to the minimum logic necessary to examine a network packet. Power is applied for a short period of time to an execution subsystem and to one of a plurality of cores selected to handle processing of received service requests. After processing the received service requests, the computer system returns to the lowest power state.
1. An apparatus capable of implementing system power state transitions, comprising:

a plurality of central processing units; and

a wake-up module in an execution subsystem, the execution subsystem coupled to the plurality of central processing units, the wake-up module being powered in all of a plurality of system power states, wherein the wake-up module monitors for service requests, received through a communication network, for services that do not depend on an operating system (OS), and, when a service request is detected while in a first system power state, the execution subsystem transitions to a second system power state by powering on a selected one of the plurality of central processing units, so as to process the service request in the second system power state.

2. The apparatus of claim 1, wherein the transition to the second system power state includes transitioning the execution subsystem from a sleep mode to a resume mode.

3. The apparatus of claim 2, wherein, after processing the service request, the execution subsystem transitions back to the first system power state by switching the execution subsystem from the resume mode to the sleep mode and powering off the selected central processing unit.

4. The apparatus of claim 3, wherein, if the number of transitions between the first system power state and the second system power state is greater than a transition threshold, the execution subsystem delays the transition back to the first system power state.

5. The apparatus of claim 1, wherein the wake-up module includes minimum logic for examining network packets.

6. The apparatus of claim 1, wherein the first system power state is the lowest power consumption system power state.

7. The apparatus of claim 1, wherein a maximum configured percentage of total central processing unit (CPU) time slices is allocated for processing service requests when in the system power state with the highest power consumption.

8. The apparatus of claim 1, wherein the execution subsystem powers up logic and input/output links associated with the selected central processing unit to process the service request.

9. The apparatus of claim 2, wherein the execution subsystem powers down the logic and input/output links associated with the selected central processing unit that were powered on to process the service request.

10. A method capable of implementing system power state transitions, comprising:

powering a wake-up module in an execution subsystem in all of a plurality of system power states;

monitoring, by the wake-up module, for service requests received through a communication network for services that do not depend on an operating system (OS); and

when a service request is detected while in a first system power state, transitioning to a second system power state by powering on a selected one of a plurality of central processing units, so as to process the service request in the second system power state.

11. The method of claim 10, wherein transitioning to the second system power state includes transitioning the execution subsystem from a sleep mode to a resume mode.

12. The method of claim 11, further comprising:

after processing the service request, transitioning the execution subsystem back to the first system power state by switching the execution subsystem from the resume mode to the sleep mode and powering off the selected central processing unit.

13. The method of claim 12, further comprising:

if the number of transitions between the first system power state and the second system power state is greater than a transition threshold, delaying, by the execution subsystem, the transition back to the first system power state.

14. The method of claim 10, wherein the wake-up module includes minimal logic for examining network packets.

15. The method of claim 10, wherein the first system power state is the lowest power consumption system power state.

16. The method of claim 10, wherein a maximum configured percentage of total central processing unit (CPU) time slices is allocated for processing service requests when in the system power state with the highest power consumption.

17. An apparatus capable of implementing system power state transitions, comprising:

means for powering a wake-up module in an execution subsystem in all of a plurality of system power states;

means for monitoring, by the wake-up module, for service requests received through a communication network for services that do not depend on an operating system (OS); and

means for transitioning to a second system power state, when a service request is detected while in a first system power state, by powering on a selected one of a plurality of central processing units, so as to process the service request in the second system power state.

18. The apparatus of claim 17, wherein the transition to the second system power state includes transitioning the execution subsystem from a sleep mode to a resume mode.

19. The apparatus of claim 18, further comprising:

means for transitioning the execution subsystem back to the first system power state, after processing the service request, by switching the execution subsystem from the resume mode to the sleep mode and powering off the selected central processing unit.

20. The apparatus of claim 19, further comprising:

means for delaying, by the execution subsystem, the transition back to the first system power state if the number of transitions between the first system power state and the second system power state is greater than a transition threshold.

21. The apparatus of claim 17, wherein the wake-up module includes minimum logic for examining network packets.

22. The apparatus of claim 17, wherein the first system power state is the lowest power consumption system power state.

23. The apparatus of claim 17, wherein a maximum configured percentage of total central processing unit (CPU) time slices is allocated for processing service requests when in the system power state with the highest power consumption.

24. A system capable of implementing system power state transitions, comprising:

a dynamic random access memory;

a plurality of central processing units; and

a wake-up module in an execution subsystem, the execution subsystem coupled to the plurality of central processing units, the wake-up module being powered in all of a plurality of system power states, wherein the wake-up module monitors for management packets, received through a communication network, for management services that do not depend on an operating system (OS), and, when a management packet is detected while in a first system power state, the execution subsystem transitions to a second system power state by powering on a selected one of the plurality of central processing units, so as to process the service request in the second system power state.
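The wake/sleep cycle of claims 1-4 can be sketched as a small state function: a service request wakes the system from the low-power state, and after servicing, the system returns to low power unless it has been thrashing between states (the transition-threshold delay of claim 4). The state names and threshold value below are assumptions of this sketch:

```python
def next_power_state(state, request_pending, transitions, threshold=8):
    """Sketch of the claimed transitions between a first (low) and second
    (active) system power state. 'transitions' counts recent state changes;
    exceeding 'threshold' delays the return to the low-power state."""
    if state == "S_LOW" and request_pending:
        return "S_ACTIVE"      # power on the selected CPU, resume the subsystem
    if state == "S_ACTIVE" and not request_pending:
        # Claim 4: delay the transition back when thrashing between states.
        return "S_ACTIVE" if transitions > threshold else "S_LOW"
    return state
```

The delay avoids repeatedly paying the hibernate/resume cost when service requests arrive in bursts.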
Method and apparatus for services that do not depend on an operating system

Technical Field

The present disclosure relates to computer systems, and in particular to cost-effective and scalable services in computer systems that do not depend on an operating system.

Background

A computer system is a layered device that includes a hardware layer, a firmware and operating system layer, and an application program layer. The hardware layer of a computer system is often called the physical platform. The physical platform may include a processor, chipset, communication channels, memory, plug-in boards, and systems.

The computer system may also include a manageability engine including a microcontroller, which is dedicated to allowing the computer system to be managed remotely over a communication network via a remote management console. The manageability engine supports remote management of the computer system even when the computer system is in a low power (standby/sleep) state.

Brief Description of the Drawings

The features of embodiments of the claimed subject matter will become more apparent from the following detailed description made with reference to the drawings, in which like numbers indicate like parts, and in which:

FIG. 1 is a block diagram of a system including a hardware-based service engine that does not depend on an operating system;

FIG. 2 is a block diagram of an embodiment of a system including a low-power, scalable execution container according to the principles of the present invention, where the execution container can be accessed both out-of-band and in-band;

FIG. 3 illustrates the system power states in the system shown in FIG. 2; and

FIG. 4 is a flowchart illustrating an embodiment of a method of using the execution subsystem shown in FIG.
2 to provide manageability services.

Although the following detailed description will be made with reference to exemplary embodiments of the claimed subject matter, many alternatives, modifications, and variations of these embodiments will be apparent to those skilled in the art. Accordingly, the claimed subject matter should be viewed broadly and is limited only by what is set forth in the appended claims.

Detailed Description

FIG. 1 is a block diagram of an embodiment of a system including a service engine 130 that does not depend on an operating system (OS). The system 100 includes a processor 101, a memory controller hub (MCH) 102, and an input/output (I/O) controller hub (ICH) 104. The MCH 102 includes the OS-independent service engine 130 and a memory controller 106 for controlling communication between the processor 101 and the memory 110. The processor 101 and the MCH 102 communicate through the system bus 116. In another embodiment, the functionality of the MCH 102 may be included in the processor 101, with the processor 101 directly coupled to the ICH 104 and the memory 110.

The OS-independent service engine 130 in the MCH 102 performs various services on behalf of, for example (but not limited to), management, security, and power applications. For example, through a network interface card (NIC) 122, the OS-independent service engine 130 can control out-of-band (OOB) access over a communication network. A portion of the memory 110 is dedicated to the OS-independent service engine 130, for example, to store instructions and runtime data. The MCH 102 protects this dedicated portion of the memory 110 from being accessed by the processor 101.

To reduce system power consumption, the system can include support for power management. For example, the method for providing power management discussed in the "Advanced Configuration and Power Interface Specification" (Revision 2.0c, August 25, 2003) includes six system power states, labeled S0-S5.
The power states range from state S0, in which the system is fully powered on and fully working, to state S5, in which the system is completely powered off. The other states, S1-S4, are standby/sleep or hibernate states. In a standby/sleep state, power consumption is reduced and the system appears to be off. However, the system retains enough context to enable the system to return to state S0 without a system restart.

In the standby state, to reduce power consumption, the monitor and hard disk are not powered. However, the information stored in volatile memory is not saved to non-volatile memory such as a hard disk. Therefore, if the power supply to the volatile memory is interrupted, the information stored in the volatile memory is lost. In the hibernate state, the information stored in the volatile memory is saved to non-volatile memory before power is removed from the hard disk and the monitor. When returning from the hibernate state, the information stored in the non-volatile memory is restored to the volatile memory, so that the system looks the same as it did before entering the hibernate state.

To support out-of-band access, the OS-independent service engine 130 is available in all system power states (S0-S5). However, the OS-independent service engine 130 adds extra cost to the computer system, and because the OS-independent service engine 130 consumes power while the computer system is in a standby/sleep power state, it increases system power consumption.

In contrast to providing a dedicated OS-independent service engine 130, an embodiment of the present invention provides a low-cost, low-power, and scalable architecture that allows out-of-band (OOB) access to and management of the computer system during all system power states.

The processor 101 may be any of a variety of processors, for example, a single-core Intel Pentium processor, a single-core Intel Celeron processor, an Intel XScale processor, a multi-core processor such as an Intel Pentium D dual-core processor, or any other type of processor.

The memory 110 may be dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), second-generation double data rate (DDR2) RAM, Rambus dynamic random access memory (RDRAM), or any other type of memory.

A high-speed chip-to-chip interconnect 114, such as a direct media interface (DMI), may be used to couple the ICH 104 to the MCH 102. Through two unidirectional channels, DMI supports a concurrent transfer rate of 2 gigabits per second.

The ICH 104 may include a storage I/O controller 120 for controlling communication with at least one storage device 112 coupled to the ICH 104. The storage device 112 may be, for example, a magnetic disk drive, a digital video disk (DVD) drive, a compact disk (CD) drive, a redundant array of independent disks (RAID), a tape drive, or another storage device. Using a serial storage protocol, such as Serial Attached Small Computer System Interface (SAS) or Serial Advanced Technology Attachment (SATA), the ICH 104 can communicate with the storage device 112 through a storage protocol interconnect 118. The ICH 104 may be coupled to a network interface controller (NIC) 122 to support communication over the communication network.

FIG. 2 is a block diagram of an embodiment of a system 200 that includes a low-power, scalable execution container 220 that can be accessed out-of-band and in-band in accordance with the principles of the present invention. One embodiment of the execution container 220 used by a management service will be described.
However, the invention is not intended to be limited to management services. Any platform service (a service that does not depend on the operating system) can use the execution container. Examples of platform services include security and power applications.

The system 200 includes a plurality of central processing units (CPUs) 265-1, ..., 265-N coupled to one or more input/output controller hubs (ICH) 270. In the illustrated embodiment, the multiple CPUs 265-1, ..., 265-N share the memory 202. The memory 202 may store a host operating system 206 shared by the multiple CPUs 265-1, ..., 265-N. In other embodiments, the host operating system 206 may be replaced with a hypervisor.

The system includes an execution subsystem that includes one or more service modules 204, a mailbox shared memory 208 in the memory 202, an execution subsystem wake-up module 275 in the ICH 270, a host operating system driver 115, and an execution container scheduler 282. The host operating system driver 115 allows applications running in the system 200 to communicate with services running in the execution container.

A service module may include a microkernel, an operating system, and a set of applications that represent the services being executed. In one embodiment, the execution container scheduler 282 includes microcode in each of the plurality of CPUs 265-1, ..., 265-N and coordination logic for use among the plurality of CPUs 265-1, ..., 265-N. The embodiment shown in FIG. 2 has one execution container with one or more service modules 204. In other embodiments, there may be multiple execution containers, each execution container having one or more service modules on a separate core.

In another embodiment, the system 200 may include virtual machines. A virtual machine is one of multiple discrete execution environments in the system 200.
Each virtual machine can execute an operating system and is isolated from the other virtual machines, so that each virtual machine "owns" all hardware resources on the system 200 from the user's perspective. Typically, a virtual machine monitor (VMM) provides the ability to share system hardware resources between the virtual machines. In some systems, the virtual machine monitor may emulate all of the hardware, or partially emulate some of the hardware. In other systems, instead of emulating the hardware, the virtual machine monitor can provide access to hardware resources through an application programming interface (API). Therefore, by using a VMM, one physical platform can be used as multiple "virtual" machines.

A portion of the memory 202 is dedicated to the service module 204. In an embodiment that includes an operating system or hypervisor, the service module 204 is not visible to the operating system 206. FIG. 2 illustrates an embodiment that includes a host operating system 206. Another portion of the main system memory 202 is the mailbox shared memory 208. The mailbox shared memory 208 is used for communication between the service module 204 and the host operating system 206 so that they can exchange information. For example, a service within the service module 204 may use the mailbox shared memory 208 to monitor whether an agent in the host operating system 206 is running. Through the mailbox shared memory 208, the agent can send periodic keep-alive packets to the service module 204. If the service module 204 detects that the agent has stopped sending keep-alive packets, it determines that the agent has stopped running, and the service module 204 can take appropriate action.

In one embodiment, the service module 204 includes a scheduler that can schedule service threads for a small time slice on one of the multiple CPUs.
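The keep-alive exchange through the mailbox shared memory described above can be modeled as a timestamp the host agent refreshes and the service module checks against a timeout. The class name and timeout value are illustrative assumptions of this sketch:

```python
class KeepAliveMonitor:
    """Service-module side of the mailbox keep-alive check (sketch)."""
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_seen = None

    def heartbeat(self, now):
        # The host OS agent writes a keep-alive into the mailbox shared memory.
        self.last_seen = now

    def agent_alive(self, now):
        # The service module polls the mailbox; a stale or absent timestamp
        # means the agent has stopped running and action can be taken.
        return self.last_seen is not None and (now - self.last_seen) <= self.timeout_s
```

Passing `now` explicitly keeps the sketch deterministic; a real implementation would read a platform clock.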
For example, a manageability service thread may be scheduled to service network packets received from the remote console 294 by the network interface card (NIC) 290 via the Internet 292.

The execution subsystem wake-up module 275 is included in the ICH 270. In one embodiment, the execution subsystem wake-up module 275 is implemented as hardware logic, and it is active during all power states, including all low power states. Whenever a request for service from the service module 204 is received through the network interface card (NIC) 290, the execution subsystem wake-up module 275 is used to wake up the service module 204. For example, the request may be a management request received from the remote console 294 through the NIC 290, or a timer request that may be received from the NIC 290.

A compressed image including the code (instructions) of the service module 204 may be stored in a non-volatile random access memory 280 that may be coupled to the ICH 270. The code may include a mini operating system (OS) and manageability applications.

In one embodiment, the service module 204 runs an embedded operating system, and may also run a conventional software stack. However, the environment of the service module 204 is not visible to the host operating system 206 running on the system. The host operating system 206 communicates with the service module 204 only via the mailbox shared memory 208, through the platform service device driver 115 in the host operating system 206. Therefore, to the operating system / virtual machine monitor, the service module 204 looks like a management controller and management firmware.

When needed, the execution container scheduler 282 schedules a time slot on one of the multiple CPUs 265-1, ..., 265-N so that the compressed code (instructions) of the service module 204 stored in the non-volatile memory 280 is loaded into the memory 202.
For example, in response to a manageability request received through the NIC 290, the code of the service module 204 may be loaded into the memory 202 and run by one of the plurality of CPUs 265-1, ..., 265-N in order to service the network packets.

In one embodiment, the non-volatile random access memory may be flash memory. In one embodiment, the compressed code of the service module 204 may be stored in the same non-volatile memory used to store the basic input output system (BIOS) used by the CPUs 265-1, ..., 265-N.

To achieve manageability, the service module 204 also accesses the ICH 270, for example, through an access channel over an input/output (I/O) bus. The I/O bus may be a System Management Bus (SMBus), a Universal Serial Bus (USB), a Peripheral Component Interconnect Express (PCIe) bus, or any other type of I/O bus. The access channel from the service module 204 to the ICH 270 allows the NIC 290 to send packets to, and receive packets from, the service module 204.

It is not necessary to execute all the capabilities of the subsystem in each of the various system power states. Therefore, in order to reduce the power consumption of the system 200, various capabilities may be placed in a sleep mode based on the current power state. In one embodiment, four working stages are identified, and each stage (system power state) can use one or more capabilities of the execution subsystem.

The system also includes non-volatile memory for performing fast "sleep" and "resume" of the execution subsystem. Sleep is used to save power. The non-volatile memory may be the same non-volatile memory in which the service module 204 is stored; however, in one embodiment, the non-volatile memory used for fast sleep has faster read/write characteristics. The non-volatile memory used for the initial storage/retrieval of the service module 204 does not require fast access.
During "sleep", the entire memory image is stored in the non-volatile memory, and the system then enters a low-power mode (standby power). During "resume", the image is copied from the non-volatile memory into the memory 202, and execution restarts from memory 202 in the state in which the sleep occurred.

In order to provide initial filtering of network packets received by the NIC 290, and to wake up the execution subsystem when a packet of interest is received, the execution subsystem wake-up module 275 is always available (active) in all system power states. In one embodiment, the execution subsystem wake-up module 275 includes a microcontroller, or logic that provides the functionality of a microcontroller. In the illustrated embodiment, the microcontroller is included in the ICH 270. In other embodiments, the microcontroller may be in a non-core portion of the processor (uncore), or in the NIC 290, which supports communication over a local area network (LAN), wireless (WiFi) network, microwave (WiMAX) network, or any other type of network such as the Internet.

FIG. 3 illustrates the system power states of the system 200 shown in FIG. 2.

The first system power state "V3" 300 (standard low-power state) is the lowest system power state. A capability that may be required is access to the system 200 through the network (Internet 292) in order to wake up the management subsystem. In state V3, the system 200 is powered off or in one of the standby/sleep power states, namely the S1-S5 power states discussed previously.
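The fast sleep/resume cycle described above (save the full memory image to non-volatile memory, then copy it back and continue) can be modeled with a short sketch. The class and attribute names are hypothetical, and the byte arrays merely stand in for memory 202 and the non-volatile memory:

```python
class ExecutionSubsystemImage:
    """Toy model of fast "sleep"/"resume": the memory image is persisted to
    non-volatile storage on sleep and copied back on resume."""

    def __init__(self, memory_size):
        self.memory = bytearray(memory_size)  # stands in for memory 202
        self.nvram = None                     # stands in for fast non-volatile memory
        self.sleeping = False

    def sleep(self):
        # Persist the entire image, then drop to standby power
        # (modeled by clearing the volatile memory).
        self.nvram = bytes(self.memory)
        self.memory = bytearray(len(self.memory))
        self.sleeping = True

    def resume(self):
        # Copy the image back into memory 202 and continue from the
        # state in which the sleep occurred.
        self.memory = bytearray(self.nvram)
        self.sleeping = False
```

The point of the design is that resume restores exactly the pre-sleep state, so execution can continue where it left off rather than re-booting the service module from its compressed image.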
While the system is in one of the S1-S5 power states, the CPUs 265-1, ..., 265-N are not active. Only the execution subsystem wake-up module 275 and the networking module in the NIC 290 are active (powered on), which allows the NIC 290 to process received network packets and allows the execution subsystem wake-up module 275 to receive management subsystem wake-up requests from the NIC 290.

The second system power state "V2" 302, entered after receiving a request from the network, is the next lowest system power state. In the system power state V2, although the system is powered off or in one of the standby/sleep power states (S1-S5), a remote management console can attempt to access some information. For example, this information may be needed by the remote console to identify the system in order to determine whether the system is to be serviced remotely. In response to a management request from the remote management console, the execution subsystem can be temporarily switched to the V2 system power state.

In the V2 state, the execution subsystem transitions from "sleep" mode to "resume" mode. To switch to "resume" mode, the image is copied from the non-volatile memory into the memory 202, and the service module 204 starts executing from memory 202 in the state in which the sleep occurred. In V2, the execution subsystem "image" is retrieved from the non-volatile memory so that the service module 204 and the execution container scheduler 282 can process the manageability requests received by the NIC 290 in network packets over the network.

In the system power state V2, in order to process the manageability network packets received by the NIC 290, the ICH 270, the NIC 290, one of the multiple CPUs 265-1, ..., 265-N and related logic, and the input/output links are powered (active).
To power the most appropriate resources, in one embodiment the execution container scheduler 282 wakes one of the CPUs 265-1, ..., 265-N that was most recently powered on (a warm core) and is in the lowest processor performance state (P-state), or voltage/frequency operating point. A P-state is a lower-power performance state within the active state of the CPU (core).

Thus, only the minimum logic in the ICH 270 for monitoring management network packets (the execution subsystem wake-up module 275) is powered. Part of the bootstrap processor is also powered. In one embodiment, an interrupt is generated when a management network packet is received. This interrupt activates a thread on the bootstrap processor that selects one of the CPUs to process the received management network packet.

The execution container scheduler 282 optimizes the trade-off among power, responsiveness, and impact on system resources. When the system 200 is in the system power state V2, the execution container scheduler 282 uses the most aggressive strategy to return to the system power state V3, that is, it puts the service module 204 to sleep as soon as the packet has been serviced.

However, the execution container scheduler 282 also tracks the number of transitions between the power state V3 and the power state V2. If too many transitions occur between the power state V3 and the power state V2, that is, if the number of transitions is greater than a predetermined threshold, the power strategy may be too aggressive. The execution container scheduler 282 may then wait longer in the power state V2 before transitioning to the power state V3.

The third system power state "V1" 304 (pre-operating-system / post-operating-system / basic input output system (BIOS)) is the state in which the system 200 is powered on but the operating system has not yet been installed or is not yet available.
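The transition-count hysteresis just described can be sketched as follows; the class and method names and the threshold value are illustrative assumptions, not part of the original description:

```python
class V2V3Policy:
    """Sketch of the scheduler's hysteresis: aggressively return to V3 after
    each serviced packet, but linger in V2 once V3<->V2 transitions exceed a
    predetermined threshold (names are illustrative)."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.transitions = 0

    def record_transition(self):
        # Called each time the subsystem moves between V3 and V2.
        self.transitions += 1

    def next_state_after_service(self):
        # Too many transitions means the aggressive policy is thrashing,
        # so wait longer in V2 before dropping back to V3.
        return "V2" if self.transitions > self.threshold else "V3"
```

This is a classic anti-thrashing pattern: the cheap default (sleep immediately) is overridden only when the observed transition rate shows it is costing more than it saves.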
Capabilities that may be required in the V1 system power state include serial redirection and media redirection. The CPUs 265-1, ..., 265-N, the ICH 270, the non-volatile memory 280, the storage device 285, and the NIC 290 are powered. The execution container scheduler 282 provides time slices for the service module 204, which runs independently of the host operating system 206.

The fourth system power state "V0" 306, concurrent with the operating system, is the highest power stage. In the V0 stage, the entire system 200 is powered on and the operating system is available. Capabilities that may be required in the V0 system power state include network traffic filtering and circuit breakers.

When the system is in the system power state V0 (that is, the normal operating mode in which the operating system / virtual machine manager is running and available), the execution container scheduler 282 ensures that executing management functions does not unduly affect the host virtual machine manager / operating system.

Under normal idle conditions, the execution subsystem usually does not receive many management requests, so it consumes few CPU cycles. However, when the service module 204 is processing computationally intensive tasks (e.g., a remote keyboard, video, and mouse session), the execution subsystem consumes more CPU cycles.

The execution container scheduler 282 limits the CPU time slice allocated to the service module 204 to a configured maximum percentage of the total CPU time. In one embodiment, only five percent of the CPU time can be allocated to the service module 204. The execution container scheduler 282 also ensures that the service module 204 gets at least a minimum configured time slice. This ensures that a malfunctioning virtual machine manager / operating system will not use all available CPU time.

The execution container scheduler 282 schedules service threads among the multiple different cores (CPUs) as evenly as possible.
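The time-slice cap and the even spreading of service threads can be sketched together. The five-percent maximum comes from the text above; the `min_pct` value and both function names are assumptions made for the example:

```python
def service_time_slice(requested_pct, max_pct=5.0, min_pct=1.0):
    """Clamp the CPU time granted to the service module: never more than the
    configured maximum (five percent in one embodiment), never less than a
    configured minimum (min_pct is a hypothetical value)."""
    return max(min_pct, min(requested_pct, max_pct))


def pick_core(core_loads):
    """Schedule the next service thread on the least-loaded core, spreading
    the load as evenly as possible across the cores."""
    return min(range(len(core_loads)), key=lambda i: core_loads[i])
```

The clamp enforces both guarantees at once: a compute-heavy service task cannot starve the host, and a misbehaving host cannot starve the service module.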
This scheduling method distributes the load across the cores so as not to disturb the way the host operating environment allocates and uses the cores. For example, in one embodiment in which the host operating environment is a hypervisor and cores have been assigned to specific virtual machines, the execution container scheduler 282 schedules the service threads as uniformly as possible among the multiple different cores so as not to load one core more heavily than the others.

After the service module 204 receives notification of a manageability packet received from the NIC 290, the service module 204 can communicate with the NIC 290 using a standard networking driver (a PCIe or USB network device driver) or by using PCIe Vendor Defined Messages (VDM).

The execution subsystem is a modular architecture that meets the needs of the above four system power states (V0-V3) and provides a scalable architecture.

FIG. 4 is a flowchart illustrating an embodiment of a method for using the execution subsystem shown in FIG. 2 to provide management services.

At block 400, the execution subsystem wake-up module 275 monitors the NIC 290 for received network packets to be processed by the execution subsystem. If a network packet for the execution subsystem is received, the process continues to block 402. If not, the process remains at block 400 to wait for network packets related to manageability.

At block 402, if the current power state is V3, the process continues to block 404. If not, the process continues to block 414.

At block 404, in order to process the network packets related to manageability received by the NIC 290, the ICH 270, the NIC 290, and one of the multiple CPUs 265-1, ..., 265-N are powered on. By copying the image from the non-volatile memory 280 into the memory 202, the execution subsystem transitions from the "sleep" mode to the "resume" mode.
Processing continues to block 406.

At block 406, the execution subsystem processes the network packet. Processing continues to block 408.

At block 408, if the number of transitions between power state V3 and power state V2 is greater than a predetermined threshold, indicating that the power strategy may be too aggressive, the process continues to block 412. If it is not greater than the predetermined threshold, the process continues to block 410.

At block 410, the power state is switched back to power state V3: the manageability subsystem is switched to "sleep" mode, and the ICH 270 and the selected one of the plurality of CPUs 265-1, ..., 265-N are powered off. The process continues to block 400 to wait for another network packet to be processed.

At block 412, the execution container scheduler 282 remains in power state V2 for a period of time before transitioning to power state V3. The process continues to block 400 to wait for another network packet to be processed.

At block 414, if the current power state is V2, the process continues to block 406 to process the received network packet. If not, the process continues to block 416.

At block 416, if the current power state is V1, the process continues to block 418. If not, the process continues to block 420.

At block 418, in power state V1, the operating system is either not yet installed or not available. The CPUs 265-1, ..., 265-N, the ICH 270, the non-volatile memory 280, the storage device 285, and the NIC 290 are powered on. The execution container scheduler 282 provides a time slice for the service module 204, which runs independently of the host operating system 206, in order to process the received network packet. The process continues to block 400 to process the next received network packet.

At block 420, the current power state is V0: the entire computer system 200 is powered and the operating system is available. Time slices are allocated to the execution subsystem to process the received network packets.
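The dispatch in blocks 402-420 of FIG. 4 can be sketched as a small state machine. The action strings are hypothetical stand-ins for the hardware operations; only the control flow mirrors the flowchart:

```python
def dispatch_packet(power_state, transitions, threshold):
    """Return the actions taken for one manageability packet, mirroring
    blocks 402-420 of the flowchart (illustrative only)."""
    actions = []
    if power_state in ("V3", "V2"):
        if power_state == "V3":
            actions.append("power_on_and_resume")   # block 404
        actions.append("process_packet")            # block 406
        if transitions > threshold:                 # block 408
            actions.append("linger_in_V2")          # block 412
        else:
            actions.append("sleep_to_V3")           # block 410
    elif power_state == "V1":
        actions.append("schedule_service_slice")    # block 418
    else:  # "V0"
        actions.append("allocate_time_slices")      # block 420
    return actions
```

Each branch returns to the monitoring loop (block 400) after the listed actions, just as the flowchart loops back after every packet.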
The process continues to block 400 to process the next received network packet.

It will be apparent to those skilled in the art that the methods involved in the embodiments of the present invention may be implemented by computer program products including computer-usable media. For example, such a computer-usable medium may include a read-only memory device (such as a compact disk read only memory (CD-ROM) disk or a conventional ROM device) or a computer magnetic disk having computer-readable program code stored thereon.

Although the present invention has been specifically shown and described with reference to embodiments thereof, those skilled in the art will understand that various modifications in form and detail can be made to these embodiments without departing from the scope of the embodiments of the invention covered by the appended claims.
Semiconductor devices including stacked semiconductor dies and associated systems and methods are disclosed herein. In one embodiment, a semiconductor device includes a first semiconductor die coupled to a package substrate and a second semiconductor die stacked over the first semiconductor die and laterally offset from the first semiconductor die. The second semiconductor die can accordingly include an overhang portion that extends beyond a side of the first semiconductor die and faces the package substrate. In some embodiments, the second semiconductor die includes bond pads at the overhang portion that are electrically coupled to the package substrate via conductive features disposed therebetween. In certain embodiments, the first semiconductor die can include second bond pads electrically coupled to the package substrate via wire bonds.
CLAIMS

I/We claim:

1. A semiconductor device, comprising:
a package substrate;
a first semiconductor die coupled to the package substrate and having an upper surface facing away from the package substrate, the upper surface including first bond pads;
wire bonds electrically coupling the first bond pads of the first semiconductor die to the package substrate;
a second semiconductor die coupled to the upper surface of the first semiconductor die and having a lower surface facing the package substrate, wherein the second semiconductor die extends laterally beyond at least one side of the first semiconductor die to define an overhang portion of the second semiconductor die, and wherein the lower surface includes second bond pads at the overhang portion; and
conductive features electrically coupling the second bond pads of the second semiconductor die to the package substrate.

2. The semiconductor device of claim 1, further comprising:
a molded material over the package substrate and at least partially around the first semiconductor die, the wire bonds, the second semiconductor die, and/or the conductive features, wherein the second semiconductor die includes an upper surface opposite the lower surface, and wherein the molded material does not extend away from the package substrate beyond a plane coplanar with the upper surface of the second semiconductor die.

3. The semiconductor device of claim 2 wherein the molded material encapsulates the first semiconductor die, the wire bonds, and the conductive features.

4. The semiconductor device of claim 1 wherein the conductive features are conductive pillars extending between the second bond pads and the package substrate.

5.
The semiconductor device of claim 1 wherein the package substrate is a redistribution structure having a first surface and a second surface opposite the first surface, wherein the first surface includes first conductive contacts and second conductive contacts, wherein the second surface includes third conductive contacts, wherein the first conductive contacts and second conductive contacts are electrically coupled to corresponding ones of the third conductive contacts by conductive lines extending through and/or on an insulating material, and wherein the redistribution structure does not include a pre-formed substrate.

6. The semiconductor device of claim 5 wherein the wire bonds electrically couple the first bond pads of the first semiconductor die to corresponding ones of the first conductive contacts of the redistribution structure, and wherein the conductive features are copper pillars formed on the second conductive contacts and electrically coupling the second bond pads of the second semiconductor die to corresponding ones of the second conductive contacts.

7. The semiconductor device of claim 1 wherein a maximum height of the wire bonds above the upper surface of the first semiconductor die is less than or equal to a height of the second semiconductor die above the upper surface of the first semiconductor die.

8. The semiconductor device of claim 1 wherein the upper surface of the first semiconductor die includes a first portion and a second portion, wherein the second semiconductor die is over the first semiconductor die only at the second portion, and wherein the first bond pads are located at the first portion.

9. The semiconductor device of claim 1 wherein the first semiconductor die includes opposing first sides and opposing second sides, and wherein the second semiconductor die extends laterally beyond only one of the first sides or one of the second sides.

10.
The semiconductor device of claim 1 wherein the first semiconductor die includes opposing first sides and opposing second sides, and wherein the second semiconductor die extends laterally beyond one of the first sides and one of the second sides.

11. The semiconductor device of claim 1 wherein the first semiconductor die and the second semiconductor die have the same shape and dimensions.

12. The semiconductor device of claim 1, further comprising:
a first die-attach material between the first semiconductor die and the package substrate; and
a second die-attach material between the first semiconductor die and the second semiconductor die.

13. The semiconductor device of claim 1 wherein the wire bonds are first wire bonds, wherein the conductive features are first conductive features, and further comprising:
a third semiconductor die coupled to an upper surface of the second semiconductor die and having an upper surface facing away from the second semiconductor die and the package substrate, the upper surface of the third semiconductor die including third bond pads; and
second wire bonds electrically coupling the third bond pads of the third semiconductor die to the package substrate.

14. The semiconductor device of claim 13, further comprising a die-attach material on the upper surface of the second semiconductor die, wherein the third semiconductor die is coupled to the second semiconductor die via the die-attach material, and wherein the third semiconductor die is over substantially all of the upper surface of the second semiconductor die.

15.
The semiconductor device of claim 13, further comprising:
a fourth semiconductor die coupled to the upper surface of the third semiconductor die and having a lower surface facing the package substrate, wherein the fourth semiconductor die extends laterally beyond at least one side of the third semiconductor die to define an overhang portion of the fourth semiconductor die, and wherein the lower surface of the fourth semiconductor die includes fourth bond pads at the overhang portion of the fourth semiconductor die; and
second conductive features electrically coupling the fourth bond pads of the fourth semiconductor die to the package substrate.

16. The semiconductor device of claim 15 wherein the fourth semiconductor die has an upper surface opposite the lower surface, and further comprising a molded material over the package substrate, wherein the molded material does not extend away from the package substrate beyond a plane coplanar with the upper surface of the fourth semiconductor die.

17. The semiconductor device of claim 1 wherein the package substrate is at least one of an interposer, a printed circuit board, a dielectric spacer, or another semiconductor die.

18.
A semiconductor device, comprising:
a redistribution structure having a first surface and a second surface opposite the first surface, wherein the first surface includes first conductive contacts and second conductive contacts, wherein the second surface includes third conductive contacts, and wherein the first conductive contacts and the second conductive contacts are electrically coupled to corresponding ones of the third conductive contacts by conductive lines extending through and/or on an insulating material;
a first semiconductor die coupled to the redistribution structure and having first bond pads;
wire bonds electrically coupling the first bond pads to the first conductive contacts of the redistribution structure;
a second semiconductor die stacked over the first semiconductor die and laterally offset from the first semiconductor die, wherein the second semiconductor die includes a surface having a first portion over the first semiconductor die and a second portion over the second conductive contacts of the redistribution structure, and wherein the second portion includes second bond pads; and
conductive pillars coupling the second conductive contacts to the second bond pads.

19. The semiconductor device of claim 18 wherein the first semiconductor die and the second semiconductor die have the same planform shape.

20. The semiconductor device of claim 19 wherein the first semiconductor die and the second semiconductor die are identical, and wherein an arrangement of the first bond pads on the first semiconductor die is identical to an arrangement of the second bond pads on the second semiconductor die.

21. The semiconductor device of claim 18 wherein the first semiconductor die includes a pair of opposing sides, and wherein the second semiconductor die is laterally offset from the first semiconductor die only along an axis extending between the pair of opposing sides and parallel to the pair of opposing sides.

22.
The semiconductor device of claim 21 wherein the first conductive contacts of the redistribution structure are spaced laterally outward from one of the pair of opposing sides, and wherein the second conductive contacts of the redistribution structure are spaced laterally outward from the other of the pair of opposing sides.

23. The semiconductor device of claim 19, further comprising:
a first die-attach material coupling the first semiconductor die to the redistribution structure; and
a second die-attach material coupling the second semiconductor die to the first semiconductor die, wherein the second die-attach material is the same as the first die-attach material.

24. The semiconductor device of claim 18 wherein the redistribution structure does not include a pre-formed substrate, and wherein a thickness of the redistribution structure between the first and second surfaces is less than about 50 μm.

25. A method of manufacturing a semiconductor device, the method comprising:
forming conductive pillars on a first surface of a package substrate and electrically coupled to the package substrate;
coupling a first semiconductor die to the package substrate;
electrically coupling first bond pads of the first semiconductor die to the package substrate via wire bonds;
coupling a second semiconductor die to the first semiconductor die and the conductive pillars, wherein the conductive pillars electrically couple second bond pads of the second semiconductor die to the package substrate; and
forming a molded material over the first surface of the package substrate and at least partially around the first semiconductor die, the conductive pillars, the wire bonds, and the second semiconductor die.

26. The method of claim 25 wherein the molded material does not extend above the second semiconductor die relative to the package substrate.

27.
The method of claim 25 wherein the package substrate is a redistribution structure, and further comprising:
forming the redistribution structure on a carrier;
after forming the molded material, removing the carrier to expose a second surface of the redistribution structure; and
forming a plurality of electrically conductive elements on the second surface, the electrically conductive elements electrically coupled, via the redistribution structure, to one or more of the conductive pillars and/or wire bonds.

28. The method of claim 25 wherein coupling the second semiconductor die to the conductive pillars includes thermo-compression bonding the second bond pads of the second semiconductor die to the conductive pillars.
SEMICONDUCTOR DEVICE HAVING LATERALLY OFFSET STACKED SEMICONDUCTOR DIES

TECHNICAL FIELD

[0001] The present disclosure generally relates to semiconductor devices. In particular, the present technology relates to semiconductor devices having a semiconductor die stack that includes laterally offset semiconductor dies, and associated systems and methods.

BACKGROUND

[0002] Microelectronic devices, such as memory devices, microprocessors, and light emitting diodes, typically include one or more semiconductor dies mounted to a substrate and encased in a protective covering. The semiconductor dies include functional features, such as memory cells, processor circuits, interconnecting circuitry, etc. Semiconductor die manufacturers are under increasing pressure to reduce the volume occupied by semiconductor dies and yet increase the capacity and/or speed of the resulting encapsulated assemblies. To meet these demands, semiconductor die manufacturers often stack multiple semiconductor dies vertically on top of each other to increase the capacity or performance of a microelectronic device within the limited volume on the circuit board or other element to which the semiconductor dies are mounted.

[0003] In some semiconductor die stacks, stacked dies are directly electrically interconnected (e.g., using through-silicon vias (TSVs) or flip-chip bonding) for providing an electrical connection to the circuit board or other element to which the dies are mounted. However, interconnecting the dies in this manner requires additional processing steps to create the vias and/or metallization features necessary to interconnect the dies. In other semiconductor die stacks, the stacked dies are wire bonded to the circuit board or other element. While using wire bonds can avoid the cost and complexity associated with interconnecting the dies, wire bonds increase the total height of the die stack because they loop above each die in the stack, including the uppermost die.
BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Figures 1A and 1B are a cross-sectional view and a top plan view, respectively, illustrating a semiconductor device in accordance with embodiments of the present technology.

[0005] Figures 2A-2J are cross-sectional views illustrating a semiconductor device at various stages of manufacturing in accordance with embodiments of the present technology.

[0006] Figures 3A and 3B are a cross-sectional view and top plan view, respectively, illustrating a semiconductor device in accordance with embodiments of the present technology.

[0007] Figure 4 is a top plan view of a semiconductor device in accordance with an embodiment of the present technology.

[0008] Figure 5 is a schematic view of a system that includes a semiconductor device configured in accordance with embodiments of the present technology.

DETAILED DESCRIPTION

[0009] Specific details of several embodiments of semiconductor devices are described below. In several of the embodiments described below, a semiconductor device includes a first semiconductor die coupled to a package substrate and a second semiconductor die stacked over the first semiconductor die and laterally offset from the first semiconductor die. Accordingly, the second semiconductor die can include an overhang portion that extends beyond at least one side of the first semiconductor die. In some embodiments, the second semiconductor die is stacked over only a first portion of the first semiconductor die and not a second portion of the first semiconductor die. In certain embodiments, (a) bond pads of the first semiconductor die are located at the first portion and electrically coupled to the package substrate via wire bonds, and (b) bond pads of the second semiconductor die are located at the overhang portion and electrically coupled to the package substrate via conductive pillars.
Because bond pads of both the first and second semiconductor dies are directly electrically coupled to the package substrate, the formation of electrical interconnections between the stacked dies is not necessary. Moreover, the height of the semiconductor device is not limited by the height of the wire bonds, since the wire bonds are only coupled to the first semiconductor die and need not extend beyond the upper surface of the second semiconductor die.

[0010] In the following description, numerous specific details are discussed to provide a thorough and enabling description for embodiments of the present technology. One skilled in the relevant art, however, will recognize that the disclosure can be practiced without one or more of the specific details. In other instances, well-known structures or operations often associated with semiconductor devices are not shown, or are not described in detail, to avoid obscuring other aspects of the technology. In general, it should be understood that various other devices, systems, and methods in addition to those specific embodiments disclosed herein may be within the scope of the present technology.

[0011] As used herein, the terms "vertical," "lateral," "upper," and "lower" can refer to relative directions or positions of features in the semiconductor devices in view of the orientation shown in the Figures. For example, "upper" or "uppermost" can refer to a feature positioned closer to the top of a page than another feature. These terms, however, should be construed broadly to include semiconductor devices having other orientations, such as inverted or inclined orientations where top/bottom, over/under, above/below, up/down, and left/right can be interchanged depending on the orientation.

[0012] Figure 1A is a cross-sectional view, and Figure 1B is a top plan view, illustrating a semiconductor device 100 ("device 100") in accordance with an embodiment of the present technology.
With reference to Figure 1A, the device 100 includes a first semiconductor die 110 and a second semiconductor die 120 (collectively "semiconductor dies 110, 120") carried by a package substrate 130. The semiconductor dies 110, 120 can each have integrated circuits or components, data storage elements, processing components, and/or other features manufactured on semiconductor substrates. For example, the semiconductor dies 110, 120 can include integrated memory circuitry and/or logic circuitry, which can include various types of semiconductor components and functional features, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, other forms of integrated circuit memory, processing circuits, imaging components, and/or other semiconductor features. In some embodiments, the semiconductor dies 110, 120 can be identical (e.g., memory dies manufactured to have the same design and specifications), but in other embodiments the semiconductor dies 110, 120 can be different from each other (e.g., different types of memory dies or a combination of controller, logic, and/or memory dies).[0013] The first semiconductor die 110 includes a lower surface 113b facing the package substrate 130 and an upper surface 113a opposite the lower surface 113b. Similarly, the second semiconductor die 120 includes a lower surface 123b facing the upper surface 113a of the first semiconductor die 110 and the package substrate 130, and an upper surface 123a opposite the lower surface 123b. In the embodiment illustrated in Figure 1A, the second semiconductor die 120 is stacked over the first semiconductor die 110 such that a portion of the lower surface 123b of the second semiconductor die 120 is over (e.g., directly above and/or adjacent to) the upper surface 113a of the first semiconductor die 110. 
That is, the second semiconductor die 120 is laterally offset from the first semiconductor die 110 such that the second semiconductor die 120 includes an overhang portion 124 that is not positioned over the first semiconductor die 110, and the first semiconductor die 110 includes a corresponding open portion 114 where the second semiconductor die 120 is not positioned over the first semiconductor die 110. More particularly, with reference to Figure 1B, the first semiconductor die 110 can include opposing first sides 116 and opposing second sides 118, and the second semiconductor die 120 can extend beyond only one of the first sides 116 (shown in phantom in Figure 1B) of the first semiconductor die 110 (e.g., in a direction along an axis X1 generally parallel to second sides 118) to define the overhang portion 124. In other embodiments (e.g., as shown in Figure 4), the second semiconductor die 120 can extend beyond more than one of the first sides 116 and/or second sides 118 of the first semiconductor die 110 to define the overhang portion 124.

[0014] The size, shape, and relative extent of the open portion 114 of the first semiconductor die 110 and the overhang portion 124 of the second semiconductor die 120 depend at least on the relative dimensions (e.g., width, thickness, and length) and positioning (e.g., lateral offset) of the semiconductor dies 110, 120. As shown in the top plan view of Figure 1B, for example, the semiconductor dies 110, 120 can each have the same rectangular planform shape with the same or substantially similar dimensions. Accordingly, the open portion 114 and the overhang portion 124 can both have rectangular planform shapes with the same or substantially similar dimensions. However, in other embodiments, the shape, size, and offset of the semiconductor dies 110, 120 can differ. For example, the first semiconductor die 110 and/or second semiconductor die 120 can be circular, square, polygonal, and/or other suitable shapes.
Accordingly, the open portion 114 of the first semiconductor die 110 and/or the overhang portion 124 of the second semiconductor die 120 can have different relative shapes and/or sizes.

[0015] The first semiconductor die 110 further includes first bond pads 112 on (e.g., exposed at) the upper surface 113a at the open portion 114, and facing away from the package substrate 130. Similarly, the second semiconductor die 120 includes second bond pads 122 on the lower surface 123b at the overhang portion 124, and facing the package substrate 130. That is, the semiconductor dies 110, 120 can be arranged in a face-to-face configuration in which the bond pads of each semiconductor die face opposite directions. As illustrated in Figure 1B, the first and second bond pads 112, 122 (collectively "bond pads 112, 122"; the bond pads 122 of the second semiconductor die are shown in phantom in Figure 1B) can each have rectilinear shapes and can be formed in a single column along one side of the semiconductor dies 110, 120, respectively. However, in other embodiments, the bond pads 112, 122 can have any other shape or configuration. For example, the bond pads 112, 122 can be circular, polygonal, etc., and may be arranged in multiple rows and/or columns, along more than one side of the semiconductor dies 110, 120, etc.

[0016] As shown in Figure 1A, the device 100 includes only two semiconductor dies. However, in other embodiments, the device 100 may include any number of semiconductor dies.
For example, the device 100 may include one or more additional semiconductor dies stacked on the first semiconductor die 110 and/or second semiconductor die 120, or the device 100 may have other semiconductor dies coupled to the package substrate 130 adjacent to the first semiconductor die 110 and/or second semiconductor die 120.

[0017] Referring again to Figure 1A, the device 100 can further include a first die-attach material 142 formed at least partially between the lower surface 113b of the first semiconductor die 110 and the package substrate 130, and a second die-attach material 144 formed at least partially between the upper surface 113a of the first semiconductor die 110 and the lower surface 123b of the second semiconductor die 120. The first and second die-attach materials 142, 144 can be, for example, adhesive films (e.g., die-attach films), epoxies, tapes, pastes, or other suitable materials. In some embodiments, the first and second die-attach materials 142, 144 are the same material and/or have substantially the same thickness. As shown in the embodiment of Figure 1A, the second die-attach material 144 can extend at least partially onto the overhang portion 124 of the second semiconductor die 120. However, in other embodiments, the second die-attach material 144 can extend only between the first semiconductor die 110 and the second semiconductor die 120. Likewise, in some embodiments, the second die-attach material 144 can extend at least partially onto the open portion 114 of the first semiconductor die 110.

[0018] The package substrate 130 can include a redistribution structure, an interposer, a printed circuit board, a dielectric spacer, another semiconductor die (e.g., a logic die), or another suitable substrate.
More specifically, in the embodiment illustrated in Figure 1A, the package substrate 130 has a first surface 133a and a second surface 133b opposite the first surface 133a, and includes an insulating material 135 that insulates conductive portions of the package substrate 130. The conductive portions of the package substrate can include first contacts 132 and second contacts 134 in and/or on the insulating material 135 and exposed at the first surface 133a. As is more clearly illustrated in Figure 1B, the first contacts 132 are spaced laterally outward from (e.g., outboard of) one of the first sides 116 of the first semiconductor die 110. The second contacts 134 (obscured in Figure 1B) can be spaced laterally outward from the other of the first sides 116 and below the overhang portion 124 of the second semiconductor die 120. In some embodiments, the second contacts 134 are vertically aligned with (e.g., superimposed below) the second bond pads 122 of the second semiconductor die 120.

[0019] The conductive portions of the package substrate 130 can also include (a) conductive third contacts 136 in and/or on the insulating material 135 and exposed at the second surface 133b of the package substrate 130, and (b) conductive lines 138 (e.g., vias and/or traces) extending within and/or on the insulating material 135 to electrically couple individual ones of the first contacts 132 and second contacts 134 to corresponding ones of the third contacts 136. In some embodiments, one or more of the third contacts 136 can be positioned laterally outboard of (e.g., fanned out from) the corresponding first contacts 132 or second contacts 134 to which the third contacts 136 are electrically coupled.
Positioning at least some of the third contacts 136 laterally outboard of the first contacts 132 and/or second contacts 134 facilitates connection of the device 100 to other devices and/or interfaces having connections with a greater pitch than that of the first semiconductor die 110 and/or second semiconductor die 120. In some embodiments, an individual one of the third contacts 136 can be electrically coupled, via corresponding conductive lines 138, to more than one of the first contacts 132 and/or second contacts 134. In this manner, the device 100 may be configured such that individual pins of the semiconductor dies 110, 120 are individually isolated and accessible (e.g., signal pins) via separate third contacts 136, and/or configured such that multiple pins are collectively accessible via the same third contact 136 (e.g., power supply or ground pins). In certain embodiments, the first contacts 132, second contacts 134, third contacts 136, and conductive lines 138 can be formed from one or more conductive materials such as copper, nickel, solder (e.g., SnAg-based solder), conductor-filled epoxy, and/or other electrically conductive materials.

[0020] In some embodiments, the package substrate 130 is a redistribution structure that does not include a pre-formed substrate (i.e., a substrate formed apart from a carrier wafer and then subsequently attached to the carrier wafer). In such embodiments, and as described in further detail below with reference to Figures 2A-2D, the insulating material 135 can comprise, for example, one or more layers of a suitable dielectric material (e.g., a passivation material) that are additively formed one layer on top of another. Likewise, the conductive portions of the redistribution structure can be additively formed via a suitable build-up process. In embodiments in which the redistribution structure does not include a pre-formed substrate, the package substrate 130 can be very thin.
For example, in some such embodiments, a distance D between the first and second surfaces 133a, 133b of the package substrate 130 can be less than 50 μm. In certain embodiments, the distance D is approximately 30 μm, or less than 30 μm. Therefore, the overall size of the semiconductor device 100 can be reduced as compared to, for example, devices including a conventional redistribution layer formed over a pre-formed substrate. However, the thickness of the package substrate 130 is not limited. In other embodiments, the package substrate 130 can include different features and/or the features can have a different arrangement.

[0021] The device 100 further includes electrical connectors 108 (e.g., solder balls, conductive bumps, conductive pillars, conductive epoxies, and/or other suitable electrically conductive elements) electrically coupled to the third contacts 136 of the package substrate 130 and configured to electrically couple the device 100 to external devices or circuitry (not shown). In some embodiments, the electrical connectors 108 form a ball grid array on the second surface 133b of the package substrate 130. In certain embodiments, the electrical connectors 108 can be omitted and the third contacts 136 can be directly connected to external devices or circuitry.

[0022] The device 100 further includes (a) wire bonds 104 electrically coupling the first bond pads 112 of the first semiconductor die 110 to the first contacts 132 of the package substrate 130, and (b) conductive features 106 electrically coupling the second bond pads 122 of the second semiconductor die 120 to the second contacts 134 of the package substrate 130. Notably, in the embodiment illustrated in Figure 1A, a maximum height of the wire bonds 104 above the package substrate 130 (or, e.g., the upper surface 113a of the first semiconductor die 110) is not greater than a height of the second semiconductor die 120 above the same.
That is, the wire bonds 104 do not extend upward beyond a plane coplanar with the upper surface 123a of the second semiconductor die 120. Moreover, as illustrated in the top plan view of Figure 1B, each first contact 132 can be electrically coupled to only a single one of the bond pads 112 of the first semiconductor die 110 via a single one of the wire bonds 104. However, in other embodiments, individual ones of the first contacts 132 can be electrically coupled via two or more wire bonds 104 to two or more of the first bond pads 112 (e.g., for providing a common signal to two pins of the first semiconductor die 110). The conductive features 106 can have various suitable structures, such as pillars, columns, studs, bumps, etc., and can be made from copper, nickel, solder (e.g., SnAg-based solder), conductor-filled epoxy, and/or other electrically conductive materials. In certain embodiments, the conductive features 106 are solder joints, while in other embodiments the conductive features 106 are copper pillars. In other embodiments, the conductive features 106 can include more complex structures, such as bump-on-nitride structures, or other known flip-chip mounting structures.

[0023] Notably, the second semiconductor die 120 need not be directly electrically interconnected with or through the first semiconductor die 110 since the second semiconductor die 120 is directly connected to the package substrate 130. In contrast, many conventional semiconductor devices require relatively complex and expensive interconnection structures for coupling stacked semiconductor dies to a package substrate. For example, many known semiconductor devices include through-silicon vias (TSVs) that extend through lower semiconductor dies in a stack to electrically connect upper dies in the stack to a package substrate. Such devices not only require the formation of TSVs, but also the formation of interconnects (e.g., under bump metallization features, solder connections, etc.)
for connecting the TSVs of adjacent semiconductor dies in the stack. Likewise, many known semiconductor devices include stacked semiconductor dies that are arranged face-to-face and flip-chip bonded together. Again, such devices require the formation of interconnect structures that connect the bond pads of facing dies and, in many instances, the formation of a redistribution layer (RDL) between the semiconductor dies to provide a suitable mapping between the bond pads of each die. The device 100 described herein does not require direct electrical interconnection between the semiconductor dies 110, 120, and therefore avoids the cost and complexity associated with such interconnection structures. For example, in lieu of forming an RDL between the semiconductor dies 110, 120, the device 100 can simply include the second die-attach material 144 between the semiconductor dies 110, 120.

[0024] As further shown in Figure 1A, the device 100 includes a molded material 146 over the first surface 133a of the package substrate 130 (the molded material 146 is not shown in Figure 1B for ease of illustration). The molded material 146 at least partially surrounds the first semiconductor die 110, the second semiconductor die 120, the wire bonds 104, and/or the conductive features 106 to protect one or more of these components from contaminants and/or physical damage. For example, in the embodiment illustrated in Figure 1A, the molded material 146 encapsulates (e.g., seals) the first semiconductor die 110, wire bonds 104, and conductive features 106, while only the upper surface 123a of the second semiconductor die 120 is exposed from the molded material 146.

[0025] Notably, the molded material 146 does not extend above the second semiconductor die 120 relative to the package substrate 130 (e.g., above a plane coplanar with the upper surface 123a of the second semiconductor die 120), while also substantially encapsulating the wire bonds 104 and conductive features 106.
In contrast, many conventional semiconductor devices include a stack of semiconductor dies each wire-bonded to a package substrate. In such devices, the wire bonds of the uppermost semiconductor die in the stack extend beyond the uppermost die to connect to the bond pads of that die (e.g., in a manner similar to the wire bonds 104 in Figure 1A, which include a "loop-height" above the upper surface 113a of the first semiconductor die 110). However, because the second semiconductor die 120 is directly electrically coupled to the package substrate 130 via the conductive features 106, rather than via wire bonds, the molded material 146 need not extend above the second semiconductor die 120.

[0026] Accordingly, the height (e.g., thickness) of the device 100 and the total amount of molded material 146 used in the device 100 may be reduced. Reducing the amount of molded material 146 in the device 100 can reduce the tendency of the device 100 to warp in response to changing temperatures. In particular, molded materials generally have a greater coefficient of thermal expansion (CTE) than silicon semiconductor dies. Therefore, reducing the volume of the molded material 146 by reducing the height of the molded material can lower the overall average CTE for the device 100 (e.g., by increasing the relative volume occupied by the semiconductor dies 110, 120). However, in other embodiments, the molded material 146 may extend above the second semiconductor die 120. For example, in some embodiments, the molded material 146 can extend slightly above the second semiconductor die 120 so as to cover the upper surface 123a, while still reducing the overall height of the device 100 as compared to, for example, a semiconductor device in which the uppermost semiconductor die is wire bonded to a package substrate.

[0027] Furthermore, in some embodiments, the molded material 146 can at least partially fill the space below the overhang portion 124 of the second semiconductor die 120.
The molded material 146 can therefore support the overhang portion 124 to prevent warpage of, or other damage to, the second semiconductor die 120 resulting from external forces. Moreover, in embodiments where the package substrate 130 is a redistribution structure that does not include a pre-formed substrate, the molded material 146 can also provide the desired structural strength for the device 100. For example, the molded material 146 can be selected to prevent the device 100 from warping, bending, etc., as external forces are applied to the device 100. As a result, in some embodiments, the redistribution structure can be made very thin (e.g., less than 50 μm) since the redistribution structure need not provide the device 100 with a great deal of structural strength. Therefore, the overall height (e.g., thickness) of the device 100 can further be reduced.

[0028] Figures 2A-2J are cross-sectional views illustrating various stages in a method of manufacturing semiconductor devices 100 in accordance with embodiments of the present technology. Generally, a semiconductor device 100 can be manufactured, for example, as a discrete device or as part of a larger wafer or panel. In wafer-level or panel-level manufacturing, a larger semiconductor device is formed before being singulated to form a plurality of individual devices. For ease of explanation and understanding, Figures 2A-2J illustrate the fabrication of two semiconductor devices 100.
However, one skilled in the art will readily understand that the fabrication of semiconductor devices 100 can be scaled to the wafer and/or panel level (that is, to include many more components so as to be capable of being singulated into more than two semiconductor devices 100) while including similar features and using similar processes as described herein.

[0029] Figures 2A-2D, more specifically, illustrate the fabrication of a package substrate for the semiconductor devices 100 (Figure 1A) that is a redistribution structure that does not include a pre-formed substrate. In other embodiments, a different type of package substrate (e.g., an interposer, a printed circuit board, etc.) can be provided for the semiconductor devices 100, and the method of manufacturing the semiconductor devices 100 can begin at, for example, the stage illustrated in Figure 2E after providing the package substrate.

[0030] Referring to Figure 2A, the package substrate 130 (i.e., the redistribution structure) is formed on a carrier 250 having a back side 251b and a front side 251a including a release layer 252 formed thereon. The carrier 250 provides mechanical support for subsequent processing stages and can be a temporary carrier formed from, for example, silicon, silicon-on-insulator, compound semiconductor (e.g., gallium nitride), glass, or other suitable materials. In some embodiments, the carrier 250 can be reused after it is subsequently removed. The carrier 250 also protects a surface of the release layer 252 during the subsequent processing stages to ensure that the release layer 252 can later be properly removed from the package substrate 130. The release layer 252 prevents direct contact of the package substrate 130 with the carrier 250 and therefore protects the package substrate 130 from possible contaminants on the carrier 250. The release layer 252 can be a disposable film (e.g., a laminate film of epoxy-based material) or other suitable material.
In some embodiments, the release layer 252 is laser-sensitive or photosensitive to facilitate its removal at a subsequent stage.

[0031] The package substrate 130 (Figure 1A) includes conductive and dielectric materials that can be formed from an additive build-up process. That is, the package substrate 130 is additively built directly on the carrier 250 and the release layer 252 rather than on another laminate or organic substrate. Specifically, the package substrate 130 is fabricated by semiconductor wafer fabrication processes such as sputtering, physical vapor deposition (PVD), electroplating, lithography, etc. For example, referring to Figure 2B, the third contacts 136 can be formed directly on the release layer 252, and a layer of insulating material 135 can be formed on the release layer 252 to electrically isolate the third contacts 136. The insulating material 135 may be formed from, for example, parylene, polyimide, low temperature chemical vapor deposition (CVD) materials, such as tetraethylorthosilicate (TEOS), silicon nitride (Si3N4), and silicon oxide (SiO2), and/or other suitable dielectric, non-conductive materials. Referring to Figure 2C, one or more additional layers of insulating material can be formed to build up the insulating material 135, and one or more additional layers of conductive material can be formed to build up the conductive lines 138 on and/or within the insulating material 135.

[0032] Figure 2D shows the package substrate 130 after being fully formed over the carrier 250. As described above, the first contacts 132 and second contacts 134 are formed to be electrically coupled to corresponding ones of the third contacts 136 via one or more of the conductive lines 138. The first contacts 132, second contacts 134, third contacts 136, and the conductive lines 138 can be made from copper, nickel, solder (e.g., SnAg-based solder), conductor-filled epoxy, and/or other electrically conductive materials.
In some embodiments, these conductive portions are all made from the same conductive material. In other embodiments, the first contacts 132, second contacts 134, third contacts 136, and/or conductive lines 138 can comprise more than one conductive material. The first contacts 132 and second contacts 134 can be arranged to define die-attach areas 239 on the package substrate 130.

[0033] Referring to Figure 2E, fabrication of the semiconductor devices 100 continues by forming the conductive features 106 on the second contacts 134 of the package substrate 130. The conductive features 106 can be fabricated by a suitable electroplating or electroless plating technique, as is well known in the art. In other embodiments, other deposition techniques (e.g., sputter deposition) can be used in lieu of electroplating. In yet other embodiments, the conductive features 106 may comprise solder balls or solder bumps disposed on the second contacts 134. The conductive features 106 can have a circular, rectangular, hexagonal, polygonal, or other cross-sectional shape, and can be single-layer or multi-layer structures.

[0034] Referring to Figure 2F, fabrication of the semiconductor devices 100 continues with (a) coupling the first semiconductor dies 110 to corresponding ones of the die-attach areas 239 (Figure 2D) of the package substrate 130, and (b) forming the wire bonds 104 such that they electrically couple the first bond pads 112 of the first semiconductor dies 110 to the first contacts 132 of the package substrate 130. More particularly, the first semiconductor dies 110 can be attached to the die-attach areas 239 of the package substrate 130 via the first die-attach material 142. The first die-attach material 142 can be a die-attach adhesive paste or an adhesive element, for example, a die-attach film or a dicing-die-attach film (known to those skilled in the art as "DAF" or "DDF," respectively).
In some embodiments, the first die-attach material 142 can include a pressure-set adhesive element (e.g., tape or film) that adheres the first semiconductor dies 110 to the package substrate 130 when it is compressed beyond a threshold level of pressure. In other embodiments, the first die-attach material 142 can be a UV-set tape or film that is set by exposure to UV radiation.

[0035] Figure 2G shows the semiconductor devices 100 after the second semiconductor dies 120 have been stacked over the first semiconductor dies 110 and coupled to the conductive features 106. More specifically, the second semiconductor dies 120 can be flip-chip bonded to the package substrate 130 such that the second bond pads 122 of the second semiconductor dies 120 are electrically coupled to corresponding ones of the second contacts 134 of the package substrate 130 via the conductive features 106. In some embodiments, the second bond pads 122 are coupled to the conductive features 106 using solder or a solder paste. In other embodiments, another process such as thermo-compression bonding (e.g., copper-copper (Cu-Cu) bonding) can be used to form conductive Cu-Cu joints between the second bond pads 122 and the conductive features 106.

[0036] The second semiconductor dies 120 can be attached to at least a portion of the first semiconductor dies 110 via the second die-attach material 144. As described above, no electrical interconnections (e.g., metallization features, solder bumps, RDLs, etc.) need be formed between the semiconductor dies 110, 120. The second die-attach material 144 can be generally similar to the first die-attach material 142 (e.g., a DAF, DDF, etc.) and, in some embodiments, is the same material as the first die-attach material 142 and/or has the same thickness as the first die-attach material 142. In the embodiment illustrated in Figure 2G, the second die-attach material 144 extends onto the overhang portions 124 of the second semiconductor dies 120.
In some such embodiments, the second die-attach material 144 is peeled back from, or otherwise removed from or prevented from covering, the second bond pads 122 of the second semiconductor dies 120 prior to coupling the second bond pads 122 to the conductive features 106. In other embodiments, the second die-attach material 144 is not formed on or is entirely removed from the overhang portions 124.

[0037] Figure 2H shows the semiconductor devices 100 after disposing the molded material 146 on the first surface 133a of the package substrate 130 and at least partially around the first semiconductor dies 110, the wire bonds 104, the second semiconductor dies 120, and/or the conductive features 106. The molded material 146 may be formed from a resin, epoxy resin, silicone-based material, polyimide, and/or other suitable resin used or known in the art. Once deposited, the molded material 146 can be cured by UV light, chemical hardeners, heat, or other suitable curing methods known in the art. The cured molded material 146 can include an upper surface 247. In the embodiment illustrated in Figure 2H, the upper surface 247 is generally co-planar with the upper surfaces 123a of the second semiconductor dies 120 such that the upper surfaces 123a are not covered by the molded material 146. In some embodiments, the molded material 146 is formed in one step such that the upper surfaces 123a of the second semiconductor dies 120 are exposed at the upper surface 247 of the molded material 146. In other embodiments, the molded material 146 is formed and then ground back to planarize the upper surface 247 and to thereby expose the upper surfaces 123a of the second semiconductor dies 120.
As further shown in Figure 2H, in some embodiments, the molded material 146 encapsulates the first semiconductor dies 110, wire bonds 104, and conductive features 106 such that these features are sealed within the molded material 146.

[0038] Figure 2I illustrates the semiconductor devices 100 after (a) separating the package substrate 130 from the carrier 250 (Figure 2H) and (b) forming the electrical connectors 108 on the third contacts 136 of the package substrate 130. In some embodiments, a vacuum, poker pin, laser or other light source, or other suitable method known in the art can detach the package substrate 130 from the release layer 252 (Figure 2H). In certain embodiments, the release layer 252 (Figure 2H) allows the carrier 250 to be easily removed such that the carrier 250 can be reused. In other embodiments, the carrier 250 and release layer 252 can be at least partially removed by thinning the carrier 250 and/or release layer 252 (e.g., using back grinding, dry etching processes, chemical etching processes, chemical mechanical polishing (CMP), etc.). Removing the carrier 250 and release layer 252 exposes the second surface 133b of the package substrate 130, including the third contacts 136. The electrical connectors 108 are formed on the third contacts 136 and can be configured to electrically couple the third contacts 136 to external circuitry (not shown). In some embodiments, the electrical connectors 108 comprise a plurality of solder balls or solder bumps. For example, a stenciling machine can deposit discrete blocks of solder paste onto the third contacts 136 that can then be reflowed to form the solder balls or solder bumps on the third contacts 136.

[0039] Figure 2J shows the semiconductor devices 100 after being singulated from one another.
As shown, the package substrate 130 and the molded material 146 can be cut at a plurality of dicing lanes 255 (illustrated in Figure 2I) to singulate the stacked semiconductor dies 110, 120 and to separate the semiconductor devices 100 from one another. Once singulated, the individual semiconductor devices 100 can be attached to external circuitry via the electrical connectors 108 and thus incorporated into a myriad of systems and/or devices.

[0040] Figure 3A is a cross-sectional view, and Figure 3B is a top plan view, illustrating a semiconductor device 300 ("device 300") in accordance with another embodiment of the present technology. This example more specifically shows another semiconductor device configured in accordance with the present technology having more than two stacked semiconductor dies. The device 300 can include features generally similar to those of the semiconductor device 100 described in detail above. For example, in the embodiment illustrated in Figure 3A, the device 300 includes a first semiconductor die 310 and a second semiconductor die 320 (collectively "semiconductor dies 310, 320") carried by a package substrate 330 (e.g., a redistribution structure that does not include a pre-formed substrate). More specifically, the second semiconductor die 320 is stacked over and laterally offset from the first semiconductor die 310 to define an overhang portion 324 of the second semiconductor die 320 and an open portion 314 of the first semiconductor die 310. The first semiconductor die 310 has a lower surface 313b attached to the package substrate 330 via a first die-attach material 342, and an upper surface 313a facing the second semiconductor die 320 and having first bond pads 312 exposed at the open portion 314 of the first semiconductor die 310.
The second semiconductor die 320 has a lower surface 323b partially attached to the upper surface 313a of the first semiconductor die 310 via a second die-attach material 344, and an upper surface 323a opposite the lower surface 323b. The second semiconductor die 320 further includes second bond pads 322 on the lower surface 323b, exposed at the overhang portion 324 of the second semiconductor die 320, and facing the package substrate 330. The package substrate 330 includes first contacts 332 and second contacts 334. First wire bonds 304 electrically couple the first bond pads 312 to the first contacts 332 of the package substrate 330, and first conductive features 306 electrically couple the second bond pads 322 to the second contacts 334 of the package substrate 330. The first and second contacts 332 and 334 are electrically coupled to corresponding third contacts 336 of the package substrate 330 via conductive lines 338.

[0041] The device 300 further includes a third semiconductor die 360 and a fourth semiconductor die 370 (collectively "semiconductor dies 360, 370") stacked over the semiconductor dies 310, 320. The semiconductor dies 360, 370 can be arranged generally similarly to the semiconductor dies 110, 120 (Figure 1) and the semiconductor dies 310, 320. For example, as illustrated in the embodiment of Figure 3A, the fourth semiconductor die 370 can be laterally offset from the third semiconductor die 360 to define an overhang portion 374 of the fourth semiconductor die 370 and an open portion 364 of the third semiconductor die 360. More particularly, with reference to Figure 3B, the third semiconductor die 360 can have opposing first sides 316 and opposing second sides 318. As shown, the fourth semiconductor die 370 can extend beyond only one of the first sides 316 (shown in phantom in Figure 3B) of the third semiconductor die 360 (e.g., in a direction along an axis X3 generally parallel to second sides 318) to define the overhang portion 374.
In some embodiments, the amount (e.g., a distance along the axis X3) of lateral offset of the semiconductor dies 360, 370 is the same or substantially the same as the lateral offset of the semiconductor dies 310, 320. Moreover, as is more clearly illustrated in the top plan view of Figure 3B, the semiconductor dies 310, 320 and semiconductor dies 360, 370 can be laterally offset in the same or substantially the same direction (e.g., in a direction along the axis X3). In other embodiments, the semiconductor dies 310, 320 and the semiconductor dies 360, 370 can be offset in more than one direction or by different amounts (e.g., the overhang portion 324 of the second semiconductor die 320 and the overhang portion 374 of the fourth semiconductor die 370 can have different shapes, orientations, and/or dimensions). [0042] The third semiconductor die 360 has a lower surface 363b attached to the upper surface 323a of the second semiconductor die 320 via a third die-attach material 348, and an upper surface 363a facing the fourth semiconductor die 370 and having third bond pads 362 exposed at the open portion 364 of the third semiconductor die 360. The fourth semiconductor die 370 has an upper surface 373a and a lower surface 373b that is partially attached to the upper surface 363a of the third semiconductor die 360 via a fourth die-attach material 349. The lower surface 373b of the fourth semiconductor die 370 includes fourth bond pads 372 at the overhang portion 374. The fourth bond pads 372 are positioned over (e.g., vertically aligned with, superimposed over, etc.) 
at least a portion of the second contacts 334 of the package substrate 330.[0043] The device 300 further includes (a) second wire bonds 368 electrically coupling the third bond pads 362 of the third semiconductor die 360 to corresponding ones of the first contacts 332 of the package substrate 330, and (b) second conductive features 376 electrically coupling the fourth bond pads 372 of the fourth semiconductor die 370 to corresponding ones of the second contacts 334 of the package substrate 330. In certain embodiments, a maximum height of the second wire bonds 368 above the package substrate 330 and/or above the upper surface 363a of the third semiconductor die 360 is not greater than a height of the fourth semiconductor die 370 above the same. As illustrated in the embodiment of Figure 3B, the first contacts 332 and second contacts 334 (not pictured; below the second and fourth bond pads 322, 372 shown in phantom) can be arranged in one or more columns (e.g., two columns), and can each be coupled to one or more of the bond pads of the various semiconductor dies. In other embodiments, the arrangement of the first and second contacts 332, 334 can have any other suitable configuration (e.g., arranged in one column, in rows, offset rows and/or columns, etc.). The first and second conductive features 306, 376 can have various suitable structures, such as pillars, columns, studs, bumps, etc., and can be made from copper, nickel, solder (e.g., SnAg-based solder), conductor-filled epoxy, and/or other electrically conductive materials.[0044] Notably, each semiconductor die in the device 300 is directly electrically coupled to the first or second contacts 332 or 334 of the package substrate 330. 
Therefore, interconnections or other structures are not needed between any of the first semiconductor die 310, second semiconductor die 320, third semiconductor die 360, and fourth semiconductor die 370 (collectively "semiconductor dies 310-370") to electrically connect the semiconductor dies 310-370 to the package substrate 330. In some embodiments, for example, in lieu of interconnection structures (e.g., RDLs) between the semiconductor dies 310-370, the semiconductor dies 310-370 may be coupled together via one or more of the second die-attach material 344, third die-attach material 348, and fourth die-attach material 349. In some embodiments, each of the die-attach materials in the device 300 is the same material and/or has the same thickness.[0045] The device 300 can further include a molded material 346 over an upper surface of the package substrate 330 (the molded material 346 is not shown in Figure 3B for ease of illustration). In some embodiments, the molded material 346 at least partially surrounds the semiconductor dies 310-370, the first and second wire bonds 304, 368, and/or the first and second conductive features 306, 376 to protect one or more of these components from contaminants and/or physical damage. For example, in the embodiment illustrated in Figure 3A, only the upper surface 373a of the fourth semiconductor die 370 is exposed from the molded material 346. Notably, the molded material 346 does not extend above the fourth semiconductor die 370 relative to the package substrate 330 (e.g., above a plane coplanar with the upper surface 373a of the fourth semiconductor die 370), while still encapsulating the first and second wire bonds 304, 368 and the first and second conductive features 306, 376. 
Accordingly, the height (e.g., thickness) of the device 300 may be reduced as compared to, for example, conventional semiconductor devices having wire bonds coupling the uppermost die in the device, which therefore have a wire-loop height above the uppermost die. Likewise, since the molded material 346 need not extend above the upper surface 373a of the fourth semiconductor die 370, the total amount of molded material 346 used in the device 300 can be reduced (e.g., to reduce costs and/or warpage of the device 300).[0046] Figure 4 is a top plan view of a semiconductor device 400 ("device 400") in accordance with another embodiment of the present technology. This example more specifically illustrates stacked semiconductor dies that are laterally offset along two axes of the semiconductor device. The device 400 can include features generally similar to those of the semiconductor device 100 described in detail above. For example, the device 400 includes a first semiconductor die 410 coupled to a package substrate 430 and a second semiconductor die 420 stacked over and laterally offset from the first semiconductor die 410 (collectively "semiconductor dies 410, 420"). In contrast to many of the embodiments described in detail with reference to Figures 1A-3B, the second semiconductor die 420 is laterally offset from two sides of the first semiconductor die 410. More specifically, the first semiconductor die 410 can include opposing first sides 416 and opposing second sides 418. The second semiconductor die 420 can extend beyond (e.g., in a direction along an axis X4 generally parallel to the second sides 418) one of the first sides 416 (partially shown in phantom) and beyond (e.g., in a direction along an axis Y4 generally parallel to the first sides 416) one of the second sides 418 (partially shown in phantom) to define an overhang portion 424 of the second semiconductor die 420 and an open portion 414 of the first semiconductor die 410. 
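The two-axis offset just described lends itself to a simple geometric check. The following sketch is illustrative only (the function name and example dimensions are assumptions, not taken from the disclosure): it computes the area of the overhang portion when two identically sized rectangular dies are offset along both axes.

```python
def overhang_area(width: float, depth: float, dx: float, dy: float) -> float:
    """Area of the top die extending beyond the bottom die when two
    identically sized rectangular dies are offset by dx along one axis
    and dy along the other (both offsets smaller than the die)."""
    assert 0 <= dx < width and 0 <= dy < depth
    # Die footprint minus the region still overlapping the lower die.
    return width * depth - (width - dx) * (depth - dy)
```

With nonzero offsets along both axes, the overhang traces an L-like region; setting one offset to zero degenerates to the single-edge overhang of the earlier embodiments.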
In the embodiment illustrated in Figure 4, both the overhang portion 424 of the second semiconductor die 420 and the open portion 414 of the first semiconductor die 410 have a generally "L-like" shape. In some embodiments, where the semiconductor dies 410, 420 have the same planform shape and dimensions, the dimensions of the open portion 414 and overhang portion 424 can be the same. In other embodiments, the semiconductor dies 410, 420 can have different planform shapes and/or dimensions such that the overhang portion 424 and open portion 414 have different shapes and/or dimensions. For example, where one of the two semiconductor dies 410, 420 is larger than the other, the open portion 414 and/or overhang portion 424 can have a generally "U-like" shape along three edges of the larger die.[0047] As further shown in Figure 4, the first semiconductor die 410 can have first bond pads 412 on an upper surface of the first semiconductor die 410 and exposed at the open portion 414. Similarly, the second semiconductor die 420 can have second bond pads 422 (shown in phantom) on a lower surface of the second semiconductor die 420, exposed at the overhang portion 424, and facing the package substrate 430. As illustrated in Figure 4, the first and second bond pads 412, 422 (collectively "bond pads 412, 422") can be arranged in an L-like shape along the open portion 414 of the first semiconductor die 410 and the overhang portion 424 of the second semiconductor die 420, respectively. In other embodiments, the bond pads 412, 422 can have other arrangements (e.g., positioned adjacent only a single side of the semiconductor dies 410, 420, positioned in more than one row and/or column, etc.). In certain embodiments, the semiconductor dies 410, 420 are laterally offset depending on the configuration of the bond pads 412, 422 of the semiconductor dies 410, 420. 
For example, the offset of the semiconductor dies 410, 420 can be selected such that each of the first bond pads 412 of the first semiconductor die 410 is exposed at the open portion 414, and each of the second bond pads 422 of the second semiconductor die 420 is exposed at the overhang portion 424.[0048] The package substrate 430 can include first contacts 432 and second contacts (obscured in Figure 4; e.g., vertically aligned below the second bond pads 422). The device 400 further includes wire bonds 404 electrically coupling the first bond pads 412 of the first semiconductor die 410 to the first contacts 432 of the package substrate 430, and conductive features (not pictured; e.g., conductive pillars) electrically coupling the second bond pads 422 of the second semiconductor die 420 to the second contacts of the package substrate 430. The first contacts 432 and second contacts can have any suitable arrangement. For example, in some embodiments, the package substrate 430 is a redistribution structure that does not include a preformed substrate and that is additively built up (Figures 2A-2D). Accordingly, the package substrate 430 can be a flexible structure that is adaptable to the particular arrangement of the semiconductor dies 410, 420 and the bond pads 412, 422.[0049] Any one of the semiconductor devices having the features described above with reference to Figures 1A-4 can be incorporated into any of a myriad of larger and/or more complex systems, a representative example of which is system 500 shown schematically in Figure 5. The system 500 can include a processor 502, a memory 504 (e.g., SRAM, DRAM, flash, and/or other memory devices), input/output devices 505, and/or other subsystems or components 508. The semiconductor devices described above with reference to Figures 1A-4 can be included in any of the elements shown in Figure 5. 
The resulting system 500 can be configured to perform any of a wide variety of suitable computing, processing, storage, sensing, imaging, and/or other functions. Accordingly, representative examples of the system 500 include, without limitation, computers and/or other data processors, such as desktop computers, laptop computers, Internet appliances, hand-held devices (e.g., palm-top computers, wearable computers, cellular or mobile phones, personal digital assistants, music players, etc.), tablets, multi-processor systems, processor-based or programmable consumer electronics, network computers, and minicomputers. Additional representative examples of the system 500 include lights, cameras, vehicles, etc. With regard to these and other examples, the system 500 can be housed in a single unit or distributed over multiple interconnected units, e.g., through a communication network. The components of the system 500 can accordingly include local and/or remote memory storage devices and any of a wide variety of suitable computer-readable media.[0050] From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the disclosure. Furthermore, certain aspects of the present technology described in the context of particular embodiments may also be combined or eliminated in other embodiments. For example, the various embodiments described with reference to Figures 1A-4 may be combined to incorporate different numbers of stacked semiconductor dies (e.g., three dies, five dies, six dies, eight dies, etc.) that are laterally offset in different manners. Accordingly, the invention is not limited except as by the appended claims. 
Moreover, although advantages associated with certain embodiments of the new technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the disclosure and associated technology can encompass other embodiments not expressly shown or described herein.
A three-state CMOS output buffer (200), having protective circuitry and an output node (OUT) connected to a bus, prevents damage to a connected integrated circuit when the bus voltage exceeds a power supply reference voltage (VCC). A final output stage of the output buffer (200) includes a first pull-up transistor (QP200), a clamping transistor (QN202), and a pull-down transistor (QN204). A half-pass circuit (QN200) blocks the output voltage from propagating through the final output stage to damage the output buffer (200) when the output voltage applied to the output node (OUT) exceeds the supply voltage. The protective circuitry uses a clamping circuit (210), a switching circuit (212) and a backgate bias circuit (206) to prevent a leakage path between the output node (OUT) and the power supply reference (VCC) through the source/bulk junction of biased transistors in the output buffer (200). The clamping circuit (210) turns the pull-up transistor (QP200) fully off when the output buffer (200) is enabled and an input signal (VIN) is high and when the output buffer (200) is disabled. When the output buffer (200) is disabled, the switching circuit (212) turns the clamping circuit (210) off prior to turning the half-pass circuit (QN200) and the pull-up transistor (QP200) off. The backgate bias circuit (206) provides a bias voltage equivalent to the power supply reference voltage (VCC), as long as the bus voltage is not higher than the power supply reference voltage (VCC), and a bias equivalent to the bus voltage when the bus voltage exceeds the power supply reference voltage (VCC). Thus, the protective circuitry provides protection without a glitch of bus voltage propagating through the final output stage.
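The bias rule described above (VCC while the bus stays at or below VCC, and the bus voltage otherwise) can be summarized in a short behavioral sketch. The function name and the 3.3 V default are illustrative assumptions, not taken from the disclosure.

```python
def backgate_bias(v_bus: float, vcc: float = 3.3) -> float:
    """Behavioral model of the backgate bias circuit: track VCC until
    the bus voltage exceeds it, then follow the bus voltage so the
    pull-up transistor's source/bulk junction never forward biases."""
    return vcc if v_bus <= vcc else v_bus
```

For a 3.3 V buffer sharing a bus with a 5 V device, a 5 V bus level biases the backgate at 5 V, closing the leakage path through the source/bulk junction.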
What is claimed is: 1. A three-state CMOS output buffer having a first power supply reference voltage, second power supply reference voltage, and an output coupled to a bus, the bus having a voltage, the three-state CMOS output buffer comprising: a final output stage comprising a pull-up transistor, a clamping transistor, and a pull-down transistor connected respectively in series between a power supply rail and ground and having a common output node between the clamping transistor and the pull-up transistor; a half-pass circuit coupled to the final output stage, the half-pass circuit blocks the bus voltage from propagating through to damage the output buffer when the output voltage applied to the output node exceeds the first power supply reference voltage; a control circuit coupled to the half-pass circuit, the control circuit supplied with an input data signal, an enable/disable signal and a complemented enable/disable signal for activating and deactivating the final output stage; an invertor coupled to the control circuit; a clamping circuit coupled to the invertor and the final output stage to turn the pull-up transistor fully off when the output buffer is enabled and the input data signal is low and when the output buffer is disabled; a switching circuit coupled to the half-pass circuit, the clamping circuit and the pull-up transistor, such that when the output buffer is disabled, the switching circuit turns the clamping circuit off prior to turning the half-pass circuit and the pull-up transistor off for guarding the output buffer and the first power supply against voltages applied to the output node of the buffer that exceed the first power supply reference voltage; and a backgate bias circuit coupled to the backgate of the pull-up transistor, the clamping circuit, and the switching circuit, the backgate bias circuit supplies the first power supply reference voltage as long as the output node is not higher than a supply voltage, the backgate bias circuit 
supplies the output voltage when the output node is higher than the first power supply reference voltage. 2. The three-state CMOS output buffer of claim 1, wherein the half-pass circuit includes a first transistor, the first transistor connected between the control circuit and the pull-up transistor, the first transistor having a gate coupled to the first power supply reference voltage and a backgate coupled to ground. 3. The three-state CMOS output buffer of claim 1, wherein the control circuit comprises: a first logic gate supplied with an input data signal and an enable/disable signal, the first logic gate driving the pull-up transistor; and a second logic gate supplied with an input data signal and the complement of the enable/disable signal, the first and second logic gates responsive to the enable/disable signal for activating a three-state mode in which the pull-up transistor and the pull-down transistor are both deactivated. 4. The three-state CMOS output buffer of claim 3, wherein the first logic gate includes an output node and at least two input nodes and comprises a NAND gate having a first transistor, a second transistor, a third transistor and a fourth transistor, the first transistor and the second transistor coupled between the voltage supply and the output node of the first logic gate, the third transistor coupled between the fourth transistor and the coupled first and second transistors, the first and third transistors coupled to a first one of the two input nodes, the second and fourth transistors coupled to the second one of the two input nodes. 5. 
The three-state CMOS output buffer of claim 3, wherein the second logic gate including an output node and at least two input nodes, the first input node coupled to the input data signal, the second input node coupled to the enable/disable signal, comprises a NOR gate having a first transistor, a second transistor, a third transistor and a fourth transistor, the first transistor coupled between the voltage supply and the second transistor, the third transistor and the fourth transistor coupled between the second transistor and ground, the second and third transistors coupled to the first input node, the first and fourth transistors coupled to the second input node. 6. The three-state CMOS output buffer of claim 1, wherein the clamping circuit comprises a first transistor and a second transistor coupled in series between the voltage supply and a gate of the pull-up transistor, the first transistor and the second transistor having a backgate coupled to the backgate bias circuit. 7. The three-state CMOS output buffer of claim 1, wherein the switching circuit comprises: a first transistor coupled between the clamping circuit and the output node, the first transistor having a backgate coupled to the backgate bias circuit and a gate; a second transistor having a drain coupled to the source of the first transistor, a gate coupled to the first power supply reference voltage, and a directly coupled backgate and source, the directly coupled backgate and source couple to ground; a third transistor having a gate, a drain coupled to the gate of the first transistor and a directly coupled backgate and source, the directly coupled backgate and source couple to the first power supply reference voltage; a fourth transistor having a directly coupled gate and drain, a backgate coupled to the first power supply reference voltage, a source coupled to the drain of the third transistor; and a fifth transistor having a drain coupled to the gate of the fourth transistor, a gate coupled to 
the first power supply reference voltage and a directly coupled backgate and source coupled to ground. 8. The three-state CMOS output buffer of claim 1, wherein the backgate bias circuit comprises: a first transistor having a drain coupled to the complemented enable/disable signal, a backgate coupled to ground, a gate and a source; a second transistor having a gate coupled to the source of the first transistor, a source coupled to the gate of the first transistor, and a directly coupled backgate and drain to form a bias output node; and a third transistor having a source coupled to the gate of the second transistor, a gate coupled to the enable/disable signal, and a directly coupled backgate and drain coupled to the bias output node.
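At the highest level, the claimed three-state behavior reduces to driving the input data when the buffer is enabled and presenting high impedance to the shared bus when it is disabled. A minimal behavioral sketch (the function name and 'Z' convention are illustrative, not from the claims):

```python
def three_state_output(enabled: bool, data_in: int):
    """Three-state buffer behavior: drive the data value when enabled,
    otherwise present high impedance ('Z') so another device may drive
    the shared bus."""
    return data_in if enabled else "Z"
```

It is this disabled, high-impedance state in which the protective circuitry must withstand bus voltages above the buffer's own supply.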
FIELD OF THE INVENTION This invention relates generally to the field of output buffers in high speed applications. In particular, the invention is related to circuitry within the output buffer of a 3.3V Low Voltage Differential Signaling receiver operable to prevent damage when the receiver is exposed to a voltage level above its supply voltage. BACKGROUND OF THE INVENTION Consumers are demanding more realistic, visual information in the office and in the home. Their demands are driving the need to move video, 3-D graphics, and photo-realistic image data from cameras to personal computers and printers through local access network, phone, and satellite systems to home set top boxes and digital video camcorders. Low Voltage Differential Signaling (LVDS) provides a solution to this consumer demand in a variety of applications in the areas of personal computing, telecommunications, and consumer/commercial electronics. It is an inexpensive and extremely high performance solution for moving this high speed digital data both very short and very long distances: on a printed circuit board and across fiber or satellite networks. Its low swing, differential signaling technology allows single channel data transmission at hundreds of megabits per second (Mbps). In addition, its low swing and current mode driver outputs create low noise, meeting FCC/CISPR EMI requirements, and provide a very low power consumption across frequency. There are LVDS standards under two standards organizations: a Scalable Coherent Interface standard (SCI-LVDS) and an American National Standards Institute Telecommunications Industry Association Electronic Industries Association standard (ANSI/TIA/EIA). In the interest of promoting a wider standard, these standards define no specific process technology, medium, or power voltages. 
This means that LVDS can be implemented in CMOS, GaAs or other applicable technologies, migrating from 5 volts to 3.3 volts to sub-3 volt power supplies, and transmitting over PCB or cable thereby serving a broad range of applications. Thus, a valuable characteristic of LVDS is that the LVDS drivers and receivers do not depend on a specific power supply, such as 5 volts. Therefore, LVDS has an easy migration path to lower supply voltages such as 3.3 volts or even 2.5 volts, while maintaining the same signaling levels and performance. This same valuable characteristic of drivers and receivers independent of power supply specifications poses a disadvantage in that difficulty arises when there are several receivers of multiple voltages integrated within an LVDS application accessible to one bus. Such is the case as shown in FIG. 1 where a 3.3V LVDS receiver 16 and 5V LVDS receiver 22 use the same bus 24 within an LVDS application such as a telecommunication router 10. As discussed, the power supply of each receiver 16 and 22 may be any of 2.5, 3.3, or 5 volts since LVDS technology standards require no specific power supply voltage. The router 10 receives two signals from the drivers 12 and 18 of two switches (not shown). Both LVDS drivers 12 and 18 are coupled to two respective LVDS buses 14 and 20. At the opposite end of each LVDS bus 14 and 20, an LVDS receiver is coupled, 16 and 22, to each respective bus 14 and 20. The first receiver 16 has a 3.3V power supply and the second receiver 22 has a 5V power supply. Each LVDS receiver 16 and 22 is coupled to a bus 24 within the router 10 and generates current to drive a load attached to the bus 24. For this particular example, the load is a microprocessor 26. In operation, when one receiver accesses the bus 24, the other goes into a high impedance mode disabling itself from the bus 24. Accordingly, when each receiver 16 and 22 uses the bus 24, its power supply charges the bus 24. 
Thus, when the 5V receiver 22 gains access to the bus 24, its output buffer (not shown) drives the bus 24 from ground to 5 volts. The first receiver 16 at 3.3V power supply must be able to survive exposure to 5 volts during the high impedance mode without conduction of leakage currents flowing into the internal circuitry of the receiver 16. In summary, the output buffer of every receiver on the bus must be able to survive exposure to a voltage at least equal to the highest supply voltage of any receiver on the bus in order to prevent the conduction of leakage currents from flowing from the bus to the receiver. Designing the output buffer of a 3.3 V LVDS receiver 16 using thick oxide 5 volt transistors is an approach towards preventing damage from exposure to higher power supply voltages. LVDS high speed applications, such as 400 Mbps applications, use fabrication processes suitable for high-speed, mixed signal designs. Yet, the implementation of thick oxide transistors in fabrication processes suitable for high-speed digital data has a negative impact on the speed of the receiver. Thus, the implementation of thick oxide transistors is not an acceptable solution. As illustrated in FIG. 2, Davis describes a three-state output buffer circuit having a protection circuit in U.S. Pat. No. 5,455,732, which is hereby incorporated by reference. Davis provides a built-in protection against power-rail corruption by bus-imposed voltages when the buffer is in its high-impedance state. In particular, the circuit uses a pseudo-power rail which can be used to adjust the bias on the output transistor's bulk and so to prevent a leakage path from occurring between the output node and a power rail via the output transistor source/bulk junction. NMOS transistor QN80 is the output pull-down transistor, driven by pull-down-transistor driver transistor QN60. Transistor QN70 is the pull-down transistor disabler. The gate of transistor QP10 is coupled to the input. 
QN10 is coupled in series to QP10. QN50 is coupled in series to QN10. QP20, QN20, QN40, QP50 and QN70 are all coupled in series with one another in this respective order. The enabling signal EB feeds the gates of transistors QP50 and QN70; while the enabling signal E feeds the gates of transistors QP20, QP30, and QN50. The source of QP30 is coupled to the circuit LINK+. The function of LINK+ is to enable the high-potential power rail to energize PVCC, to be coupled to VCC, but only when the voltage of the power rail is higher than that of the pseudo-rail PVCC, the rail coupled to the node common to QP30 and LINK+. Pull-up transistor QP40, coupled to the drain of QP30, is coupled to the comparison circuit COMP. The output signal lead OUT taken from the node common to transistors QP40 and QN80 is coupled to the comparison circuit COMP. This design, however, incorporates low turn-on threshold voltage transistors, QN10, QN20 and QN40, which increase the complexity of design and thus cost. In addition, during the high impedance mode when the output buffer is disabled from the bus, the voltage applied to the gate of QP40 is VCC minus a threshold voltage of approximately 0.4 to 0.5 volts. Accordingly, a leakage current will exist across this transistor QP40 when the voltage on the output lead OUT is greater than VCC. Thus, this design does not eliminate leakage current completely. In addition, QP10 is required to be a thick oxide transistor which unfortunately has a negative impact on the speed of the receiver and, thus, is not an acceptable solution for high speed applications such as 400 Mbps applications using the fabrication processes suitable for high-speed, mixed signal designs. FIG. 3 illustrates a third design approach for implementation of the output buffer in an LVDS receiver using a first and second Schottky diode, S1 and S2, to prevent current from conducting into the output buffer. 
In addition to diodes S1 and S2, the output buffer 100 includes a plurality of p-channel transistors QP100, QP102 and QP104, an n-channel transistor QN100 and a current source I1. Transistor QP100 has a source coupled to a first power supply rail VCC, a gate coupled to an input node IN, a drain coupled to a first diode S1 and a backgate. The first Schottky diode S1 is coupled between transistor QP100 and the current source I1. Transistor QP104 has a gate coupled to power supply rail VCC. Transistor QP102 has a gate coupled to the source of transistor QP104 and the common node to Schottky diode S1 and current source I1. The second Schottky diode S2 is coupled between the first power supply rail VCC and the backgates of transistors QP100, QP102 and QP104 for driving the output. The output node OUT and drains of transistors QP102 and QP104 are tied to the drain of transistor QN100. Transistor QN100 has a gate coupled to the input node IN and a backgate and source coupled to the second power supply rail GND. In operation, when voltage applied to a bus coupled to the output node OUT is greater than the power supply reference voltage Vcc, p-channel transistor QP104 turns on. Accordingly, p-channel transistor QP102 turns off, preventing current from flowing into the first power supply rail VCC. To prevent the backgate parasitic diodes of transistors QP100, QP102, and QP104 from conducting current through to the first power supply rail VCC, a Schottky diode S2 is used to block this path from the output node OUT to the first power supply rail VCC. In addition, the Schottky diode S1 blocks the voltage from damaging transistor QP100 and the rest of the circuitry internal to the receiver. Diode S1 also prevents current from conducting into the power supply rail VCC. Unfortunately, many fabrication processes for LVDS do not include a Schottky diode design implementation; thus, this approach is not feasible. 
Fabrication processes that do include Schottky diode implementation typically suffer an increase in cost, gain in die area and increase in process complexity. Lentini et al. describes a three-state CMOS output buffer circuit having a protective circuit in U.S. Pat. No. 5,852,382, which is hereby incorporated by reference. FIG. 4 illustrates the output buffer 150 which couples the bulk electrode of the pull-up transistor to a line whose voltage is always the highest between the supply voltage of the integrated circuit and the voltage of the external bus. The buffer 150 includes an inverter 7, a NOR gate 5, a NAND gate 3, an auxiliary circuit 9, a pull-up transistor M15, and a pull-down transistor M16. The pull-up transistor M15 has a bulk electrode connected to a switchable bulk line 2. The auxiliary circuit 9 keeps the switchable bulk line 2 connected to the voltage supply VDD as long as the voltage of the output node O is not higher than the supply voltage VDD. The NAND gate 3 includes circuitry for transferring the voltage of the output node to the switchable bulk line when the voltage of the output node exceeds the supply voltage. This design, however, can lead to significant damage to the integrated circuit when a voltage higher than 5 volts is applied to the external bus. In the high impedance mode, the enable/disable signal E is low. Since the enable signal is coupled to the gate of transistor M11, zero volts is applied to the gate. If, hypothetically, a voltage higher than 5 volts is applied to the external bus when the output buffer 150 is in the high impedance mode, this same voltage will be applied to the source of M11. Thus, transistor M11 will experience a gate-to-source voltage that is greater than 5 volts. Particularly in a process where the gate voltage cannot exceed 5 volts, M11 will be stressed and damaged. 
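The stress mechanism just described can be made concrete with a small numeric check. This is a sketch under stated assumptions (the gate of M11 held at 0 V in the high-impedance mode and an assumed 5 V process rating; the names are illustrative, not from the referenced patent):

```python
def gate_source_stress(v_bus: float, v_gate: float = 0.0) -> float:
    """Magnitude of the gate-to-source voltage seen by a pass transistor
    whose source follows the bus while its gate is held at v_gate."""
    return abs(v_bus - v_gate)

# Assumed maximum rated gate-source voltage for the process.
PROCESS_VGS_LIMIT = 5.0
```

A bus excursion to 5.5 V with the gate grounded puts the full 5.5 V across the gate oxide, exceeding the assumed 5 V rating and stressing the device.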
Even though this design eliminates leakage current, it does not protect the internal circuitry from exposure to a higher voltage and, thus, damage may result. In conclusion, there are existing designs that use 3V transistors and circuit techniques to prevent damage to internal circuitry and to prevent conduction of leakage currents. These techniques, however, are either too slow for LVDS applications or use components not available in most LVDS fabrication processes because of the cost and complexity added to the process. Hence, a need exists for an output buffer design of an LVDS receiver that prevents damage to internal circuitry of the receiver when exposed to bus voltages higher than the output buffer's power supply voltage.
SUMMARY OF THE INVENTION
A three-state CMOS output buffer of an LVDS receiver has the capability to prevent voltage damage to the internal circuitry of the receiver, and leakage current, upon exposure to a voltage on a common bus that is higher than a supply voltage of the LVDS receiver. The output buffer includes a final output stage, a half-pass circuit, a control circuit, an inverter, a clamping circuit, a switching circuit, and a backgate bias circuit. The final output stage includes a first pull-up transistor, a clamping transistor, and a pull-down transistor connected respectively in series between a voltage supply and ground. The node common to the first pull-up transistor and the clamping transistor forms an output node. The half-pass circuit couples to the final output stage and blocks the output voltage from propagating through and damaging the output buffer when the voltage applied to the output node exceeds the supply voltage. The control circuit couples to the half-pass circuit. The control circuit is supplied with an input data signal, an enable/disable signal and a complemented enable/disable signal for activating and deactivating the final output stage. The inverter is coupled to the control circuit.
The clamping circuit couples to the inverter and the final output stage to turn the pull-up transistor fully off when the output buffer is enabled and the input signal is high, and when the output buffer is disabled. The switching circuit connects to the half-pass circuit, the clamping circuit and the pull-up transistor, such that when the output buffer is disabled, the switching circuit turns the clamping circuit off prior to turning the half-pass circuit and the pull-up transistor off, thereby guarding the output buffer and the power supply rail against voltages applied to the output node of the buffer that exceed the supply voltage. The backgate bias circuit couples to the backgate of the pull-up transistor, the clamping circuit, and the switching circuit. The backgate bias circuit supplies the supply voltage as long as the output node is not higher than the supply voltage. The backgate bias circuit supplies the output voltage to the backgates of the coupled transistors when the output node is higher than the supply voltage. A technical advantage of the present invention is that it prevents damage from voltages supplied on a bus that are higher than its power supply voltage. This increases the reliability and flexibility of the LVDS receiver in LVDS applications. It also makes the LVDS receiver compatible with the requirements of modern applications.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings in which like reference numbers indicate like features and wherein: FIG. 1 is a diagram of a router configuration using LVDS receivers; FIG. 2 is a schematic of a known output buffer with protective circuit; FIG. 3 is a schematic of another known output buffer for a LVDS receiver using Schottky diodes; FIG. 4 is another schematic of a known output buffer with protective circuit; FIG. 5 is a partial logic gate and block diagram schematic of an output buffer for a LVDS receiver in accordance with the present invention; and FIG. 6 is a more detailed schematic of FIG. 5.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Conventional three-state CMOS output buffer circuits include a CMOS final driving stage, which in turn includes a p-channel MOSFET (pull-up) and an n-channel MOSFET (pull-down) connected in series between a voltage supply line VCC and a common ground (GND). The circuits further include control circuitry for the activation of the CMOS final driving stage; such control circuitry mixes the input data signal with an enable/disable signal for the activation of the three-state (or high impedance) mode, in which both the MOSFETs of the final stage are off. In its most simple form, the control circuitry includes a NAND gate, at whose inputs the input data signal and the enable/disable signal are applied and whose output drives the gate of the p-channel pull-up, and a NOR gate, at whose inputs the input data signal and the complemented enable/disable signal are applied and whose output drives the gate of the n-channel pull-down. FIG. 5 illustrates an embodiment of an output buffer 200 in accordance with the present invention. The three-state output buffer 200 includes a control circuit 222, a backgate bias circuit 206, a half-pass circuit 224, a switching circuit 226, a pull-up transistor QP200, a clamping transistor QN202, a pull-down transistor QN204, an inverter 208, and a pull-up transistor circuit 210. The pull-up transistor QP200, clamping transistor QN202 and pull-down transistor QN204 form the final output stage of the output buffer 200. The control circuit 222 includes a NAND gate 202 and a NOR gate 204. The half-pass circuit 224 includes a transistor QN200. The switching circuit 226 includes a switch 212 and a switching transistor QP202.
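As an illustrative aside (not part of the patent disclosure), the conventional NAND/NOR control scheme described above can be sketched as a small truth-table model in Python; all function and variable names here are hypothetical:

```python
# Sketch of the conventional three-state control logic described above.
# The NAND of the input data and the enable signal drives the gate of the
# p-channel pull-up; the NOR of the input data and the complemented enable
# signal drives the gate of the n-channel pull-down. Levels modeled as 0/1.

def control_gates(vin, en):
    """Return (pull_up_gate, pull_down_gate) logic levels."""
    nand_out = 0 if (vin and en) else 1   # low turns the p-channel pull-up ON
    en_bar = 1 - en                       # complemented enable/disable signal
    nor_out = 1 if (vin == 0 and en_bar == 0) else 0  # high turns pull-down ON
    return nand_out, nor_out

# Enabled, VIN=1: pull-up on, pull-down off -> output driven high.
# Enabled, VIN=0: pull-up off, pull-down on -> output driven low.
# Disabled: both devices off -> high-impedance (three-state) output.
```

With `en = 0` the model returns `(1, 0)`, i.e. both output devices held off regardless of the input data, matching the high-impedance mode described in the text.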
The pull-up transistor QP200, the clamping transistor QN202, and the pull-down transistor QN204 are connected in series, in that order, between a voltage supply VCC and a common ground GND. The node 214 common to pull-up transistor QP200 and clamping transistor QN202 forms an output data signal node OUT. NAND gate 202 receives as inputs the input data signal VIN and the enable/disable signal EN. NOR gate 204 receives as inputs the input data signal VIN and the complemented enable/disable signal EN*. The output node of NAND gate 202 is coupled to the drain of half-pass transistor QN200, while the output node of NOR gate 204 is coupled to the gate of pull-down transistor QN204. Backgate bias circuit 206 receives both enable/disable signals, EN and EN*. The output of backgate bias circuit 206 provides backgate bias for several of the transistors in the output buffer 200, as will be further explained. Transistor QN200 includes a gate coupled to the first power supply rail VCC, a backgate coupled to the second power supply rail GND, and a source. A switching transistor QP202 includes a source coupled to the source of QN200, a gate coupled to the first power supply rail VCC, a drain coupled to a node 214 and a backgate coupled to the output of backgate bias circuit 206. Node 214 couples to an output node OUT. Pull-up transistor QP200 includes a gate coupled to the source of transistor QN200, a source coupled to the first power supply rail VCC, a drain coupled to node 214 and a backgate coupled to the output of backgate bias circuit 206. Clamping transistor QN202 includes a gate coupled to the first power supply rail VCC, a backgate coupled to the second power supply rail GND, a drain coupled to node 214 and a source. Pull-down transistor QN204 includes a drain coupled to the source of transistor QN202 and a directly coupled backgate and source that are coupled to the second power supply rail GND. A pull-up transistor circuit 210 comprises transistors QP212 and QP204.
Transistor QP204 includes a backgate tied to the output of backgate bias circuit 206, a drain coupled to the gate of transistor QP200, a source and a gate. Transistor QP212 includes a drain coupled to the source of transistor QP204, a backgate coupled to the output of backgate bias circuit 206, a source coupled to first power supply rail VCC and a gate. Inverter 208 is coupled between the gate of QP212 and the output of NAND gate 202. Switch 212 receives inputs from the gate of QP204 and the output of backgate bias circuit 206, and generates an output to node 214. FIG. 6 illustrates in further detail the design of the buffer 200 represented in FIG. 5. Specifically, NAND gate 202 includes p-channel transistors, QP224 and QP226, and n-channel transistors, QN218 and QN220. NAND gate 202 applies the power supply reference voltage VCC to the drain of QN200 during the high impedance mode. Transistors QP224 and QP226 have directly coupled sources and backgates that couple to power supply rail VCC. Transistors QP224 and QP226 also include directly coupled drains that are coupled to node 220. Node 220 couples to the drain of transistor QN218 as well as the drain of transistor QN200. Transistor QN218 includes a source, and a backgate coupled to the second power supply rail GND. Transistor QN220 includes a drain coupled to the source of transistor QN218, a gate coupled to the gate of QP226 and a directly coupled backgate and source that couple to the second power supply rail GND. Input data signal VIN ties to the gates of transistors QP224 and QN218. NOR gate 204 includes p-channel transistors, QP220 and QP222, and n-channel transistors, QN214 and QN216. The NOR gate 204 is used to ground the gate of pull-down transistor QN204 during the high impedance mode. Transistor QP220 has a drain, a gate, and a directly coupled backgate and source that couple to the first power supply rail VCC.
Transistor QP222 has a source coupled to the drain of transistor QP220 and a backgate coupled to the first power supply rail VCC. Transistors QN214 and QN216 include directly coupled drains connected to the source of transistor QP222 to form the output node 218 of NOR gate 204. Transistors QN214 and QN216 also include directly coupled backgates and sources that connect to the second power supply rail GND. Input data signal VIN ties to the gates of transistors QN214 and QP222. Backgate bias circuit 206 includes transistors QN212, QP216 and QP218. Transistor QN212 includes a drain coupled to a complemented enable/disable signal EN*, a backgate coupled to a second power supply rail GND, a gate coupled to the first power supply rail VCC and a source coupled to the gate of transistor QP216 and the source of transistor QP218. Transistor QP216 has a source coupled to the first power supply rail VCC and a directly coupled backgate and drain that are coupled to backgate reference node 216. Transistor QP218 has a directly coupled backgate and drain tied to backgate reference node 216 as well. The gate of transistor QP218 connects to enable/disable signal EN. The enable/disable signal EN and the complemented enable/disable signal EN* enable specific portions of the circuit to operate relative to the input data signal VIN. Enable/disable signal EN couples to the gates of transistors QP226 and QN220, while complemented enable/disable signal EN* couples to the gates of transistors QN216 and QP220. Inverter 208, comprising transistors QP214 and QN206, serves as a buffer to prevent voltage from propagating back to NAND gate 202. Transistor QP214 has a directly coupled backgate and source that are coupled to the first power supply rail VCC. Transistor QP214 includes a gate coupled to node 220 and a drain coupled to the gate of transistor QP212.
Transistor QN206 includes a drain coupled to the drain of transistor QP214, a gate coupled to node 220, and a directly coupled backgate and source that are coupled to the second power supply rail GND. Switch 212 includes p-channel transistors QP206, QP208, QP210 and n-channel transistors QN208 and QN210. Transistor QN208 includes a drain coupled to the gate of transistor QP204, a gate coupled to the first power supply rail VCC, and a directly coupled backgate and source that are coupled to the second power supply rail GND. Transistor QP206 includes a gate, a source coupled to the drain of transistor QN208, a drain coupled to output node OUT, and a backgate coupled to backgate reference node 216. Transistor QP208 includes a directly coupled backgate and source that are coupled to first power supply rail VCC, a gate and a drain. Transistor QP210 includes a source coupled to the drain of transistor QP208, a directly coupled drain and gate that are coupled to the gate of transistor QP208, and a backgate coupled to first power supply rail VCC. Transistor QN210 includes a gate tied to first power supply rail VCC, a drain coupled to the drain of QP210 and a directly coupled backgate and source that are coupled to the second power supply rail GND. During operation, when the output buffer 200 is enabled, the enable/disable signal EN is high ("1") and its complemented signal EN* is low ("0"). In this state, the output node 220 of NAND gate 202 and the output node 218 of NOR gate 204 are high or low relative to the logic state of the input data signal VIN. More particularly, when the input signal VIN ="0", this input signal VIN applied to transistors QP224 and QN218 turns transistor QP224 on and transistor QN218 off. Transistor QP224 pulls node 220 to power supply reference voltage VCC, or high. The high enable/disable signal EN applied to the gates of transistors QP226 and QN220 turns transistor QP226 off and transistor QN220 on.
The input signal VIN applied to transistors QP222 and QN214 turns transistor QP222 on and transistor QN214 off. The complemented enable/disable signal EN* applied to the gates of transistors QP220 and QN216 turns transistor QP220 on and transistor QN216 off. Transistors QP220 and QP222 drive node 218 high. Thus, the output node 218 of NOR gate 204 is "1" and the output node 220 of NAND gate 202 is "1." Since node 220 is high, when it is applied to n-channel field effect transistor QN200, transistor QN200 turns off. Node 220 applies a high to the gates of transistors QP214 and QN206. As a result, transistor QP214 turns off and QN206 turns on. Transistor QN206 pulls the gate of transistor QP212 to ground, turning transistor QP212 on. Transistor QP212 drives the source of transistor QP204 high and transistor QN208 of switching circuit 226 applies a ground to the gate of transistor QP204. Thus, transistor QP204 turns on. Transistor QP204 drives the gate of transistor QP200 high, turning this transistor off. In addition, transistor QP204 drives the source of transistor QP202 high, turning this transistor off. The high ("1") signal from node 218 of NOR gate 204, coupled to the gate of transistor QN204, turns transistor QN204 on. As a result, transistor QN204 pulls the source of transistor QN202 to ground and transistor QN202 turns on. Thus, the output OUT is pulled to ground. Transistors QP208, QP210, QN208, and QN210 are always on while output buffer 200 is enabled. They supply a voltage reference for switching circuit 226. As a result, the voltage applied to the gate of transistor QP206 is high, which turns this transistor off. To prevent leakage current from the output to the power supply reference VCC, transistors QN212, QP216, and QP218 of backgate bias circuit 206 supply a backgate reference voltage through backgate reference node 216. During enable mode, transistors QN212 and QP216 are on. Transistor QP218 is off.
Thus, during the enable mode, the backgate reference node 216 is driven high by transistor QP216. Backgate reference node 216 is tied to the backgates of transistors QP202, QP212 and QP206 and drives these backgates to the power supply voltage VCC. Conversely, when the output buffer 200 is enabled and the voltage at input data signal VIN is a "1", the output node 218 of NOR gate 204 is "0" and the output node 220 of NAND gate 202 is "0." Specifically, when the input signal VIN ="1", this input signal VIN applied to transistors QP224 and QN218 turns transistor QP224 off and transistor QN218 on. The high enable/disable signal EN applied to the gates of transistors QP226 and QN220 turns transistor QP226 off and transistor QN220 on. Thus, transistors QN220 and QN218 pull node 220 to ground. The input signal VIN applied to transistors QP222 and QN214 turns transistor QP222 off and transistor QN214 on. The complemented enable/disable signal EN* applied to the gates of transistors QP220 and QN216 turns transistor QP220 on and transistor QN216 off. Thus, transistor QN214 pulls node 218 to ground. The low at node 218, applied to the gate of transistor QN204, turns transistor QN204 off. Transistor QN202 turns off. The low at node 220 turns transistor QN200 on. Transistor QN200 applies a low to the gate of transistor QP200 and the source of transistor QP202. Transistor QP202 remains off and transistor QP200 turns on, driving the output node OUT high. Node 220 applies a low to the gates of transistors QP214 and QN206. As a result, transistor QP214 turns on and QN206 turns off. Transistor QP214 pulls the gate of transistor QP212 to power supply reference voltage VCC, turning transistor QP212 off. As a result, transistor QP204 is off. As stated above, transistors QP208, QP210, QN208, and QN210 are always on while output buffer 200 is enabled. They supply a voltage reference for switching circuit 226.
As a result, the voltage applied to the gate of transistor QP206 is high, which keeps this transistor off. Assume now that the output buffer circuit is disabled, i.e., in the high impedance mode: EN="0" and EN*="1." Given that the voltage at the output node OUT is less than the power supply voltage VCC plus the threshold voltage minus 300 mV, the enable/disable signal EN applied to transistors QP226 and QN220 turns transistor QP226 on and transistor QN220 off. Transistor QP226 drives node 220 high. Thus, the output node 220 of NAND gate 202 is always a "1" during the high impedance mode, independent of the input signal VIN. Likewise, the complemented enable/disable signal EN* applied to transistors QP220 and QN216 turns transistor QP220 off and transistor QN216 on. Transistor QN216 pulls node 218 to ground. Node 218 will remain at ground during the high impedance mode, independent of the input signal VIN. Since node 220 is high, transistor QN200 is off. Node 220 applies a high to the gates of transistors QP214 and QN206. As a result, transistor QP214 turns off and QN206 turns on. Transistor QN206 pulls the gate of transistor QP212 to ground, turning transistor QP212 on. Transistor QP212 drives the source of transistor QP204 high and transistor QN208 of switching circuit 226 applies a ground to the gate of transistor QP204. Thus, transistor QP204 turns on. Transistor QP204 drives the gate of transistor QP200 high, turning this transistor off. In addition, transistor QP204 drives the source of transistor QP202 high, turning this transistor off. Node 218 applies "0" to the gate of pull-down transistor QN204, turning this transistor off. Transistor QN202 is off as a result of transistor QN204 turning off. Thus, the transistors of the final stage of the output buffer 200 (pull-up transistor QP200, clamping transistor QN202, and pull-down transistor QN204) are all turned off. The output node OUT of the output buffer 200 is in a high-impedance condition.
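The enabled and high-impedance cases walked through above can be condensed into a short behavioral sketch (illustrative only and not part of the disclosure; node names follow FIG. 6, and "Z" denotes high impedance):

```python
def buffer_nodes(vin, en):
    """Behavioral sketch of key logic levels in output buffer 200.

    Models only the ideal logic levels at node 220 (NAND gate 202 output),
    node 218 (NOR gate 204 output) and the output node OUT; it does not
    model the over-voltage protection circuitry.
    """
    node_220 = 0 if (vin and en) else 1            # NAND gate 202 output
    node_218 = 1 if (vin == 0 and en == 1) else 0  # NOR gate 204 output
    if en == 0:
        out = "Z"                  # final stage fully off: high impedance
    else:
        out = 1 if vin else 0      # enabled buffer is non-inverting
    return {"node_220": node_220, "node_218": node_218, "OUT": out}
```

In the disabled case the model holds node 220 high and node 218 low for either value of VIN, consistent with the description of the high-impedance mode.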
Since enable/disable signal EN is applied to the gate of transistor QP218, transistor QP218 turns on. Given that the voltage of output node OUT is between the power supply voltage VCC and the power supply voltage VCC plus the threshold voltage minus 300 mV, a voltage higher than the power supply voltage VCC will be applied to the drain and backgate of transistor QP216, turning this transistor off. The complemented enable/disable signal EN* is applied to the gate of transistor QN212 and, as a result, transistor QN212 turns on slightly, having a source voltage one threshold voltage beneath the power supply voltage VCC. Thus, backgate reference node 216 remains at the power supply voltage VCC. Backgate reference node 216 is tied to the backgates of transistors QP202, QP212 and QP206. If the voltage applied to the output node OUT rises one threshold voltage Vt above the power supply rail voltage VCC minus 300 mV, transistor QP206 will turn on, since the reference voltage supplied by transistors QP208, QP210, QN208 and QN210 will always be one threshold voltage Vt below VCC (approximately 300 mV for 3V transistor processes). As a result, transistor QP204 turns off, which keeps pull-up transistor QP200 off during the high impedance mode.
If another output buffer circuit on the bus coupled to the output node OUT supplies an output voltage one threshold voltage higher than VCC, that output voltage will be applied to the drain of QP202. The parasitic diode of QP202 will conduct the output voltage from its drain to its backgate, which is coupled to backgate reference node 216. As a result, the voltage at the output node OUT appears at backgate reference node 216. The backgate reference node applies this voltage to the drain and backgate of transistor QP218, which is on. QP218 pulls the voltage of the source of transistor QN212 up to the voltage at the output node. Transistor QN212 turns off. Effectively, transistor QN212 blocks the output voltage from entering the rest of the integrated circuit coupled to the output buffer 200. To summarize the function of the backgate bias circuit 206: when the output voltage is less than the power supply voltage VCC plus the threshold voltage, transistors QN212 and QP218 supply backgate reference node 216 with a voltage equivalent to that applied to complemented enable/disable signal EN*, which equals power supply voltage VCC. When the output voltage rises above VCC plus the threshold voltage, the output voltage is applied to backgate reference node 216. This keeps the backgate reference node 216 always tied to the highest potential, either the power supply reference voltage VCC or the output voltage, thus keeping the transistors QP202, QP212 and QP206 off in the condition where the output signal rises above VCC. Accordingly, as the voltage applied to the output node OUT rises one threshold voltage Vt above the power supply rail voltage VCC, transistor QP202 turns on. Transistor QP202 applies the output voltage to the gate of transistor QP200 and the source of QN200, turning transistor QN200 off.
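In summary form, the backgate bias behavior amounts to selecting the higher of the supply voltage and the output voltage, with the handoff occurring about one threshold voltage above VCC. A minimal numeric sketch follows (the VCC and Vt values are assumed example figures, not taken from the patent):

```python
VCC = 3.3  # assumed supply voltage in volts (example value only)
VT = 0.7   # assumed threshold voltage magnitude in volts (example value only)

def backgate_reference(v_out):
    """Sketch of backgate bias circuit 206: node 216 stays at VCC until the
    output voltage exceeds VCC plus a threshold voltage, after which the
    output voltage itself is applied to the backgate reference node."""
    return v_out if v_out > VCC + VT else VCC
```

Keeping the p-channel backgates at the higher of the two potentials is what prevents their parasitic diodes from conducting into the supply rail.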
If transistor QP204 had not been turned off prior to transistor QP202 turning on, a leakage current would propagate through transistor QP202 to the power supply reference VCC. Thus, turning off transistor QP204 prior to the turning on of transistor QP202 prevents a small amount of leakage current from propagating through transistor QP202 to the power supply reference VCC. In addition, turning transistor QN200 off blocks the output voltage from propagating through QN200 to the rest of the circuitry. Thus, the embodiment of the present invention prevents damage to the circuitry, and prevents leakage current from flowing to the power supply rail VCC, by blocking an output voltage in excess of the power supply voltage VCC. Those skilled in the art to which the invention relates will appreciate that various substitutions, modifications and additions can be made to the described embodiments, without departing from the spirit and scope of the invention as defined by the claims.
A system and methodology are disclosed for monitoring and controlling a semiconductor fabrication process. One or more structures formed on a wafer matriculating through the process facilitate concurrent measurement of critical dimensions and overlay via scatterometry or a scanning electron microscope (SEM). The concurrent measurements mitigate fabrication inefficiencies, thereby reducing time and real estate required for the fabrication process. The measurements can be utilized to generate feedback and/or feed-forward data to selectively control one or more fabrication components and/or operating parameters associated therewith to achieve desired critical dimensions and to mitigate overlay error.
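As a hedged illustration of the signature-comparison idea in the abstract (not code from the disclosure), a measured scatterometry signature can be matched against a library of stored signatures by a simple minimum-distance search; all names here are hypothetical:

```python
import math

def best_match(measured, library):
    """Return the index of the stored signature closest to the measured one,
    using Euclidean distance. Signatures are sequences of numeric samples
    (e.g. reflected intensity versus angle or wavelength)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(library)), key=lambda i: distance(measured, library[i]))
```

A process controller could then compare the matched signature's associated critical dimensions and overlay against tolerances and feed corrections forward or backward to the fabrication components, along the lines the claims describe.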
What is claimed is:1. A system that monitors and controls a semiconductor fabrication process comprising:a structure formed on at least a portion of a wafer matriculating through the fabrication process that facilitates concurrent measurement of one or more critical dimensions and overlay;a measurement system that concurrently measures one or more critical dimensions and overlay by mapping the wafer into one or more grids that comprise one or more locations on which a grating structure is formed; anda control system operatively coupled to the measurement system and one or more fabrication components to selectively control one or more of the fabrication components or operating parameters of the fabrication components to mitigate overlay error and to bring critical dimensions within acceptable tolerances based upon one or more of the concurrent measurements taken by the measurement system.2. The system of claim 1 wherein the structure, to facilitate concurrent measurement of one or more critical dimensions and overlay via scatterometry, comprises one or more underlying gratings that facilitate overlay measurements and one or more overlying gratings that facilitate critical dimension measurements.3. The system of claim 2 wherein the overlying gratings comprise elongated raised portions.4. The system of claim 2 wherein a first grouping of the overlying gratings comprises multiple elongated portions oriented substantially in parallel with one another to facilitate measuring overlay in a first direction.5. The system of claim 2 wherein a second grouping of the overlying gratings comprises multiple elongated portions substantially in parallel with one another and oriented substantially perpendicular to the first grouping to facilitate measuring overlay in a second direction.6.
The system of claim 1 wherein the structure, to facilitate concurrent measurement of one or more critical dimensions and overlay via SEM, comprises one or more gratings formed in a material layer of the structure and one or more features formed within a resist layer of the structure, the material layer comprising at least one of polysilicon, nitride, and silicon dioxide.7. The system of claim 6 wherein the SEM can interrogate the features for critical dimensions and portions of the gratings extending from the features for a determination of at least one of overlay and overlay error.8. The system of claim 6 wherein the structure is formed within an implant layer of a flash memory product.9. The system of claim 6 wherein the measurement system comprises:a beam generating system from which an electron beam is generated and projected through an electromagnetic lens onto the structure; andone or more detectors that detect electrons reflected off of the structure.10. The system of claim 1 wherein the measurement system includes one or more light sources that direct light incident to the gratings; andone or more light detecting components that collect light emitted from the gratings, the emitted light varying in at least one of angle, intensity, phase and polarization as the fabrication process progresses.11. The system of claim 10 wherein the emitted light can be analyzed to generate one or more signatures for comparison to one or more stored signatures to determine whether one or more critical dimensions fall outside of acceptable tolerances and/or whether overlay error is occurring.12. The system of claim 10 wherein the emitted light can be analyzed by an algorithm to obtain a measurement in order to determine whether one or more critical dimensions fall outside of acceptable tolerances and/or whether overlay error is occurring.13. 
The system of claim 1 wherein the control system controls at least one of alignment, exposure, post exposure baking, development, photolithography, etching, polishing, deposition, exposure time, exposure intensity, exposure magnification, exposure de-magnification, movement via a stepper motor, temperatures associated with the process, pressures associated with the process, concentration of gases applied to the process, concentration of chemicals applied to the process, flow rates of gases applied to the process, flow rates of chemicals applied to the process, excitation voltages associated with the process, illumination time, illumination intensity, concentration of slurry applied during CMP, rate of flow of slurry applied during CMP, degree of abrasiveness of slurry applied during CMP, pressure applied during CMP, baking time, baking temperatures and etchant concentrations.14. A system that facilitates concurrent measurement of one or more critical dimensions and overlay during a semiconductor fabrication process via scatterometry comprising:one or more underlying gratings formed within an underlying layer of at least a portion of a wafer undergoing the semiconductor fabrication process that facilitates measurement of overlay by analyzing light reflected from the gratings;one or more overlying gratings formed over the underlying gratings on an overlying layer of at least a portion of the wafer that facilitates measurement of one or more critical dimensions by analyzing reflected light; andone or more logical grids mapped to the wafer comprising one or more locations in which the underlying and overlying gratings for use in concurrent measurements are formed.15. The system of claim 14 wherein the overlying gratings comprise elongated raised portions.16.
The system of claim 14 wherein a first grouping of overlying gratings comprises multiple elongated portions oriented substantially in parallel with one another to facilitate measuring overlay in a first direction.17. The system of claim 14 wherein a second grouping of overlying gratings comprises multiple elongated portions substantially in parallel with one another and oriented substantially perpendicular to the first grouping to facilitate measuring overlay in a second direction.18. A system that facilitates concurrent measurement of one or more critical dimensions and overlay error during a semiconductor fabrication process via a scanning electron microscope (SEM) comprising:one or more features formed in a resist layer of at least a portion of a wafer undergoing the fabrication process that facilitates measurement of one or more critical dimensions by analyzing electrons from an electron beam directed at and reflected from at least the portion of the wafer; andone or more gratings formed under the features in a polysilicon layer of at least a portion of the wafer by mapping the wafer into one or more blocks comprising one or more locations in which the one or more gratings for use in the concurrent measurement are formed, portions of the gratings extending out from under the features facilitating a determination of overlay error by analyzing reflected electrons, the features being formed within an implant layer of a flash memory product.19.
A method for monitoring and controlling a semiconductor fabrication process comprising:providing a plurality of wafers undergoing the fabrication process;mapping the plurality of wafers into one or more logical grids comprising one or more portions in which a grating structure for use in concurrent measurements is formed;concurrently measuring one or more critical dimensions and overlay in a wafer undergoing the fabrication process;determining if one or more of the critical dimensions are outside of acceptable tolerances;determining whether an overlay error is occurring;developing control data based upon one or more concurrent measurements when at least one of an overlay error is occurring and one or more of the critical dimensions fall outside of acceptable tolerances; andfeeding forward or backward the control data to adjust one or more fabrication components or one or more operating parameters associated with the fabrication components when at least one of an overlay error is occurring and one or more of the critical dimensions fall outside of acceptable tolerances to mitigate overlay error and/or to bring critical dimensions within acceptable tolerances.20. The method of claim 19 further comprising:forming at least one grating structure on at least a portion of the wafer that facilitates concurrent measurements of one or more critical dimensions and overlay using at least one of scatterometry and scanning electron microscope (SEM) techniques.21.
The method of claim 19 wherein scatterometry is utilized to concurrently measure one or more critical dimensions and overlay, the method further comprising:forming one or more underlying gratings within an underlying layer of at least a portion of the wafer that facilitates measurement of overlay by reflecting light directed incident to the gratings; andforming one or more overlying gratings over the underlying gratings on an overlying layer of at least a portion of the wafer that facilitates measurement of one or more critical dimensions by reflecting the incident light.22. The method of claim 19 wherein SEM is utilized to concurrently measure one or more critical dimensions and overlay, the method further comprising:forming one or more features in a resist layer of at least a portion of the wafer that facilitates measurement of one or more critical dimensions by analyzing electrons emitted from an electron beam directed at and reflected from at least the portion of the wafer; andforming one or more gratings under the features in a polysilicon layer of at least a portion of the wafer, portions of the gratings extending out from under the features facilitating a determination of overlay error by analyzing the reflected electrons.23. 
A system that monitors and controls a semiconductor fabrication process comprising:means for mapping one or more logical grid blocks to at least one wafer;means for forming at least one grating structure according to the one or more logical grid blocks associated with at least a portion of the wafer undergoing the fabrication process that facilitates concurrent measurements of one or more critical dimensions and overlay;means for directing at least one of light and electrons onto the structure;means for collecting light reflected from the structure;means for analyzing the reflected light to determine whether at least one of an overlay error is occurring and one or more critical dimensions fall outside of acceptable tolerances; andmeans for adjusting one or more fabrication components or one or more operating parameters associated with the fabrication components when at least one of overlay error is occurring and one or more of the critical dimensions fall outside of acceptable tolerances to mitigate overlay error and/or to bring critical dimension within acceptable tolerances.24. The system of claim 23 wherein the concurrent measurements are performed using at least one of scatterometry and SEM.25. The system of claim 23 further comprising means for developing control data based upon one or more of the concurrent measurements when at least one of an overlay error is occurring and one or more of the critical dimensions fall outside of acceptable tolerances.
TECHNICAL FIELDThe present invention generally relates to monitoring and/or controlling a semiconductor fabrication process, and in particular to a system and methodology for concurrently measuring critical dimensions and overlay during the fabrication process and controlling operating parameters to refine the process in response to the measurements.BACKGROUNDIn the semiconductor industry, there is a continuing trend toward higher device densities. To achieve these high densities, there has been and continues to be efforts toward scaling down device dimensions (e.g., at submicron levels) on semiconductor wafers. In order to accomplish such high device packing density, smaller and smaller feature sizes are required in integrated circuits (ICs) fabricated on small rectangular portions of the wafer, commonly known as dies. This may include the width and spacing of interconnecting lines, spacing and diameter of contact holes, the surface geometry such as corners and edges of various features as well as the surface geometry of other features. To scale down device dimensions, more precise control of fabrication processes are required. The dimensions of and between features can be referred to as critical dimensions (CDs). Reducing CDs, and reproducing more accurate CDs facilitates achieving higher device densities through scaled down device dimensions and increased packing densities.The process of manufacturing semiconductors or ICs typically includes numerous steps (e.g., exposing, baking, developing), during which hundreds of copies of an integrated circuit may be formed on a single wafer, and more particularly on each die of a wafer. In many of these steps, material is overlayed or removed from existing layers at specific locations to form desired elements of the integrated circuit. Generally, the manufacturing process involves creating several patterned layers on and into a substrate that ultimately forms the complete integrated circuit. 
This layering process creates electrically active regions in and on the semiconductor wafer surface. The layer to layer alignment and isolation of such electrically active regions depends, at least in part, on the precision with which features can be placed on a wafer. If the layers are not aligned within acceptable tolerances, overlay errors can occur compromising the performance of the electrically active regions and adversely affecting chip reliability.SUMMARY OF THE INVENTIONThe following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended to neither identify key or critical elements of the invention nor delineate the scope of the invention. Its purpose is merely to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.According to one or more aspects of the present invention, one or more structures formed on a wafer matriculating through a semiconductor fabrication process facilitate concurrent measurement of overlay and one or more critical dimensions in the fabrication process with either scatterometry or a scanning electron microscope (SEM). The concurrent measurements mitigate fabrication inefficiencies as two operations are combined into one. The combined measurements facilitate a reduction in, among other things, time and real estate required for the fabrication process. 
The measurements can be utilized to generate control data that can be fed forward and/or backward to selectively adjust one or more fabrication components and/or operating parameters associated therewith to bring critical dimensions within acceptable tolerances and to mitigate overlay errors.To the accomplishment of the foregoing and related ends, certain illustrative aspects of the invention are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which one or more of the principles of the invention may be employed and the present invention is intended to include all such aspects and their equivalents. Other advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a block diagram schematically illustrating at a high level a system for monitoring and controlling a semiconductor fabrication process in accordance with one or more aspects of the present invention.FIG. 2 is a cross sectional side view of a structure in accordance with one or more aspects of the present invention that facilitates concurrent measurement of critical dimensions and overlay with scatterometry.FIG. 3 is a top view of a structure, such as that depicted in FIG. 2, that can be utilized to concurrently measure critical dimensions and overlay with scatterometry.FIG. 4 is a cross sectional side view of a structure in accordance with one or more aspects of the present invention that facilitates concurrent measurement of critical dimensions and overlay with a scanning electron microscope (SEM).FIG. 5 is a top view of a structure, such as that depicted in FIG. 4, that can be utilized to concurrently measure critical dimensions and overlay with SEM.FIG. 
6 is a cross sectional side view of an alternative structure that facilitates concurrent measurement of critical dimensions and overlay with SEM.FIG. 7 illustrates a portion of a system for monitoring a semiconductor fabrication process with scatterometry according to one or more aspects of the present invention.FIG. 8 illustrates a system for monitoring and controlling a semiconductor fabrication process according to one or more aspects of the present invention.FIG. 9 illustrates a portion of a system for monitoring a semiconductor fabrication process with SEM according to one or more aspects of the present invention.FIG. 10 illustrates another system for monitoring and controlling a semiconductor fabrication process according to one or more aspects of the present invention.FIG. 11 illustrates a perspective view of a grid mapped wafer according to one or more aspects of the present invention.FIG. 12 illustrates plots of measurements taken at grid mapped locations on a wafer in accordance with one or more aspects of the present invention.FIG. 13 illustrates a table containing entries corresponding to measurements taken at respective grid mapped locations on a wafer in accordance with one or more aspects of the present invention.FIG. 14 is a flow diagram illustrating a methodology for monitoring and controlling an IC fabrication process according to one or more aspects of the present invention.FIG. 15 illustrates an exemplary scatterometry system suitable for implementation with one or more aspects of the present invention.FIG. 16 is a simplified perspective view of an incident light reflecting off a surface in accordance with one or more aspects of the present invention.FIG. 17 is another simplified perspective view of an incident light reflecting off a surface in accordance with one or more aspects of the present invention.FIG. 
18 illustrates a complex reflected and refracted light produced when an incident light is directed onto a surface in accordance with one or more aspects of the present invention.FIG. 19 illustrates another complex reflected and refracted light produced when an incident light is directed onto a surface in accordance with one or more aspects of the present invention.FIG. 20 illustrates yet another complex reflected and refracted light produced when an incident light is directed onto a surface in accordance with one or more aspects of the present invention.FIG. 21 illustrates phase and/or intensity signals recorded from a complex reflected and refracted light produced when an incident light is directed onto a surface in accordance with one or more aspects of the present invention.DETAILED DESCRIPTION OF THE INVENTIONThe present invention is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, to one skilled in the art that one or more aspects of the present invention may be practiced with a lesser degree of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more aspects of the present invention.The term "component" as used herein includes computer-related entities, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be a process running on a processor, a processor, an object, an executable, a thread of execution, a program and a computer. By way of illustration, both an application running on a server and the server can be components. 
By way of further illustration, both a stepper and a process controlling the stepper can be components.It is to be appreciated that various aspects of the present invention may employ technologies associated with facilitating unconstrained optimization and/or minimization of error costs. Thus, non-linear training systems/methodologies (e.g., back propagation, Bayesian, fuzzy sets, non-linear regression, or other neural networking paradigms including mixture of experts, cerebellar model arithmetic computer (CMAC), radial basis functions, directed search networks and function link networks) may be employed.FIG. 1 illustrates a system 100 for monitoring and controlling an integrated circuit (IC) fabrication process according to one or more aspects of the present invention. The system 100 includes a control system 102, fabrication components 104 of the process, a measurement system 106 and a wafer 108 undergoing the fabrication process. The wafer 108 has one or more structures 110 formed therein according to one or more aspects of the present invention. The control system 102 is operatively coupled to the measurement system 106 and the fabrication components 104 and selectively controls the fabrication components 104 and/or one or more operating parameters associated therewith (e.g., via feed forward and/or feedback) based upon readings taken by the measurement system 106. The measurement system 106 includes either a scatterometry system or a scanning electron microscope (SEM) system (not shown) which interacts with the structure 110 to concurrently measure critical dimensions and overlay. These concurrent measurements can be utilized to monitor and control the fabrication process while mitigating the amount of test equipment, real estate and time required for the fabrication process. 
The measurements can, in particular, be utilized for generating feedback and/or feed-forward data for mitigating overlay and/or bringing critical dimensions within acceptable tolerances.It is to be appreciated that any of a variety of fabrication components and/or operating parameters associated therewith can be selectively controlled based upon the readings taken by the measurement system 106. By way of example and not limitation, this can include temperatures associated with the process, pressures associated with the process, concentration of gases and chemicals within the process, composition of gases, chemicals and/or other ingredients within the process, flow rates of gases, chemicals and/or other ingredients within the process, timing parameters associated with the process and excitation voltages associated with the process. By way of further example, parameters associated with high-resolution photolithographic components utilized to develop ICs with small, closely spaced features can be controlled to mitigate overlay errors and achieve desired critical dimensions. In general, lithography refers to processes for pattern transfer between various media and in semiconductor fabrication, a silicon slice, the wafer, is coated uniformly with a radiation-sensitive film, the photoresist. The photoresist coated substrate is baked to evaporate any solvent in the photoresist composition and to fix the photoresist coating onto the substrate. An exposing source (such as light, x-rays, or an electron beam) illuminates selected areas of the surface of the film through an intervening master template for a particular pattern. The lithographic coating is generally a radiation-sensitized coating suitable for receiving a projected image of the subject pattern. 
Once the image from the intervening master template is projected onto the photoresist, it is indelibly formed therein.Light projected onto the photoresist layer during photolithography changes properties (e.g., solubility) of the layer such that different portions thereof (e.g., the illuminated or un-illuminated portions, depending upon the photoresist type) can be manipulated in subsequent processing steps. For example, regions of a negative photoresist become insoluble when illuminated by an exposure source such that the application of a solvent to the photoresist during a subsequent development stage removes only non-illuminated regions of the photoresist. The pattern formed in the negative photoresist layer is, thus, the negative of the pattern defined by opaque regions of the template. By contrast, in a positive photoresist, illuminated regions of the photoresist become soluble and are removed via application of a solvent during development. Thus, the pattern formed in the positive photoresist is a positive image of opaque regions on the template. Controlling the degree to which a photoresist is exposed to illumination (e.g., time, intensity) can thus affect the fidelity of pattern transfer and resulting critical dimensions and overlay. For example, overexposure can create features that are too thin, resulting in spaces which are larger than desired, while underexposure can create features that are too wide, resulting in spaces which are smaller than desired.The type of illumination utilized to transfer the image onto a wafer can also be controlled to affect critical dimensions. For instance, as feature sizes are driven smaller and smaller, limits are approached due to the wavelengths of the optical radiation. As such, the type of radiation, and thus the wavelengths of radiation, utilized for pattern transfers can be controlled to adjust critical dimensions and mitigate overlay. 
For instance, radiation having more conducive wavelengths (e.g., extreme ultraviolet (EUV) and deep ultraviolet (DUV) radiation having wavelengths within the range of 5-200 nm) can be utilized for lithographic imaging in an effort to accurately achieve smaller feature sizes. However, such radiation can be highly absorbed by the photoresist material. Consequently, the penetration depth of the radiation into the photoresist can be limited. The limited penetration depth requires use of ultra-thin photoresists so that the radiation can penetrate the entire depth of the photoresist in order to effect patterning thereof. The performance of circuits formed through photolithographic processing is, thus, also affected by the thickness of photoresist layers. The thickness of photoresist layers can be reduced through chemical mechanical polishing (CMP). In general, CMP employs planarization techniques wherein a surface is processed by a polishing pad in the presence of an abrasive or non-abrasive liquid slurry. The slurry employed reacts with the photoresist at the surface/subsurface range. Preferably the degree of reaction is not great enough to cause rapid or measurable dissolution (e.g., chemical etching) of the photoresist, but merely sufficient to cause a minor modification of chemical bonding in the photoresist adequate to facilitate surface layer removal by applied mechanical stress (e.g., via use of a CMP polishing pad). Thus, critical dimensions and overlay can be affected by controlling the concentration, rate of flow and degree of abrasiveness of slurry applied during the CMP process as well as the amount of pressure applied between the polishing pad and wafer during the process.Depending upon the resist system utilized, post exposure baking may also be employed to activate chemical reactions in the photoresist to affect image transfer. 
The temperatures and/or times that portions of the wafer are exposed to particular temperatures can be controlled to regulate the uniformity of photoresist hardening (e.g., by reducing standing wave effects and/or to thermally catalyze chemical reactions that amplify the image). Higher temperatures can cause faster baking and faster hardening, while lower temperatures can cause slower baking and correspondingly slower hardening. The rate and uniformity of photoresist hardening can affect critical dimensions and overlay, such as, for example, by altering the consistency of a line width. Accordingly, time and temperature parameters can be controlled during post exposure baking to affect critical dimensions and overlay.Operating parameters of an etching stage can similarly be controlled to achieve desired critical dimensions and to mitigate overlay. After illumination, the pattern image is transferred into the wafer from the photoresist coating in an etching stage wherein an etchant, as well as other ingredients, are applied to the surface of the wafer by an excitation voltage or otherwise. The etchant removes or etches away portions of the wafer exposed during the development process. Portions of the wafer under less soluble areas of the photoresist are protected from the etchants. The less soluble portions of the photoresist are those portions that are not affected by the developer during the development process and that are not affected by the etchant during the etching process. These insoluble portions of the photoresist are removed in subsequent processing stage(s) to completely reveal the wafer and the pattern(s) formed therein. 
The concentration of materials utilized in etching can thus be controlled to achieve desired critical dimensions and to mitigate overlay, for instance, by affecting the accuracy with which selected portions of the wafer are etched away.Parameters relating to the type of template utilized to transfer an image onto a wafer can also be controlled to affect critical dimensions, layer to layer alignment and overlay. Where the template is a reticle, the pattern is transferred to only one (or a few) die per exposure, as opposed to where the template is a mask and all (or most) die on the wafer are exposed at once. Multiple exposures through a reticle are often performed in a step and scan fashion. After each exposure, a stage to which the wafer is mounted is moved or stepped to align the next die for exposure through the reticle and the process is repeated. This process may need to be performed as many times as there are die in the wafer. Thus, stepper movement can be controlled to mitigate overlay error (e.g., by feeding forward and/or backward measurements to a stepper motor). The pattern formed within the reticle is often an enlargement of the pattern to be transferred onto the wafer. This allows more detailed features to be designed within the reticle. Energy from light passed through the reticle can, however, heat the reticle when the image is exposed onto the wafer. This can cause mechanical distortions in the reticle due to thermal expansion and/or contraction of the reticle. Such distortions may alter the geometry of intricate features (e.g., by narrowing a line) and/or interfere with layer to layer registration to such a degree that a resulting circuit does not operate as planned when the image is transferred onto the wafer. Moreover, since the pattern is usually an enlargement of the pattern to be transferred onto the wafer, it typically has to be reduced (e.g., via a de-magnifying lens system) during the lithographic process. 
Shrinking an already distorted feature (e.g., a narrowed line) can have a deleterious effect on critical dimensions. Thus, while such a template may be effective to transfer more intricate pattern designs, it calls for highly accurate alignment and imaging to mitigate overlay errors and maintain critical dimensions to within acceptable tolerances. Temperature controls can thus be employed to mitigate thermally induced mechanical distortions in the reticle.Additionally, parameters relating to film growth or deposition components (e.g., producing metals, oxides, nitrides, poly, oxynitrides or insulators) can be controlled to achieve desired critical dimensions and mitigate overlay. Such films can be formed through thermal oxidation and nitridation of single crystal silicon and polysilicon, the formation of silicides by direct reaction of a deposited metal and the substrate, chemical vapor deposition (CVD), physical vapor deposition (PVD), low pressure CVD (LPCVD), plasma enhanced CVD (PECVD), rapid thermal CVD (RTCVD), metal organic chemical vapor deposition (MOCVD) and pulsed laser deposition (PLD). The rates of flow, temperature, pressures, concentrations and species of materials supplied during the semiconductor fabrication process can thus be controlled to govern film formation which bears on critical dimensions and overlay.Scatterometry or scanning electron microscope (SEM) techniques can be employed in accordance with one or more aspects of the present invention to concurrently measure critical dimensions and overlay at different points in an IC fabrication process to determine what effect, if any, the various processing components are having on the fabrication process. 
Different grating and/or feature heights and/or depths may, for example, be measured to generate different signatures that may be indicative of the effect that one or more processing components are having upon the fabrication process and which operating parameters of which processing components, if any, should thus be adjusted to rectify any undesirable processing. The processing components and/or operating parameters thereof can be controlled based upon feedback/feedforward information generated from the measurements. For example, at a first point in time a first signature may be generated that indicates that desired critical dimensions have not yet been achieved but are developing within acceptable tolerances, but that an overlay error is occurring. Thus, the process may be adapted in an attempt to mitigate overlay error, but not affect developing critical dimensions. Then, at a second point in time a second signature may be generated that indicates that an overlay error is no longer occurring, but that the desired critical dimensions still have not been achieved. Thus, the process may be allowed to continue until a later point in time when a corresponding signature indicates that the desired critical dimensions have been achieved without overlay error.Turning to FIG. 2, a cross sectional side view of a combined structure 200 in accordance with one or more aspects of the present invention is illustrated. The structure 200 facilitates concurrent measurement of critical dimensions and overlay with a scatterometry system and can be formed on a portion (e.g., a die) of a wafer matriculating through an IC fabrication process, for example. The structure includes one or more underlying gratings 202 that facilitate overlay 204 measurements and alignment with other layers and one or more overlying gratings 206 that facilitate critical dimension 208 measurements. 
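The adaptive behavior described above (adjusting for overlay without disturbing critical dimensions that are developing acceptably, and continuing the process until target dimensions are reached) can be sketched as simple decision logic. The function name, inputs, and returned action strings below are illustrative assumptions; the disclosure does not prescribe a particular software interface.

```python
def control_action(cd_within_tolerance, cd_target_achieved, overlay_error):
    """Illustrative decision logic for the signature-based scenarios above.

    Assumed inputs (not part of the original disclosure): booleans derived
    from comparing a measured signature against stored signatures.
    """
    if overlay_error:
        # Mitigate overlay without affecting CD-related parameters.
        return "adjust alignment parameters; leave CD-related parameters unchanged"
    if not cd_within_tolerance:
        # CDs developing outside acceptable tolerances.
        return "adjust CD-related parameters (e.g., exposure, bake, etch)"
    if not cd_target_achieved:
        # CDs acceptable but target not yet reached; let the process continue.
        return "continue processing; re-measure later"
    return "targets achieved; no adjustment"

# First point in time: CDs developing acceptably, but an overlay error occurs.
print(control_action(True, False, True))
```

In a fuller implementation this decision would feed forward and/or backward to the relevant fabrication components rather than return a string.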
It is to be appreciated that the overlying gratings 206 can be printed at any location on the wafer where it is desired to monitor critical dimensions. Measurements can thus be taken at more appropriate circuit locations, such as at a core memory array, for example, as opposed to test structures in scribe lines. Measuring critical dimensions and overlay in a single operation with a single tool (e.g., by scatterometry) mitigates fabrication duration and spacing requirements. This allows efficiency to be increased without sacrificing quality control.FIG. 3 illustrates a top view of a substrate 300 (e.g., a wafer) and an enlargement 302 of overlying gratings 304 formed on a portion 306 (e.g., a die) of the wafer in accordance with one or more aspects of the present invention. The gratings can, for example, correspond to the overlying gratings depicted in FIG. 2 which, along with the underlying gratings shown in FIG. 2, facilitate concurrent measurement of critical dimensions and overlay with a scatterometry system. It is to be appreciated that no underlying gratings are depicted in FIG. 3 for purposes of simplicity. In the example shown, the overlying gratings 304 are formed both horizontally 308 and vertically 310 on the wafer. The vertical gratings 310 facilitate measuring overlay in an X-direction while the horizontal gratings 308 facilitate measuring overlay in a Y-direction with a single scatterometry system. It will be appreciated that such gratings can be oriented in any suitable direction(s) to obtain desired measurements. Also, the gratings can be located between production regions 312 of the substrate so as to maximize real estate associated with the device being manufactured. The particular gratings depicted in FIG. 3 include a series of elongated marks, which can be implemented as raised portions in the substrate or as troughs, such as etched into the substrate. 
It is to be appreciated that more complex (e.g., nonlinear) grating patterns and/or substrate features (e.g., lines, connectors, etc.) could also be utilized in accordance with one or more aspects of the present invention.FIG. 4 illustrates a cross sectional side view of a structure 400 according to one or more aspects of the present invention that facilitates concurrent measurement of critical dimensions and overlay with a scanning electron microscope (SEM). The structure can be formed on a portion (e.g., a die) of a wafer undergoing an IC fabrication process. By way of example and not limitation, one specific application for such a structure can be in an implant layer of a flash memory product. One or more gratings 402 are formed in a polysilicon layer 404 of the structure while one or more features 406 are formed within a resist layer 408 of the structure. Both the resist 408 and polysilicon 404 layers are formed over other underlying layers 410. The structure 400, and more particularly the features 406 in the resist layer 408, can be interrogated by an SEM system to reveal critical dimensions, such as line widths 412 and spacings 414 there-between. Similarly, overlay can be ascertained by finding the difference between first 416 and second 418 SEM interrogated measurements of underlying gratings 402. It will be appreciated that such a differencing function can be implemented by a simple software component. It will be further appreciated that the structure 400 can be formed at any suitable location on a wafer to obtain desired measurements.FIG. 5 depicts a top view of a structure 500, such as that illustrated in FIG. 4. The structure 500 can be formed within a portion (e.g., a die) of a wafer and can be utilized, for example, to concurrently measure critical dimensions and overlay in an implant layer in a flash memory product. The structure includes features 502 formed in a resist layer and gratings 504 formed under the resist layer in a polysilicon layer. 
Other layers (not shown) underlie both the resist and polysilicon layers. An SEM system can interrogate the features 502 in the resist layer to ascertain critical dimensions, such as line widths 506 and spaces 508 there-between. Overlay (and/or overlay error) can be determined by differencing SEM measurements of first 510 and second 512 grating portions that project out from under features formed in the resist layer.FIG. 6 illustrates an alternative structure 600 for concurrently measuring critical dimensions and overlay with an SEM system in accordance with one or more aspects of the present invention. The structure can be formed on a portion (e.g., a die) of a wafer undergoing an IC fabrication process. Overlay, including overlay error, can be determined by interrogating features on the wafer with an SEM system and finding differences in spacings between the features. For instance, respective values of two different spacing measurements 602, 604 can periodically be obtained and subtracted from one another throughout the fabrication process to determine if an overlay error is occurring. Critical dimensions 606 can be monitored with an SEM system by interrogating gratings/features 608 printed within top or upper layers 610 of the wafer.FIG. 7 illustrates a portion of a system 700 being employed to monitor (e.g., via scatterometry) a wafer 702 matriculating through a semiconductor fabrication process according to one or more aspects of the present invention. It will be appreciated that only a small portion of the wafer 702 is depicted in FIG. 7 for purposes of simplicity. The wafer 702 has a structure 704 formed thereon according to one or more aspects of the present invention. The structure includes one or more underlying gratings 706 that facilitate overlay measurements and alignment with other layers and one or more overlying gratings 708 that facilitate critical dimension measurements. 
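The "simple software component" for differencing two SEM grating measurements mentioned above can be sketched as follows. The function name, the nanometer units, and the tolerance parameter are illustrative assumptions rather than details taken from the disclosure.

```python
def overlay_error(first_measurement_nm, second_measurement_nm, tolerance_nm=5.0):
    """Difference two SEM measurements of grating portions projecting out
    from under the resist features.

    Returns the signed overlay error and whether it exceeds an (assumed)
    acceptable tolerance.
    """
    error = first_measurement_nm - second_measurement_nm
    return error, abs(error) > tolerance_nm

# Hypothetical example: the two grating portions project 12 nm and 20 nm.
error, out_of_tolerance = overlay_error(12.0, 20.0)
print(error, out_of_tolerance)  # -8.0 True
```

Equal projections on both sides would yield a zero error, indicating the layers are registered; a large signed error indicates the direction and magnitude of the misalignment.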
The structure allows these measurements to be taken concurrently with a single measuring tool, thus mitigating fabrication equipment, time and spacing requirements while improving feature accuracy and chip quality control.

A light source 710 provides light to one or more light emitters 712 that direct a light 714 incident to the upper 708 and lower 706 gratings. The light 714 is reflected from the gratings as reflected light 716. The incident light 714 may be referred to as the reference beam, and thus the phase, intensity and/or polarization of the reference beam 714 may be recorded in a measurement system 718 to facilitate later comparisons to the reflected beam 716 (e.g., via signature comparison). The angle of the reflected light 716 from the gratings 706, 708 will vary in accordance with the evolving dimensions of the gratings and/or with the evolving dimensions of one or more patterns being developed in the wafer 702. Similarly, the intensity, phase and polarization properties of the specularly reflected light 716 may vary in accordance with the evolving dimensions. One or more light detecting components 720 collect the reflected light 716 and transmit the collected light, and/or data associated with the collected light, to the measurement system 718. The measurement system forwards this information to a processor 722, which may or may not be integral with the measurement system 718. The processor 722, or central processing unit (CPU), is programmed to control and carry out the various functions described herein. The processor 722 may be any of a plurality of processors, and the manner in which the processor can be programmed to carry out the functions described herein will be readily apparent to those having ordinary skill in the art based on the description provided herein.
The reflected light 716 can, for example, be analyzed to generate one or more signatures that can be compared to one or more stored signatures to determine whether, for example, desired critical dimensions are being achieved and/or whether overlay error is occurring and thus whether, for example, feed forward and/or backward information should be generated and applied to selectively control and adjust one or more operating parameters of one or more IC fabrication components (e.g., alignment, post exposure baking, development, photolithography, etching, polishing, deposition) to achieve a desired result.

Turning to FIG. 8, a system 800 for monitoring and controlling a semiconductor fabrication process according to one or more aspects of the present invention is illustrated. A wafer 802, or a portion thereof, is depicted as undergoing the fabrication process and has a structure 804 formed thereon according to one or more aspects of the present invention. The structure facilitates concurrent measurement with scatterometry techniques of overlay and critical dimensions via one or more underlying gratings 806 and one or more overlying gratings 808, respectively.

One or more light sources 810 project light 812 onto respective portions of the structure 804, which cause the light to be reflected in different, quantifiable manners. Reflected light 814 is collected by one or more light detecting components 816, and processed by a measurement system 818 for a concurrent determination of critical dimensions and overlay.
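The signature comparison described above, in which a measured signature is checked against a stored acceptable signature to decide whether control information should be generated, can be sketched as follows. The list-of-samples signature representation and the tolerance value are illustrative assumptions, not details from the described system.

```python
# Hedged sketch of comparing a measured scatterometry signature to a stored
# (acceptable) signature to decide whether feed forward/backward control data
# should be generated.

def match_distance(measured, stored):
    """Sum-of-squared-differences between two equal-length signatures."""
    return sum((m - s) ** 2 for m, s in zip(measured, stored))

def needs_adjustment(measured, acceptable_signature, tolerance=0.05):
    """True when the measured signature deviates from the stored acceptable
    signature by more than the tolerance, suggesting that fabrication
    components or operating parameters should be adjusted."""
    return match_distance(measured, acceptable_signature) > tolerance

acceptable = [0.2, 0.5, 0.9, 0.5, 0.2]
in_spec = [0.21, 0.49, 0.9, 0.51, 0.2]   # small deviation: no action needed
drifted = [0.4, 0.7, 0.6, 0.3, 0.1]      # large deviation: adjust components
```

Any distance metric (or full pattern matching against a signature library) could stand in for the sum-of-squared-differences used here.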
The reflected light 814 may, for example, be processed to generate signatures, which can be utilized to facilitate feedback and/or feed-forward control of one or more fabrication components 820 and/or operating parameters associated therewith as described herein to achieve desired critical dimensions and to mitigate overlay error.

The measurement system 818 includes a scatterometry system 822, which can be any scatterometry system suitable for carrying out aspects of the present invention as described herein. A source of light 824 (e.g., a laser, broadband radiation in the visible and near ultraviolet range, arc lamp, or a similar device) provides light to the one or more light sources 810 via the measurement system 818. Preferably, the light source 824 is a frequency stabilized laser; however, it will be appreciated that any laser (e.g., laser diode or helium neon (HeNe) gas laser) or other light source suitable for carrying out the present invention may be employed. Similarly, any one or more light detecting components 816 suitable for carrying out aspects of the present invention may be employed (e.g., photo detector, photo diodes) for collecting reflected light.

A processor 826 receives the measured data from the measurement system 818 and is programmed to control and operate the various components within the system 800 in order to carry out the various functions described herein. The processor, or CPU 826, may be any of a plurality of processors, and the manner in which the processor 826 can be programmed to carry out the functions described herein will be readily apparent to those having ordinary skill in the art based on the description provided herein.

The processor 826 is also coupled to a fabrication component driving system 828 that drives the fabrication components 820.
The processor 826 controls the fabrication component driving system 828 to selectively control one or more of the fabrication components 820 and/or one or more operating parameters associated therewith as described herein. The processor 826 monitors the process via the signatures generated by the reflected and/or diffracted light, and selectively regulates the fabrication process by controlling the corresponding fabrication components 820. Such regulation enables controlling critical dimensions and overlay error during fabrication and further facilitates initiating a subsequent fabrication phase with more precise initial data, which facilitates improved chip quality at higher packing densities.

Though not shown in FIG. 8, it should be appreciated that the fabrication components 820 may be separate and independent from the measurement system 818 (e.g., SEM, scatterometry system 822). For example, the measurement system 818 may be linked and/or networked to the fabrication components 820 (e.g., an etcher tool) via a computer network (not shown). Measurements taken from the measurement system 818 may be communicated via the network to the etcher tool in order to adjust times, concentrations, and the like. Moreover, the fabrication components 820 and the measurement system 818 may be in the system 800 but either integrated on the same tool or as separate tools, depending on the application and as desired by a user.

A memory 830 is also shown in the example illustrated in FIG. 8. The memory 830 is operable to store, among other things, program code executed by the processor 826 for carrying out one or more of the functions described herein. The memory may include, for example, read only memory (ROM) and random access memory (RAM). The RAM is the main memory into which the operating system and application programs are loaded.
The memory 830 may also serve as a storage medium for temporarily storing information and data that may be useful in carrying out one or more aspects of the present invention. For mass data storage, the memory 830 may also include a hard disk drive (e.g., 50 Gigabyte hard drive).

A power supply 832 is included to provide operating power to one or more components of the system 800. Any suitable power supply 832 (e.g., battery, line power) can be employed to carry out the present invention.

A training system 834 may also be included. The training system 834 may be adapted to populate a data store 836 (which may be comprised within the memory 830) for use in subsequent monitoring. For example, the scatterometry system 822 can generate substantially unique scatterometry signatures that can be stored in the data store 836 via the training system 834. The data store 836 can be populated with an abundance of scatterometry signatures by examining a series of wafers and/or wafer dies. Scatterometry signatures can be compared to scatterometry measurements stored in the data store 836 to generate feed forward/backward control data that can be employed to control the fabrication process. It is to be appreciated that the data store 836 can store data in data structures including, but not limited to, one or more lists, arrays, tables, databases, stacks, heaps, linked lists and data cubes. Furthermore, the data store 836 can reside on one physical device and/or may be distributed between two or more physical devices (e.g., disk drives, tape drives, memory units).

FIG. 9 illustrates a portion of a system 900 being employed to monitor (e.g., via SEM) the development of a wafer 902 undergoing a semiconductor fabrication process. It will be appreciated that only a small portion of the wafer is depicted in FIG. 9 for purposes of simplicity. The wafer 902 has a structure 904 formed thereon according to one or more aspects of the present invention.
By way of example and not limitation, one specific application for such a structure can be in an implant layer of a flash memory product. One or more gratings 906 are formed in a polysilicon layer 908 of the structure while one or more features 910 are formed within a resist layer 912 of the structure 904. The structure 904, and more particularly the features 910 in the resist layer 912, can be interrogated by an SEM system to reveal critical dimensions, such as line widths and spacings there-between. Similarly, overlay and overlay error, in particular, can be ascertained by finding the difference between first and second SEM interrogated measurements of underlying gratings that jut out from under the features. The structure 904 allows these measurements to be taken concurrently with a single measuring tool, thus mitigating fabrication equipment, time and spacing requirements while improving precision and quality control.

The wafer 902 is housed within a chamber 914 and is interrogated by an electron beam 916 projected from an electromagnetic lens 918. The electron beam 916 is created from high voltage supplied by a power supply 920 associated with a beam generating system 922 which includes an emission element 924. Various directing, focusing, and scanning elements (not shown) in the beam generating system 922 guide the electron beam 916 from the emission element 924 to the electromagnetic lens 918. The electron beam particles can be accelerated to energies from about 500 eV to 40 keV, which can yield, for example, resolutions from about 30 to about 40 Angstroms. When the electron beam 916 strikes a surface, electrons and x-rays, for example, are emitted 924, are detected by a detector 926, and are provided to a detection system 928. The most useful electron signals 924 are low energy secondary electrons that provide a substantial amount of current to the detector 926.
Examples of electron signals 924 include backscattered electrons, reflected electrons, secondary electrons, x-rays, current, and the like.

The detection system 928 can digitize the signal from the detector 926 and/or provide filtering or other signal processing to the information. The detection system 928 forwards this information to a processor 930, which may or may not be integral with the detection system 928. The processor 930, or central processing unit (CPU), is programmed to control and carry out the various functions described herein. The processor 930 may be any of a plurality of processors, and the manner in which the processor 930 can be programmed to carry out the functions described herein will be readily apparent to those having ordinary skill in the art based on the description provided herein.

For example, the processor 930 may control the beam generating system 922 and perform signal analysis. The electron signals 924 can be analyzed to generate one or more signatures that can be compared to one or more stored signatures to determine whether, for example, desired critical dimensions are being achieved and/or whether overlay error is occurring and thus whether, for example, feed forward and/or backward information should be generated and applied to selectively control and adjust one or more operating parameters of one or more IC fabrication components (e.g., alignment, post exposure baking, development, photolithography, etching, polishing, deposition) to achieve a desired result.

Alternatively or in addition, the electron signals 924 may be analyzed directly by an algorithm in order to obtain a measurement. Thus, generating a signature may be optional depending on the application and as desired by a user.
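Analyzing the electron signals "directly by an algorithm," as described above, might look like the following sketch, which estimates a line width from a one-dimensional secondary-electron intensity profile. The profile values, threshold, and pixel pitch are illustrative assumptions.

```python
# Minimal sketch of a direct (signature-free) measurement: estimate the width
# of a feature from a single SEM line scan by counting above-threshold samples.

def line_width(profile, threshold, nm_per_pixel):
    """Width of the feature = number of above-threshold samples times the
    pixel pitch. Assumes a single feature within the scan."""
    above = [value > threshold for value in profile]
    return sum(above) * nm_per_pixel

# Raster line scan across a conductor: low background, high over the feature.
scan = [0.1, 0.1, 0.2, 0.8, 0.9, 0.85, 0.8, 0.2, 0.1]
width_nm = line_width(scan, threshold=0.5, nm_per_pixel=10.0)
```

A production algorithm would interpolate the edge crossings rather than count whole pixels, but the sketch shows why no stored signature is required for such a measurement.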
Likewise, comparing a generated signature to a signature database and/or maintaining a signature database may also be optional according to the user's preferences. It will be appreciated that relative movement between the beam 916 and the wafer 902 can be controlled to facilitate obtaining desired measurements. The electron beam 916 can, for example, scan from point to point in a rectangular raster pattern to facilitate measurement of, for example, the width of a conductor. Accelerating voltage, beam current and spot diameter can also be controlled to achieve desired measurements.

Turning to FIG. 10, a system 1000 for monitoring and controlling a semiconductor fabrication process according to one or more aspects of the present invention is illustrated. A wafer 1002, or a portion thereof, is depicted as undergoing the fabrication process and has a structure 1004 formed thereon according to one or more aspects of the present invention. The structure 1004 facilitates concurrent measurement of overlay and critical dimensions with SEM techniques. The structure can, for example, be in an implant layer of a flash memory product. One or more gratings 1006 are formed in a polysilicon layer 1008 of the structure 1004 while one or more features 1010 are formed within a resist layer 1012 of the structure 1004. The structure 1004, and more particularly the features 1010 in the resist layer 1012, can be interrogated by an SEM system to reveal critical dimensions, such as line widths and spacings there-between. Similarly, overlay and/or overlay error can be ascertained by finding the difference between first and second SEM interrogated measurements of underlying gratings that extend out from under the features 1010.

The wafer 1002 is interrogated by an electron beam 1014 projected onto respective portions of the structure. The electron beam 1014 is part of an SEM system 1016, at least part of which may be integral with a measurement system 1018.
The electron beam 1014 is created from high voltage in a beam generating system 1020 of the SEM which includes an emission element 1022. Various directing, focusing, and scanning elements (not shown) in the beam generating system 1020 guide the electron beam 1014 from the emission element 1022 to an electromagnetic lens 1024. When the electron beam 1014 strikes the structure 1004, electrons and x-rays, for example, are emitted 1026 and are detected by one or more detectors 1028 and are provided to the measurement system 1018 for a concurrent determination of critical dimensions and overlay. The detected electrons may be processed to generate signatures, which can be utilized to facilitate feedback and/or feed-forward control of one or more fabrication components 1030 and/or operating parameters associated therewith as described herein to achieve desired critical dimensions and to mitigate overlay error.

Alternatively, the detected electrons may be automatically and directly analyzed by any number of algorithms to yield measurements without comparison or reference to stored signatures. Thus, processing time may be decreased and the overall efficiency of the system 1000 increased.

A processor 1032 receives the measured data from the measurement system 1018 and is programmed to control and operate the various components within the system 1000 in order to carry out the various functions described herein. The processor, or CPU 1032, may be any of a plurality of processors, and the manner in which the processor 1032 can be programmed to carry out the functions described herein will be readily apparent to those having ordinary skill in the art based on the description provided herein.

The processor 1032 is also coupled to a fabrication component driving system 1034 that drives the fabrication components 1030.
The processor 1032 controls the fabrication component driving system 1034 to selectively control one or more of the fabrication components 1030 and/or one or more operating parameters associated therewith as described herein. The processor 1032 monitors the process via the optional signatures generated by the detected electrons, and selectively regulates the fabrication process by controlling the corresponding fabrication components 1030. Such regulation enables controlling critical dimensions and overlay (e.g., overlay error) during fabrication and further facilitates initiating a subsequent fabrication phase with more precise initial data, which facilitates improved chip quality at higher packing densities.

A memory 1036 is also shown in the example illustrated in FIG. 10. The memory 1036 is operable to store, among other things, program code executed by the processor 1032 for carrying out one or more of the functions described herein. The memory 1036 may include, for example, read only memory (ROM) and random access memory (RAM). The RAM is the main memory into which the operating system and application programs are loaded. The memory 1036 may also serve as a storage medium for temporarily storing information and data that may be useful in carrying out one or more aspects of the present invention. For mass data storage, the memory 1036 may also include a hard disk drive (e.g., 50 Gigabyte hard drive).

A power supply 1038 is included to provide operating power to one or more components of the system 1000. Any suitable power supply 1038 (e.g., battery, line power) can be employed to carry out the present invention.

A training system 1040 may also be included. The training system 1040 may be adapted to populate a data store 1042 (which may be comprised within the memory 1036) for use in subsequent monitoring. For example, the SEM system 1016 can generate substantially unique signatures that can be stored in the data store 1042 via the training system 1040.
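The training flow just described, in which substantially unique signatures are stored in a data store for later comparison, can be sketched as follows. The dictionary representation and labels are assumptions for illustration; as the text notes, lists, tables, databases, and other data structures are equally possible.

```python
# Illustrative sketch of a training system populating a signature data store
# and a later lookup returning the closest stored signature.

signature_store = {}  # label -> signature (list of intensity samples)

def train(label, signature):
    """Populate the data store with a substantially unique signature."""
    signature_store[label] = signature

def closest_match(measured):
    """Return the label of the stored signature nearest the measurement,
    by sum-of-squared-differences."""
    def dist(sig):
        return sum((m - s) ** 2 for m, s in zip(measured, sig))
    return min(signature_store, key=lambda label: dist(signature_store[label]))

# Training phase: examine a series of wafers and/or wafer dies.
train("nominal_cd", [0.2, 0.6, 0.2])
train("overlay_shift", [0.6, 0.2, 0.6])
```

In subsequent monitoring, the label of the closest match (or its associated control data) would drive the feed forward/backward adjustments described in the text.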
The data store 1042 can be populated with an abundance of SEM signatures by examining a series of wafers and/or wafer dies. SEM signatures can be compared to SEM measurements stored in the data store 1042 to generate feed forward/backward control data that can be employed to control the fabrication process. It is to be appreciated that the data store 1042 can store data in data structures including, but not limited to, one or more lists, arrays, tables, databases, stacks, heaps, linked lists and data cubes. Furthermore, the data store 1042 can reside on one physical device and/or may be distributed between two or more physical devices (e.g., disk drives, tape drives, memory units).

Turning now to FIGS. 11-13, in accordance with one or more aspects of the present invention, a wafer 1102 (or one or more die located thereon) situated on a stage 1104 may be logically partitioned into grid blocks to facilitate concurrent measurements of critical dimensions and overlay as the wafer matriculates through a semiconductor fabrication process. This may facilitate selectively determining to what extent, if any, fabrication adjustments are necessary. Obtaining such information may also assist in determining problem areas associated with fabrication processes.

FIG. 11 illustrates a perspective view of a steppable stage 1104 supporting a wafer 1102. The wafer 1102 may be divided into a grid pattern as shown in FIG. 12. Each grid block (XY) of the grid pattern corresponds to a particular portion of the wafer 1102 (e.g., a die or a portion of a die). The grid blocks are individually monitored for fabrication progress by concurrently measuring critical dimensions and overlay with either scatterometry or scanning electron microscope (SEM) techniques.

This may also be applicable in order to assess wafer-to-wafer and lot-to-lot variations. For example, a portion P (not shown) of a first wafer (not shown) may be compared to the corresponding portion P (not shown) of a second wafer.
Thus, deviations between wafers and lots may be determined in order to calculate adjustments to the fabrication components which are necessary to accommodate for the wafer-to-wafer and/or lot-to-lot variations.

In FIG. 12, one or more respective portions of a wafer 1102 (X1Y1 . . . X12Y12) are concurrently monitored for critical dimensions and overlay utilizing either scatterometry or scanning electron microscope techniques. Exemplary measurements produced during fabrication for each grid block are illustrated as respective plots. The plots can, for example, be composite valuations of signatures of critical dimensions and overlay. Alternatively, critical dimensions and overlay values may be compared separately to their respective tolerance limits.

As can be seen, the measurement at coordinate X7Y6 yields a plot that is substantially higher than the measurement of the other portions XY. This can be indicative of overlay, overlay error, and/or one or more critical dimensions outside of acceptable tolerances. As such, fabrication components and/or operating parameters associated therewith can be adjusted accordingly to mitigate repetition of this aberrational measurement. It is to be appreciated that the wafer 1102 and/or one or more die located thereon may be mapped into any suitable number and/or arrangement of grid blocks to effect desired monitoring and control.

FIG. 13 is a representative table of concurrently measured critical dimensions and overlay taken at various portions of the wafer 1102 mapped to respective grid blocks. The measurements in the table can, for example, be amalgams of respective critical dimension and overlay signatures.
As can be seen, all the grid blocks, except grid block X7Y6, have measurement values corresponding to an acceptable value (VA) (e.g., no overlay error is indicated and/or overlay measurements and critical dimensions are within acceptable tolerances), while grid block X7Y6 has an undesired value (VU) (e.g., overlay and critical dimensions are not within acceptable tolerances, thus at least an overlay or CD error exists). Thus, it has been determined that an undesirable fabrication condition exists at the portion of the wafer 1102 mapped by grid block X7Y6. Accordingly, fabrication process components and parameters may be adjusted as described herein to adapt the fabrication process accordingly to mitigate the re-occurrence or exaggeration of this unacceptable condition.

Alternatively, a sufficient number of grid blocks may have desirable measurements so that the single offensive grid block does not warrant scrapping the entire wafer. It is to be appreciated that fabrication process parameters may be adapted so as to maintain, increase, decrease and/or qualitatively change the fabrication of the respective portions of the wafer 1102 as desired. For example, when the fabrication process has reached a predetermined threshold level (e.g., X % of grid blocks have acceptable CDs and no overlay error exists), a fabrication step may be terminated.

In view of the exemplary systems shown and described above, a methodology, which may be implemented in accordance with one or more aspects of the present invention, will be better appreciated with reference to the flow diagram of FIG. 14.
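Before turning to the methodology, the grid-block evaluation described above, classifying each block as VA or VU and checking whether a sufficient fraction of blocks is acceptable, can be sketched as follows. The acceptable value, tolerance, and grid values are illustrative assumptions.

```python
# Hedged sketch of classifying grid-block measurements against an acceptable
# value (VA) and computing the fraction of acceptable blocks for comparison
# against a predetermined threshold ("X % of grid blocks").

ACCEPTABLE = 1.0   # VA: composite valuation for an in-tolerance grid block
TOLERANCE = 0.1

def classify(grid):
    """Map each grid block to 'VA' (acceptable) or 'VU' (undesired)."""
    return {block: ("VA" if abs(value - ACCEPTABLE) <= TOLERANCE else "VU")
            for block, value in grid.items()}

def fraction_acceptable(grid):
    """Fraction of grid blocks whose measurements are acceptable."""
    labels = classify(grid)
    return sum(1 for v in labels.values() if v == "VA") / len(labels)

# X7Y6 is the aberrational block from the FIG. 12/13 discussion.
grid = {"X1Y1": 1.0, "X2Y1": 0.95, "X7Y6": 1.8, "X8Y6": 1.05}
```

Comparing `fraction_acceptable(grid)` against the chosen threshold would then decide whether to terminate a fabrication step, adjust components, or scrap the wafer.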
While, for purposes of simplicity of explanation, the methodology is shown and described as a series of function blocks, it is to be understood and appreciated that the present invention is not limited by the order of the blocks, as some blocks may, in accordance with the present invention, occur in different orders and/or concurrently with other blocks from that shown and described herein. Moreover, not all illustrated blocks may be required to implement a methodology in accordance with one or more aspects of the present invention. It is to be appreciated that the various blocks may be implemented via software, hardware, a combination thereof, or any other suitable means (e.g., device, system, process, component) for carrying out the functionality associated with the blocks. It is also to be appreciated that the blocks are merely to illustrate certain aspects of the present invention in a simplified form and that these aspects may be illustrated via a lesser and/or greater number of blocks.

FIG. 14 is a flow diagram illustrating a methodology 1400 for monitoring and controlling an IC fabrication process according to one or more aspects of the present invention. The methodology begins at 1402 wherein general initializations are performed. Such initializations can include, but are not limited to, establishing pointers, allocating memory, setting variables, establishing communication channels and/or instantiating one or more objects. At 1404, a grid map comprising one or more grid blocks "XY" is generated. Such grid blocks may correspond to dies on the wafer and/or to portions of one or more die on a wafer, for example.

At 1406, a structure such as that described herein is formed at respective grid mapped locations on the wafer to facilitate concurrent measurement of critical dimensions and overlay with either scatterometry or scanning electron microscope (SEM) techniques at the grid mapped locations.
At 1408, as the wafer matriculates through the fabrication process, overlay and critical dimensions, such as depth, width, height, slope, and the like, are concurrently measured with either scatterometry or SEM at the grid mapped locations via the structure formed at the respective locations.

At 1410, a determination is made as to whether measurements have been taken at all (or a sufficient number) of grid mapped locations. If the determination at 1410 is NO, then processing returns to 1408 so that additional measurements can be made. If the determination at 1410 is YES, then at 1412 the measurements are compared to acceptable values to determine if an overlay error is occurring and/or if critical dimensions are within acceptable tolerances.

By way of example, measurements of critical dimensions and overlay can be analyzed to produce signatures. These signatures can then be compared to acceptable signature values for critical dimensions and overlay at the grid mapped locations. Additionally, respective critical dimension and overlay signatures can be aggregated for the respective grid mapped locations to produce a single value for comparison to an acceptable value for the grid mapped locations. At 1414, a determination is made as to whether an undesired value (VU) has been encountered (e.g., indicating that an overlay error is occurring and/or that one or more critical dimensions are outside of acceptable tolerances).

If the determination at 1414 is NO, then at 1416 processing continues as normal. The methodology can then advance to 1418 and end. If, however, the determination at 1414 is YES, meaning that an undesired value was encountered, then at 1420, one or more fabrication components and/or operating parameters associated therewith can be adjusted as described herein according to feed forward control data derived from the measurements to mitigate or remedy the situation.
For example, an exposing source can be turned off and/or data generated by sophisticated modeling techniques can be fed forward to post exposure baking and/or development stages to control processing parameters such as bake time and/or temperature to bring critical dimensions back to within acceptable tolerances and/or to mitigate overlay error.

At 1422, control data derived from the measurements can also be fed back to adjust one or more fabrication components and/or operating parameters associated therewith to mitigate reoccurrence of the undesired event during subsequent processing. For instance, stepped alignment of the wafer can be adjusted to facilitate proper placement of a line on subsequently processed dies. Similarly, exposure time and/or intensity can be controlled so that a line having a proper width is formed within a photoresist layer. The methodology then ends at 1418. As mentioned above, events can occur in orders different from that depicted in FIG. 14. For example, measurements taken, as at 1408, can be compared to acceptable values, as at 1412, prior to determining whether measurements have been taken at all grid mapped locations, as at 1410.

FIG. 15 illustrates an exemplary scatterometry system suitable for implementation with one or more aspects of the present invention. Light from a laser 1502 is brought to focus in any suitable manner to form a beam 1504. A sample, such as a wafer 1506, is placed in the path of the beam 1504, and a photo detector or photo multiplier 1508 of any suitable construction is positioned to receive the scattered and/or reflected light. Different detector methods and arrangements may be employed to determine the scattered and/or reflected power. A microprocessor 1510, of any suitable design, may be used to process detector readouts, including, but not limited to, intensity properties of the specularly reflected light, polarization properties of the specularly reflected light, and angular locations of different diffracted orders.
Thus, light reflected from the sample 1506 may be accurately measured.

Concepts of scatterometry and how they are employed in accordance with one or more aspects of the present invention are discussed with respect to FIGS. 16-21. Scatterometry is a technique for extracting information about a surface upon which an incident light has been directed. Scatterometry is a metrology that relates the geometry of a sample to its scattering effects. Scatterometry is based on optical diffraction responses. Scatterometry can be employed to acquire information concerning properties including, but not limited to, horizontal/vertical alignment/shifting/compression/stretching, dishing, erosion, profile and critical dimensions of a surface and/or features present on a surface. The information can be extracted by comparing the phase and/or intensity of a reference light directed onto the surface with phase and/or intensity signals of a complex reflected and/or diffracted light resulting from the incident light reflecting from and/or diffracting through the surface upon which the incident light was directed. The intensity and/or the phase of the reflected and/or diffracted light will change based on properties of the surface upon which the light is directed. Such properties include, but are not limited to, the planarity of the surface, features on the surface, voids in the surface, and the number and/or type of layers beneath the surface.

Different combinations of the above-mentioned properties will have different effects on the phase and/or intensity of the incident light, resulting in substantially unique intensity/phase signatures in the complex reflected and/or diffracted light. Thus, by examining a signal (signature or stored value) library of intensity/phase signatures, a determination can be made concerning the properties of the surface.
Such substantially unique intensity/phase signatures are produced by light reflected from and/or refracted by different surfaces due, at least in part, to the complex index of refraction of the surface onto which the light is directed. The complex index of refraction (N) can be computed by examining the index of refraction (n) of the surface and an extinction coefficient (k). One such computation of the complex index of refraction can be described by the equation: N = n - jk, where j is an imaginary number.

The signal (signature) library can be constructed from observed intensity/phase signatures and/or signatures generated by modeling and simulation. By way of illustration, when exposed to a first incident light of known intensity, wavelength and phase, a wafer can generate a first intensity/phase signature. Observed signatures can be combined with simulated and modeled signatures to form a signal (signature) library. Simulation and modeling can be employed to produce signatures against which measured intensity/phase signatures can be matched. In one exemplary aspect of the present invention, simulation, modeling and observed signatures are stored in a signal (signature) data store. Thus, when intensity/phase signals are received from scatterometry detecting components, the intensity/phase signals can be pattern matched, for example, to the library of signals to determine whether the signals correspond to a stored signature.

To illustrate the principles described above, reference is now made to FIGS. 16 through 21. Referring initially to FIG. 16, an incident light 1602 is directed at a surface 1600, upon which one or more features 1606 may exist. The incident light 1602 is reflected as reflected light 1604. The properties of the surface 1600, including, but not limited to, thickness, uniformity, planarity, chemical composition and the presence of features, can affect the reflected light 1604.
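Returning to the complex index of refraction given above, the relationship N = n - jk maps directly onto Python's built-in complex type (which also spells the imaginary unit "j"). The n and k values below are illustrative, not material data.

```python
# The complex index of refraction N = n - jk from the text, where n is the
# index of refraction and k is the extinction coefficient.

def complex_index(n: float, k: float) -> complex:
    """Return N = n - jk as a Python complex number."""
    return complex(n, -k)

N = complex_index(1.5, 0.02)
```

A nonzero extinction coefficient k corresponds to an absorbing surface, which is part of what makes the resulting intensity/phase signatures substantially unique.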
The features 1606 are raised upon the surface 1600, but could also be recessed therein. The phase and/or intensity of the reflected light 1604 can be measured and plotted, as partially shown, for example, in FIG. 21. Such plots can be employed to compare measured signals with signatures stored in a signature library using techniques like pattern matching, for example. Referring now to FIG. 17, an incident light 1712 is directed onto a surface 1710 upon which one or more depressions 1718 appear. The incident light 1712 is reflected as reflected light 1714. Depressions 1718 will affect the scatterometry signature to produce a substantially unique signature. It is to be appreciated that scatterometry can be employed to measure, among other things, features appearing on a surface, features appearing in a surface, and features emerging in a pattern. Turning now to FIG. 18, complex reflections and refractions of an incident light 1840 are illustrated. The reflection and refraction of the incident light 1840 can be affected by factors including, but not limited to, the presence of one or more features 1828 and the composition of the substrate 1820 upon which the features 1828 reside. For example, properties of the substrate 1820 including, but not limited to, the thickness of a layer 1822, the chemical properties of the layer 1822, the opacity and/or reflectivity of the layer 1822, the thickness of a layer 1824, the chemical properties of the layer 1824, the opacity and/or reflectivity of the layer 1824, the thickness of a layer 1826, the chemical properties of the layer 1826, and the opacity and/or reflectivity of the layer 1826 can affect the reflection and/or refraction of the incident light 1840. Thus, a complex reflected and/or refracted light 1842 may result from the incident light 1840 interacting with the features 1828, and/or the layers 1822, 1824 and 1826. Although three layers 1822, 1824 and 1826 are illustrated in FIG. 
18, it is to be appreciated that a substrate can be formed of a greater or lesser number of such layers. Turning now to FIG. 19, one of the properties from FIG. 18 is illustrated in greater detail. The substrate 1920 can be formed of one or more layers 1922, 1924 and 1926. The phase 1950 of the reflected and/or refracted light 1942 from incident light 1940 can depend, at least in part, on the thickness of a layer, for example, the layer 1924. Thus, in FIG. 20, the phase 2052 of the reflected light 2042 differs from the phase 1950 due, at least in part, to the different thickness of the layer 2024 in FIG. 20. Thus, scatterometry is a technique that can be employed to extract information about a surface upon which an incident light has been directed. The information can be extracted by analyzing phase and/or intensity signals of a complex reflected and/or diffracted light. The intensity and/or the phase of the reflected and/or diffracted light will change based on properties of the surface upon which the light is directed, resulting in substantially unique signatures that can be analyzed to determine one or more properties of the surface upon which the incident light was directed. Using scatterometry in implementing one or more aspects of the present invention facilitates a relatively non-invasive approach to measuring opaque film thickness and other properties (e.g., CD, overlay, profile, etc.) and to reproducing successful fabrication processes in subsequent development cycles. Described above are preferred embodiments of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. 
Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.
Methods, systems, and devices for speed bins to support memory compatibility are described. A host device may read a value of a register including serial presence detect data of a memory module. The serial presence detect data may be indicative of a timing constraint for operating the memory module at a first clock rate, where the timing constraint and the first clock rate may be associated with a first speed bin. The host device may select, for communication with the memory module, a second speed bin associated with a second clock rate at the host device and the timing constraint, where the host device may support operations according to a set of timing constraints that includes a set of values. The timing constraint may be selected from a subset of the set of timing constraints, where the subset may be exclusive of at least one of the set of values.
CLAIMS
What is claimed is:
1. A method at a host device, comprising: reading, by the host device, a value of a register comprising serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin; selecting, for communication with the memory module, a second speed bin associated with a second clock rate at the host device and the timing constraint, wherein the host device supports operations according to a set of timing constraints that comprises a plurality of values, and wherein the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values; and communicating with the memory module according to the second speed bin.
2. The method of claim 1, further comprising: reading, by the host device, a value of a second register comprising SPD data of a second memory module, the SPD data of the second memory module indicative of a second timing constraint for operating the second memory module at a corresponding third clock rate, the second timing constraint and the corresponding third clock rate associated with a third speed bin; selecting, for communication with the second memory module, the second speed bin associated with the second clock rate and the timing constraint, wherein the second timing constraint is one of the subset of the set of timing constraints; and communicating with the second memory module according to the second speed bin.
3. 
The method of claim 2, further comprising: downclocking, at the host device, from a fourth clock rate to the second clock rate based at least in part on reading the value of the second register comprising the SPD data of the second memory module, wherein communicating with the memory module and the second memory module according to the second speed bin is based at least in part on the downclocking.
4. The method of claim 2, wherein communicating with the second memory module is based at least in part on a first value of the second timing constraint being associated with a shorter duration than a second value of the timing constraint.
5. The method of claim 2, wherein communicating with the memory module and the second memory module according to the second speed bin is based at least in part on the corresponding first clock rate and the third clock rate each being equal to or greater than the second clock rate.
6. The method of claim 1, further comprising: downclocking, at the host device, from a third clock rate to the second clock rate based at least in part on reading the value of the register comprising the SPD data of the memory module, wherein communicating with the memory module according to the second speed bin is based at least in part on the downclocking.
7. The method of claim 1, further comprising: selecting the timing constraint from the subset of the set of timing constraints as part of selecting the second speed bin.
8. The method of claim 1, wherein each of the at least one of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.
9. The method of claim 1, wherein the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay.
10. The method of claim 1, wherein the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module.
11. 
A method at a memory device, comprising: providing, to a host device, a value of a register comprising serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin, wherein the memory module supports operations according to a set of timing constraints that comprises a plurality of values, and wherein the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values; and communicating with the host device according to the first speed bin.
12. The method of claim 11, wherein each value of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.
13. The method of claim 11, wherein the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay.
14. The method of claim 11, wherein the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module.
15. 
An apparatus, comprising: a circuit configured to cause the apparatus to: read, by the apparatus, a value of a register comprising serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the first clock rate associated with a first speed bin; select, for communication with the memory module, a second speed bin associated with a second clock rate at the apparatus and the timing constraint, wherein the apparatus supports operations according to a set of timing constraints that comprises a plurality of values, and wherein the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values; and communicate with the memory module according to the second speed bin.
16. The apparatus of claim 15, wherein the circuit is configured to cause the apparatus to: read, by the apparatus, a value of a second register comprising SPD data of a second memory module, the SPD data of the second memory module indicative of a second timing constraint for operating the second memory module at a corresponding third clock rate, the second timing constraint and the corresponding third clock rate associated with a third speed bin; select, for communication with the second memory module, the second speed bin associated with the second clock rate at the apparatus and the timing constraint, wherein the second timing constraint is one of the subset of the set of timing constraints; and communicate with the second memory module according to the second speed bin.
17. 
The apparatus of claim 16, wherein the circuit is further configured to cause the apparatus to: downclock, at the apparatus, from a fourth clock rate to the second clock rate based at least in part on reading the value of the second register comprising the SPD data of the second memory module, wherein communicating with the memory module and the second memory module according to the second speed bin is based at least in part on the downclocking.
18. The apparatus of claim 16, wherein communicating with the second memory module is based at least in part on a first value of the second timing constraint being associated with a shorter duration than a second value of the timing constraint.
19. The apparatus of claim 16, wherein communicating with the memory module and the second memory module according to the second speed bin is based at least in part on the corresponding first clock rate and the third clock rate each being equal to or greater than the second clock rate.
20. The apparatus of claim 15, wherein the circuit is further configured to cause the apparatus to: downclock, at the apparatus, from a third clock rate to the second clock rate based at least in part on reading the value of the register comprising the SPD data of the memory module, wherein communicating with the memory module according to the second speed bin is based at least in part on the downclocking.
21. The apparatus of claim 15, wherein the circuit is further configured to cause the apparatus to: select the timing constraint from the subset of the set of timing constraints as part of selecting the second speed bin.
22. The apparatus of claim 15, wherein each value of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.
23. The apparatus of claim 15, wherein the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay.
24. 
The apparatus of claim 15, wherein the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module.
25. An apparatus, comprising: a circuit configured to cause the apparatus to: provide, to a host device, a value of a register comprising serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin, wherein the memory module supports operations according to a set of timing constraints that comprises a plurality of values, and wherein the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values; and communicate with the host device according to the first speed bin.
26. The apparatus of claim 25, wherein each value of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.
27. The apparatus of claim 25, wherein the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay.
28. The apparatus of claim 25, wherein the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module.
29. 
A non-transitory computer-readable medium comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to: read, by a host device, a value of a register comprising serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin; select, for communication with the memory module, a second speed bin associated with a second clock rate at the host device and the timing constraint, wherein the host device supports operations according to a set of timing constraints that comprises a plurality of values, and wherein the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values; and communicate with the memory module according to the second speed bin.
30. The non-transitory computer-readable medium of claim 29, wherein the instructions, when executed by the processor of the electronic device, further cause the electronic device to: read, by the host device, a value of a second register comprising SPD data of a second memory module, the SPD data of the second memory module indicative of a second timing constraint for operating the second memory module at a corresponding third clock rate, the second timing constraint and the corresponding third clock rate associated with a third speed bin; select, for communication with the second memory module, the second speed bin associated with the second clock rate and the timing constraint, wherein the second timing constraint is one of the subset of the set of timing constraints; and communicate with the second memory module according to the second speed bin.
31. 
The non-transitory computer-readable medium of claim 30, wherein the instructions, when executed by the processor of the electronic device, further cause the electronic device to: downclock, at the host device, from a fourth clock rate to the second clock rate based at least in part on reading the value of the second register comprising the SPD data of the second memory module, wherein communicating with the memory module and the second memory module according to the second speed bin is based at least in part on the downclocking.
32. The non-transitory computer-readable medium of claim 30, wherein communicating with the second memory module is based at least in part on a first value of the second timing constraint being associated with a shorter duration than a second value of the timing constraint.
33. The non-transitory computer-readable medium of claim 30, wherein communicating with the memory module and the second memory module according to the second speed bin is based at least in part on the corresponding first clock rate and the third clock rate each being equal to or greater than the second clock rate.
34. The non-transitory computer-readable medium of claim 29, wherein the instructions, when executed by the processor of the electronic device, further cause the electronic device to: downclock, at the host device, from a third clock rate to the second clock rate based at least in part on reading the value of the register comprising the SPD data of the memory module, wherein communicating with the memory module according to the second speed bin is based at least in part on the downclocking.
35. The non-transitory computer-readable medium of claim 29, wherein the instructions, when executed by the processor of the electronic device, further cause the electronic device to: select the timing constraint from the subset of the set of timing constraints as part of selecting the second speed bin.
SPEED BINS TO SUPPORT MEMORY COMPATIBILITY
CROSS REFERENCE
[0001] The present Application for Patent claims priority to U.S. Patent Application No. 17/585,253 by Pohlmann et al., entitled “SPEED BINS TO SUPPORT MEMORY COMPATIBILITY”, filed January 26, 2022, and U.S. Provisional Patent Application No. 63/145,296 by Pohlmann et al., entitled “SPEED BINS TO SUPPORT MEMORY COMPATIBILITY”, filed February 3, 2021; each of which is assigned to the assignee hereof and each of which is expressly incorporated by reference in its entirety herein.
FIELD OF TECHNOLOGY
[0002] The following relates generally to one or more systems for memory and more specifically to speed bins to support memory compatibility.
BACKGROUND
[0003] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming memory cells within a memory device to various states. For example, binary memory cells may be programmed to one of two supported states, often denoted by a logic 1 or a logic 0. In some examples, a single memory cell may support more than two states, any one of which may be stored. To access the stored information, a component may read, or sense, at least one stored state in the memory device. To store information, a component may write, or program, the state in the memory device.
[0004] Various types of memory devices and memory cells exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), self-selecting memory, chalcogenide memory technologies, and others. Memory cells may be volatile or non-volatile. 
Non-volatile memory, e.g., FeRAM, may maintain its stored logic state for extended periods of time even in the absence of an external power source. Volatile memory devices, e.g., DRAM, may lose their stored state when disconnected from an external power source.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] FIG. 1 illustrates an example of a system that supports speed bins to support memory compatibility in accordance with examples as disclosed herein.
[0006] FIG. 2 illustrates an example of a downclocking scheme that supports speed bins to support memory compatibility in accordance with examples as disclosed herein.
[0007] FIG. 3 illustrates an example of a downclocking scheme that supports speed bins to support memory compatibility in accordance with examples as disclosed herein.
[0008] FIG. 4 illustrates an example of a process flow that supports speed bins to support memory compatibility in accordance with examples as disclosed herein.
[0009] FIG. 5 shows a block diagram of a host device that supports speed bins to support memory compatibility in accordance with examples as disclosed herein.
[0010] FIG. 6 shows a block diagram of a memory device that supports speed bins to support memory compatibility in accordance with examples as disclosed herein.
[0011] FIGs. 7 and 8 show flowcharts illustrating a method or methods that support speed bins to support memory compatibility in accordance with examples as disclosed herein.
DETAILED DESCRIPTION
[0012] A host device may be configured to support a set of speed bins, where each supported speed bin may be related to one or more timings that may be associated with a respective clock rate, a respective data rate, or both. The timings of the speed bins may include a respective timing constraint in some examples (e.g., an array access delay, a row precharge delay, a row address to column address delay). 
When a host device is coupled with the memory module, the memory module may indicate (e.g., via a register that includes serial presence detect (SPD) data) one or both of a clock rate (e.g., a maximum clock rate) or a timing constraint which the memory module supports. In some examples, the host device may adjust a clock rate of the host device to a different clock rate based on the clock rate and the timing constraint supported by the memory module (e.g., a lower clock rate, which may be referred to as downclocking). When the host device downclocks to the lower clock rate, the host device may select a speed bin from the set of supported speed bins that is compatible with the clock rate and/or timing constraint indicated by the memory module. Upon selecting the compatible speed bin, the host device may operate according to a timing constraint associated with the compatible speed bin.
[0013] In some examples, the host device may be coupled with two memory modules, among other examples, that may each be associated with different supported clock rates (e.g., different maximum supported clock rates) or different values of timing constraints or both. When performing downclocking in some such examples using other techniques, however, the host device may fail to support a speed bin that is compatible with both memory modules for at least one clock rate at the host device. For instance, the speed bins associated with a given value of a timing constraint (e.g., 20 nanoseconds) may not support communications with two memory modules for each possible combination of supported clock rates, timing constraints, or both for the two memory modules. 
As such, using these other techniques, the host device may be incapable of communicating with both memory modules simultaneously in at least some instances.
[0014] In contrast, and related to the techniques of the present disclosure, to enable the host device to support concurrent (e.g., at least partially overlapping) or simultaneous communication with two or more memory modules, speed bins associated with the given value of the timing constraint (e.g., 20 nanoseconds) may be excluded from the set of speed bins among which the host device selects. Additionally or alternatively, at least one speed bin associated with a clock rate (e.g., each clock rate) supported by the host device may be compatible with each supported clock rate of the memory modules equal to or higher than the clock rate associated with the speed bin. As such, the host device may ensure that there is at least one speed bin compatible with multiple memory modules when downclocking to a given clock rate, a given data rate, or both.
[0015] Features of the disclosure are initially described in the context of systems as described with reference to FIG. 1. Features of the disclosure are described in the context of downclocking schemes and a process flow as described with reference to FIGs. 2-4. These and other features of the disclosure are further illustrated by and described with reference to an apparatus diagram and flowcharts that relate to speed bins to support memory compatibility as described with reference to FIGs. 5-8.
[0016] FIG. 1 illustrates an example of a system 100 that supports speed bins to support memory compatibility in accordance with examples as disclosed herein. The system 100 may include a host device 105, memory modules 107-a and 107-b, memory devices 110-a and 110-b, and a plurality of channels 115-a and 115-b coupling the host device 105 with memory modules 107-a and 107-b, respectively. 
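The exclusion-based downclocking and speed-bin selection described above can be sketched as follows. All speed bins, clock rates, and timing values here are illustrative assumptions for the sketch, not values drawn from any memory specification or from the embodiments themselves.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpeedBin:
    clock_rate_mhz: int   # clock rate associated with the bin (assumed value)
    timing_ns: float      # timing constraint, e.g. a row precharge delay

# Hypothetical timing value excluded from the host's selectable bins,
# mirroring the exclusion of the 20 ns speed bins described above.
EXCLUDED_TIMING_NS = 20.0

# Hypothetical set of bins supported by the host device.
HOST_BINS = [
    SpeedBin(1600, 17.5),
    SpeedBin(1200, 17.5),
    SpeedBin(800, 15.0),
]

def select_bin(module_max_clocks_mhz):
    """Pick the fastest host bin no faster than every module's max clock,
    skipping bins with the excluded timing value."""
    common = min(module_max_clocks_mhz)  # downclock target across modules
    candidates = [b for b in HOST_BINS
                  if b.clock_rate_mhz <= common
                  and b.timing_ns != EXCLUDED_TIMING_NS]
    return max(candidates, key=lambda b: b.clock_rate_mhz, default=None)

# Two modules with different maximum supported clock rates: the host
# downclocks to the slower module and selects a compatible bin.
print(select_bin([1600, 1200]))
```

With these assumed bins, `select_bin([1600, 1200])` downclocks to 1200 MHz and returns the 1200 MHz bin, so both modules can be operated according to one common speed bin.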
The system 100 may include one or more memory devices 110, but aspects of the one or more memory devices 110 may be described in the context of a single memory device (e.g., memory device 110). In some examples, one or more memory modules 107 may include or may otherwise be coupled with one or more memory devices 110.
[0017] The system 100 may include portions of an electronic device, such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a vehicle, or other systems. For example, the system 100 may illustrate aspects of a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, or the like. The memory device 110 may be a component of the system operable to store data for one or more other components of the system 100.
[0018] At least portions of the system 100 may be examples of the host device 105. The host device 105 may be an example of a processor or other circuitry within a device that uses memory to execute processes, such as within a computing device, a mobile computing device, a wireless device, a graphics processing device, a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, a system on a chip (SoC), or some other stationary or portable electronic device, among other examples. In some examples, the host device 105 may refer to the hardware, firmware, software, or a combination thereof that implements the functions of an external memory controller 120. In some examples, the external memory controller 120 may be referred to as a host or a host device 105.
[0019] Each memory module 107 may include one or more respective memory devices 110 in some examples. For instance, memory module 107-a may include memory device 110-a and memory module 107-b may include memory device 110-b. 
In some examples, memory modules 107-a and 107-b may include additional memory devices. In some examples, each memory module may be an example of a dual in-line memory module (DIMM). Additionally or alternatively, each memory module 107 may be coupled with, but not include, one or more respective memory devices 110 in some examples.
[0020] A memory device 110 (e.g., memory device 110-a or 110-b or both) may be an independent device or a component that is operable to provide physical memory addresses/space that may be used or referenced by the system 100. In some examples, a memory device 110 may be configurable to work with one or more different types of host devices. Signaling between the host device 105 and the memory device 110 may be operable to support one or more of: modulation schemes to modulate the signals, various pin configurations for communicating the signals, various form factors for physical packaging of the host device 105 and the memory device 110, clock signaling and synchronization between the host device 105 and the memory device 110, timing conventions, or other factors.
[0021] The memory device 110 may be operable to store data for the components of the host device 105. In some examples, the memory device 110 may act as a slave-type device to the host device 105 (e.g., responding to and executing commands provided by the host device 105 through the external memory controller 120). Such commands may include one or more of a write command for a write operation, a read command for a read operation, a refresh command for a refresh operation, or other commands.
[0022] The host device 105 may include one or more of an external memory controller 120, a processor 125, a basic input/output system (BIOS) component 130, or other components such as one or more peripheral components or one or more input/output controllers. 
The components of host device 105 may be coupled with one another using a bus 135.
[0023] The processor 125 may be operable to provide control or other functionality for at least portions of the system 100 or at least portions of the host device 105. The processor 125 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination of these components. In such examples, the processor 125 may be an example of a central processing unit (CPU), a graphics processing unit (GPU), a general purpose GPU (GPGPU), or an SoC, among other examples. In some examples, the external memory controller 120 may be implemented by or be a part of the processor 125.
[0024] The BIOS component 130 may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of the system 100 or the host device 105. The BIOS component 130 may also manage data flow between the processor 125 and the various components of the system 100 or the host device 105. The BIOS component 130 may include a program or software stored in one or more of read-only memory (ROM), flash memory, or other non-volatile memory.
[0025] In some examples, the system 100 or the host device 105 may include various peripheral components. The peripheral components may be any input device or output device, or an interface (e.g., a bus, a set of pins) for such devices, that may be integrated into or with the system 100 or the host device 105. Examples may include one or more of: a disk controller, a sound controller, a graphics controller, an Ethernet controller, a modem, a universal serial bus (USB) controller, a serial or parallel port, or a peripheral card slot such as peripheral component interconnect (PCI) or specialized graphics ports. 
The peripheral component(s) may be other components understood by a person having ordinary skill in the art as a peripheral.
[0026] In some examples, the system 100 or the host device 105 may include an I/O controller. An I/O controller may manage data communication between the processor 125 and the peripheral component(s), input devices, or output devices. The I/O controller may manage peripherals that are not integrated into or with the system 100 or the host device 105. In some examples, the I/O controller may represent a physical connection or port to external peripheral components.
[0027] In some examples, the system 100 or the host device 105 may include an input component, an output component, or both. An input component may represent a device or signal external to the system 100 that provides information, signals, or data to the system 100 or its components. In some examples, an input component may include a user interface or interface with or between other devices. In some examples, an input component may be a peripheral that interfaces with system 100 via one or more peripheral components or may be managed by an I/O controller. An output component may represent a device or signal external to the system 100 operable to receive an output from the system 100 or any of its components. Examples of an output component may include a display, audio speakers, a printing device, another processor on a printed circuit board, and others. In some examples, an output may be a peripheral that interfaces with the system 100 via one or more peripheral components or may be managed by an I/O controller.
[0028] The memory device 110 may include a device memory controller 155 and one or more memory dies 160 (e.g., memory chips) to support a desired capacity or a specified capacity for data storage. 
Each memory die 160 may include a local memory controller 165 (e.g., local memory controller 165-a, local memory controller 165-b, local memory controller 165-N) and a memory array 170 (e.g., memory array 170-a, memory array 170-b, memory array 170-N). A memory array 170 may be a collection (e.g., one or more grids, one or more banks, one or more tiles, one or more sections) of memory cells, with each memory cell being operable to store at least one bit of data. A memory device 110 including two or more memory dies may be referred to as a multi-die memory or a multi-die package or a multi-chip memory or a multi-chip package.[0029] The device memory controller 155 may include circuits, logic, or components operable to control operation of the memory device 110. The device memory controller 155 may include the hardware, the firmware, or the instructions that enable the memory device 110 to perform various operations and may be operable to receive, transmit, or execute commands, data, or control information related to the components of the memory device 110. The device memory controller 155 may be operable to communicate with one or more of the external memory controller 120, the one or more memory dies 160, or the processor 125. In some examples, the device memory controller 155 may control operation of the memory device 110 described herein in conjunction with the local memory controller 165 of the memory die 160.[0030] In some examples, the memory device 110 may receive data or commands or both from the host device 105. For example, the memory device 110 may receive a write command indicating that the memory device 110 is to store data for the host device 105 or a read command indicating that the memory device 110 is to provide data stored in a memory die 160 to the host device 105.[0031] A local memory controller 165 (e.g., local to a memory die 160) may include circuits, logic, or components operable to control operation of the memory die 160.
In some examples, a local memory controller 165 may be operable to communicate (e.g., receive or transmit data or commands or both) with the device memory controller 155. In some examples, a memory device 110 may not include a device memory controller 155, and a local memory controller 165 or the external memory controller 120 may perform the various functions described herein. As such, a local memory controller 165 may be operable to communicate with the device memory controller 155, with other local memory controllers 165, or directly with the external memory controller 120, or the processor 125, or a combination thereof. Examples of components that may be included in the device memory controller 155 or the local memory controllers 165 or both may include receivers for receiving signals (e.g., from the external memory controller 120), transmitters for transmitting signals (e.g., to the external memory controller 120), decoders for decoding or demodulating received signals, encoders for encoding or modulating signals to be transmitted, or various other circuits or controllers operable for supporting described operations of the device memory controller 155 or local memory controller 165 or both.[0032] The external memory controller 120 may be operable to enable communication of one or more of information, data, or commands between components of the system 100 or the host device 105 (e.g., the processor 125) and the memory device 110. The external memory controller 120 may convert or translate communications exchanged between the components of the host device 105 and the memory device 110. In some examples, the external memory controller 120 or other component of the system 100 or the host device 105, or its functions described herein, may be implemented by the processor 125.
For example, the external memory controller 120 may be hardware, firmware, or software, or some combination thereof implemented by the processor 125 or other component of the system 100 or the host device 105. Although the external memory controller 120 is depicted as being external to the memory device 110, in some examples, the external memory controller 120, or its functions described herein, may be implemented by one or more components of a memory device 110 (e.g., a device memory controller 155, a local memory controller 165) or vice versa.[0033] The components of the host device 105 may exchange information with the memory device 110 using one or more channels 115. The channels 115 may be operable to support communications between the external memory controller 120 and the memory device 110. Each channel 115 may be an example of a transmission medium that carries information between the host device 105 and the memory device 110. Each channel 115 may include one or more signal paths or transmission mediums (e.g., conductors) between terminals associated with the components of system 100. A signal path may be an example of a conductive path operable to carry a signal. For example, a channel 115 may include a first terminal including one or more pins or pads at the host device 105 and one or more pins or pads at the memory device 110. A pin may be an example of a conductive input or output point of a device of the system 100, and a pin may be operable to act as part of a channel.[0034] Channels 115 (e.g., channels 115-a or 115-b or both) and associated signal paths and terminals may be dedicated to communicating one or more types of information. For example, the channels 115 may include one or more command and address (CA) channels 186, one or more clock signal (CK) channels 188, one or more data (DQ) channels 190, one or more other channels 192, or a combination thereof.
In some examples, signaling may be communicated over the channels 115 using single data rate (SDR) signaling or double data rate (DDR) signaling. In SDR signaling, one modulation symbol (e.g., signal level) of a signal may be registered for each clock cycle (e.g., on a rising or falling edge of a clock signal). In DDR signaling, two modulation symbols (e.g., signal levels) of a signal may be registered for each clock cycle (e.g., on both a rising edge and a falling edge of a clock signal).[0035] A host device 105 may be configured to support a set of speed bins, where each supported speed bin may be associated with a respective clock rate and a respective timing constraint (e.g., an array access delay, a row precharge delay, a row address to column address delay). Memory module 107-a (e.g., a memory module including a memory device 110-a) may be configured to support a respective maximum clock rate and a respective timing constraint. When the host device 105 is coupled with the memory module 107-a, the memory module 107-a may indicate (e.g., via a register that includes serial presence detect (SPD) data) the maximum supported clock rate and the timing constraint. When the host device 105 downclocks to a lower clock rate, the host device 105 may select a speed bin from the set of supported speed bins that is compatible with the maximum supported clock rate and the timing constraint. Upon selecting the compatible speed bin, the host device 105 may operate according to a timing constraint associated with the compatible speed bin.[0036] In some examples, the host device 105 may be coupled with two memory modules 107-a and 107-b associated with different maximum supported clock rates or different values of timing constraints or both.
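The speed-bin selection upon downclocking may be sketched as follows. This is an illustrative sketch only: the `SpeedBin` type and `select_compatible_bin` function are hypothetical names not appearing elsewhere herein, and the compatibility rule shown (a host bin is usable if its clock rate does not exceed the module's maximum supported rate and its timing constraint is not shorter than the constraint the module reports) is one plausible reading of the selection described above.

```python
# Illustrative sketch of selecting a host speed bin compatible with a memory
# module's SPD-reported limits. All names here are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class SpeedBin:
    clock_rate: int    # data rate in MHz (e.g., 3200)
    timing_ns: float   # timing constraint (e.g., tAA) in nanoseconds


def select_compatible_bin(host_bins, module_max_rate, module_timing_ns):
    """Pick a host speed bin compatible with a module's reported limits.

    A bin is treated as compatible if its clock rate does not exceed the
    module's maximum supported rate and its timing constraint is not
    shorter than the timing the module reports it can meet.
    """
    candidates = [b for b in host_bins
                  if b.clock_rate <= module_max_rate
                  and b.timing_ns >= module_timing_ns]
    # Prefer the fastest compatible bin; break ties by the tighter timing.
    return max(candidates, key=lambda b: (b.clock_rate, -b.timing_ns),
               default=None)
```

For instance, with host bins at 3200 MHz (20 ns and 17.5 ns) and 4400 MHz (16.363 ns), a module reporting 3600 MHz and 17.5 ns would yield the 3200 MHz / 17.5 ns bin under this rule.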
When performing downclocking in such examples using other techniques, the host device 105 may fail to support a speed bin that is compatible with both memory modules 107-a and 107-b for at least one clock rate at the host device. For instance, the speed bins associated with a particular value of a timing constraint (e.g., 20 nanoseconds) may not support communications with memory modules 107-a and 107-b for each possible combination of maximum supported clock rates, timing constraints, or both for memory modules 107-a and 107-b. As such, using such other techniques, the host device 105 may be incapable of communicating with both memory modules 107 concurrently or simultaneously in at least some instances.[0037] In contrast, to enable the host device 105 to support concurrent (e.g., at least partially overlapping) or simultaneous communication with two or more memory modules 107 (e.g., memory modules 107-a and 107-b), speed bins associated with the particular value of the timing constraint (e.g., 20 nanoseconds) may be excluded from the set of speed bins among which the host device 105 selects. Additionally or alternatively, at least one speed bin associated with a clock rate (e.g., each clock rate) supported by the host device 105 may be compatible with each clock rate supported by a memory module equal to or higher than the clock rate associated with the speed bin supported by the host device 105. As such, the host device 105 may ensure that there is at least one speed bin compatible with multiple memory modules 107 when downclocking to a given clock rate, a given data rate, or both.[0038] FIG. 2 illustrates an example of a downclocking scheme 200 that supports speed bins to support memory compatibility in accordance with examples as disclosed herein. Downclocking scheme 200 may implement or may be implemented by one or more components (e.g., an external memory controller 120) described with reference to system 100 of FIG. 1, among other examples.
Downclocking scheme 200 shows a table that represents whether a speed bin at a host device is compatible with a memory device for various data rates (e.g., in units of megahertz (MHz)) or clock rates. Each column of the table may correspond to a particular speed bin identified by a host device to be associated with a memory module according to an identified supported clock rate (e.g., a maximum supported clock rate), an identified supported timing constraint, or both associated with the memory module. Each row of the table may correspond to a particular speed bin for a particular data rate or clock rate at a host device. Each speed bin for a particular data rate or clock rate may correspond to a particular value of a timing constraint (e.g., tRP, tRCD, tAA). For instance, speed bin “B” (i.e., B bin) may correspond to a lowest value of a timing constraint for each data rate or clock rate, speed bin “Dump” (i.e., dump bin) may correspond to a highest value of the timing constraint for each data rate or clock rate, and speed bin “C” (i.e., C bin) may correspond to a value of the timing constraint in between the lowest value and the highest value for each data rate or clock rate. In some examples, speed bin “A” may be present for each clock rate at or above 3200 MHz and may correspond to a value of the timing constraint that is lower than that of the B bin. [0039] Each entry of the table may correspond to an unsupported configuration 205, a supported configuration 210, or an optionally supported configuration 215. An unsupported configuration 205 may represent a configuration in which a speed bin for a host device is not compatible with a memory module. For instance, unsupported configuration 205-a may indicate that a host device operating using a dump bin at a data rate of 3200 MHz may not support communications with a memory module with a maximum supported data rate of 4800 MHz and a timing constraint associated with a B bin.
Supported configuration 210 may represent a configuration in which a speed bin for a host device is compatible with a memory module. For instance, supported configuration 210-a may indicate that a host device operating using a C bin at 4400 MHz supports communications with a memory module with a maximum supported data rate of 4400 MHz and a timing constraint associated with a C bin. An optionally supported configuration 215 may represent a configuration in which a speed bin for a host device may be selectively configured to be compatible with a memory module. For instance, optionally supported configuration 215-a may indicate that a host device may be selectively configured to support communications with a memory module when the host device is operating using a C bin at a data rate of 3200 MHz and when the memory module has a maximum supported data rate of 4400 MHz and a timing constraint associated with a C bin.[0040] In some examples, the memory module being associated with a particular speed bin (e.g., a dump bin, A bin, B bin, C bin) may be a result of the memory module being configured and manufactured to support operating at determined conditions for the particular speed bin. For instance, after the memory module is configured, manufactured, or both, the memory module may be tested according to a set of manufacturing or configuration testing parameters associated with a testing configuration. The manufacturing or configuration testing parameters used in the testing configuration may be used to determine the particular speed bin that the memory module is capable of supporting.
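The three entry states of the table of FIG. 2, and the gating of optionally supported configurations by the testing configuration, may be sketched as a lookup. The table contents below are a small hypothetical excerpt covering only the three configurations named above (205-a, 210-a, 215-a); all function and key names are illustrative, not part of this description.

```python
# Illustrative lookup of the three configuration states in the table of
# FIG. 2. The table excerpt and names here are hypothetical.
UNSUPPORTED, SUPPORTED, OPTIONAL = "unsupported", "supported", "optional"

# Keyed by (host_bin, host_rate_MHz, module_bin, module_max_rate_MHz).
CONFIG_TABLE = {
    ("dump", 3200, "B", 4800): UNSUPPORTED,  # unsupported configuration 205-a
    ("C", 4400, "C", 4400): SUPPORTED,       # supported configuration 210-a
    ("C", 3200, "C", 4400): OPTIONAL,        # optionally supported 215-a
}


def configuration_state(host_bin, host_rate, module_bin, module_rate,
                        optional_enabled=False):
    """Resolve an entry to supported/unsupported for a given module."""
    state = CONFIG_TABLE.get((host_bin, host_rate, module_bin, module_rate),
                             UNSUPPORTED)
    if state == OPTIONAL:
        # Optional configurations are usable only when the module's testing
        # configuration (e.g., as indicated in its SPD data) permits them.
        return SUPPORTED if optional_enabled else UNSUPPORTED
    return state
```

Under this sketch, configuration 215-a resolves to supported only for modules tested under the more stringent parameters, matching the behavior described above.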
In some examples, testing configurations associated with more stringent (e.g., more strict, more precise) manufacturing or configuration testing parameters may enable a host device to use (e.g., run) optionally supported bins (e.g., optionally supported configurations 215) with the memory module, whereas testing configurations associated with less stringent (e.g., less strict, less precise) manufacturing or configuration testing parameters may not enable the host device to use (e.g., run) optionally supported bins with the memory module. In some examples, the register of the memory module may include information associated with the manufacturing or configuration testing parameters.[0041] The following describes some examples of related speed bins associated with downclocking scheme 200. For 2100 MHz, the timing constraint may be equal to 20.952 nanoseconds (e.g., 22 clock cycles). For 3200 MHz, the timing constraint for the dump bin may be equal to 20 nanoseconds (e.g., 32 clock cycles); the timing constraint for the C bin may be equal to 17.5 nanoseconds (e.g., 28 clock cycles); and the timing constraint for the B bin may be equal to 16.25 nanoseconds (e.g., 26 clock cycles). For 3600 MHz, the timing constraint for the dump bin may be equal to 20 nanoseconds (e.g., 36 clock cycles); the timing constraint for the C bin may be equal to 17.777 nanoseconds (e.g., 32 clock cycles); and the timing constraint for the B bin may be equal to 16.666 nanoseconds (e.g., 30 clock cycles). For 4000 MHz, the timing constraint for the dump bin may be equal to 20 nanoseconds (e.g., 40 clock cycles); the timing constraint for the C bin may be equal to 18 nanoseconds (e.g., 36 clock cycles); and the timing constraint for the B bin may be equal to 16 nanoseconds (e.g., 32 clock cycles). 
For 4400 MHz, the timing constraint for the dump bin may be equal to 20 nanoseconds (e.g., 44 clock cycles); the timing constraint for the C bin may be equal to 18.181 nanoseconds (e.g., 40 clock cycles); and the timing constraint for the B bin may be equal to 16.363 nanoseconds (e.g., 36 clock cycles). For 4800 MHz, the timing constraint for the dump bin may be equal to 20 nanoseconds (e.g., 48 clock cycles); the timing constraint for the C bin may be equal to 17.5 nanoseconds (e.g., 42 clock cycles); and the timing constraint for the B bin may be equal to 16.666 nanoseconds (e.g., 40 clock cycles). For 5200 MHz, the timing constraint for the dump bin may be equal to 20 nanoseconds (e.g., 52 clock cycles); the timing constraint for the C bin may be equal to 17.692 nanoseconds (e.g., 46 clock cycles); and the timing constraint for the B bin may be equal to 16.153 nanoseconds (e.g., 42 clock cycles). For 5600 MHz, the timing constraint for the dump bin may be equal to 20 nanoseconds (e.g., 56 clock cycles); the timing constraint for the C bin may be equal to 17.857 nanoseconds (e.g., 50 clock cycles); and the timing constraint for the B bin may be equal to 16.428 nanoseconds (e.g., 46 clock cycles). For 6000 MHz, the timing constraint for the dump bin may be equal to 20 nanoseconds (e.g., 60 clock cycles); the timing constraint for the C bin may be equal to 18 nanoseconds (e.g., 54 clock cycles); and the timing constraint for the B bin may be equal to 16 nanoseconds (e.g., 48 clock cycles). For 6400 MHz, the timing constraint for the dump bin may be equal to 20 nanoseconds (e.g., 64 clock cycles); the timing constraint for the C bin may be equal to 17.5 nanoseconds (e.g., 56 clock cycles); and the timing constraint for the B bin may be equal to 16.25 nanoseconds (e.g., 52 clock cycles).
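The nanosecond values and clock-cycle counts listed above are related by the DDR clock period: with DDR signaling, the clock runs at half the data rate, so one clock cycle lasts 2000 / (data rate in MHz) nanoseconds (e.g., 32 cycles at 3200 MHz corresponds to 20 nanoseconds). A sketch of the conversion follows; the function names are illustrative only.

```python
# Illustrative conversion between the clock-cycle counts and nanosecond
# values listed above. With DDR signaling the clock runs at half the data
# rate, so one clock period is 2000 / data_rate_mhz nanoseconds.
import math


def cycles_to_ns(cycles, data_rate_mhz):
    """Duration of a cycle count at a given data rate, in nanoseconds."""
    return cycles * 2000.0 / data_rate_mhz


def ns_to_cycles(t_ns, data_rate_mhz):
    """Cycle count covering at least t_ns of wall time (rounded up)."""
    return math.ceil(t_ns * data_rate_mhz / 2000.0)
```

For example, `cycles_to_ns(28, 3200)` gives the 17.5 nanoseconds listed for the C bin at 3200 MHz, and `cycles_to_ns(36, 3600)` gives the 20 nanoseconds listed for the dump bin at 3600 MHz.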
In some examples, the timing constraint values listed herein may be examples of nominal values (e.g., values of the timing constraints after performing rounding, such as truncating, on the timing constraint values).[0042] For 2100 MHz at the host device, each speed bin associated with the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, 4800 MHz, 4400 MHz, 4000 MHz, 3600 MHz, and 3200 MHz. For 3200 MHz at the host device using the Dump bin, the B bin at the memory module may be supported for 6000 MHz, 5600 MHz, 5200 MHz, 4400 MHz, 4000 MHz, and 3600 MHz and the C bin at the memory module may be supported for 6000 MHz, 5600 MHz, 5200 MHz, 4400 MHz, 4000 MHz, and 3600 MHz. For 3200 MHz at the host device using the C bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, 4800 MHz, 4400 MHz, 4000 MHz, 3600 MHz, and 3200 MHz and the C bin at the memory module may be supported for 6400 MHz, 4800 MHz, and 3200 MHz and optionally supported for 6000 MHz, 5600 MHz, 5200 MHz, 4400 MHz, and 3600 MHz. For 3200 MHz at the host device using the B bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5200 MHz, 4000 MHz, and 3200 MHz and may be optionally supported for 5600 MHz, 4800 MHz, 4400 MHz, and 3600 MHz.[0043] For 3600 MHz at the host device using the Dump bin, the B bin and the C bin at the memory module may be supported for 6000 MHz, 5600 MHz, 4400 MHz, and 4000 MHz. For 3600 MHz at the host device using the C bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, 4800 MHz, 4400 MHz, 4000 MHz, and 3600 MHz and the C bin at the memory module may be supported for 6400 MHz, 5200 MHz, 4800 MHz, and 3600 MHz and optionally supported for 6000 MHz, 5600 MHz, 4400 MHz, and 4000 MHz. For 3600 MHz at the host device using the B bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, 5200 MHz, 4800 MHz, 4400 MHz, 4000 MHz, and 3600 MHz.
For 4000 MHz at the host device using the Dump bin, the B bin and the C bin at the memory module may be supported for 4400 MHz. For 4000 MHz at the host device using the C bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, 4800 MHz, 4400 MHz, and 4000 MHz, and the C bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, 4800 MHz, and 4000 MHz and optionally supported for 4400 MHz. For 4000 MHz at the host device using the B bin, the B bin at the memory module may be supported for 6000 MHz and 4000 MHz and optionally supported for 6400 MHz, 5600 MHz, 5200 MHz, 4800 MHz, and 4400 MHz.[0044] For 4400 MHz at the host device using the Dump bin, neither the B bin nor the C bin at the memory module may be supported. For 4400 MHz at the host device using the C bin, the B bin and the C bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, 4800 MHz, and 4400 MHz. For 4400 MHz at the host device using the B bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5200 MHz, and 4400 MHz and optionally supported for 5600 MHz and 4800 MHz. For 4800 MHz at the host device using the Dump bin, the B bin and the C bin at the memory module may be supported for 6000 MHz, 5600 MHz, and 5200 MHz. For 4800 MHz at the host device using the C bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, and 4800 MHz, and the C bin at the memory module may be supported for 6400 MHz and 4800 MHz and optionally supported for 6000 MHz, 5600 MHz, and 5200 MHz. For 4800 MHz at the host device using the B bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, 5200 MHz, and 4800 MHz.[0045] For 5200 MHz at the host device using the Dump bin, the B bin and the C bin at the memory module may be supported for 6000 MHz and 5600 MHz.
For 5200 MHz at the host device using the C bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, 5600 MHz, and 5200 MHz, and the C bin at the memory module may be supported for 6400 MHz and 5200 MHz and optionally supported for 6000 MHz and 5600 MHz. For 5200 MHz at the host device using the B bin, the B bin at the memory module may be supported for 6000 MHz and 5200 MHz and optionally supported for 6400 MHz and 5600 MHz. For 5600 MHz at the host device using the Dump bin, the B bin and the C bin at the memory module may be supported for 6000 MHz. For 5600 MHz at the host device using the C bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, and 5600 MHz, and the C bin at the memory module may be supported for 6400 MHz and 5600 MHz and optionally supported for 6000 MHz. For 5600 MHz at the host device using the B bin, the B bin at the memory module may be supported for 6400 MHz, 6000 MHz, and 5600 MHz.[0046] For 6000 MHz at the host device using the Dump bin, neither the B bin nor the C bin at the memory module may be supported. For 6000 MHz at the host device using the C bin, the B bin and the C bin at the memory module may be supported for 6400 MHz and 6000 MHz. For 6000 MHz at the host device using the B bin, the B bin at the memory module may be supported for 6000 MHz and optionally supported for 6400 MHz. For 6400 MHz at the host device using the Dump bin, neither the B bin nor the C bin at the memory module may be supported. For 6400 MHz at the host device using the C bin, the B bin and the C bin at the memory module may be supported for 6400 MHz.
For 6400 MHz at the host device using the B bin, the B bin at the memory module may be supported for 6400 MHz.[0047] In some examples, a host device may initially be configured to use a C bin at a data rate of 4400 MHz (e.g., the host device may be in supported configuration 210-a) when communicating with a first memory module that has a supported data rate (e.g., maximum supported data rate) of 4400 MHz and a timing constraint associated with a C bin. In some examples, the host device may be downclocked from 4400 MHz to a data rate of 3200 MHz. If optionally supported configuration 215-a is supported (e.g., by the first memory module, by the host device), the host device may select to use the C bin at 3200 MHz. However, in instances where optionally supported configuration 215-a is not supported (e.g., by the first memory module, by the host device), the host device may instead select to use the dump bin at 3200 MHz (e.g., supported configuration 210-b).[0048] In some examples, the host device may be coupled with a second memory module that has a supported data rate (e.g., maximum supported data rate) of 4800 MHz and a timing constraint associated with a B bin (e.g., when the host device and the second memory module are in supported configuration 210-c). The host device may not be capable of communicating using multiple data rates or speed bins or both at once. Accordingly, when the host device downclocks to 3200 MHz, the host device may attempt to communicate with the first and the second memory modules at 3200 MHz. If optionally supported configuration 215-a is supported (e.g., by the first memory module, the host device), the host device may communicate with the first memory module and the second memory module according to the C bin at 3200 MHz (e.g., as the C bin at 3200 MHz is supported for the first memory module and the second memory module).
If optionally supported configuration 215-a is not supported (e.g., by the first memory module, by the host device), however, the host device may attempt to communicate with the first and second memory module according to the dump bin at 3200 MHz. In some examples, the dump bin at 3200 MHz may be supported for the first memory module (e.g., supported configuration 210-b), but may not be supported for the second memory module (e.g., unsupported configuration 205-a). Accordingly, if the host device downclocks to 3200 MHz, the host device may be unable to communicate with the second memory module.[0049] To enable the host device to communicate with the first memory module and the second memory module, for example, when downclocking, the host device may eliminate support for one or more dump bins and may convert optionally supported configurations 215 for C bins at the host device to supported configurations 210. Additional details about enabling the host device in this manner may be described herein, for example, at least with reference to FIG. 3.[0050] FIG. 3 illustrates an example of a downclocking scheme 300 that supports speed bins to support memory compatibility in accordance with examples as disclosed herein. Downclocking scheme 300 may implement or may be implemented by one or more components described with reference to system 100 of FIG. 1 (e.g., an external memory controller 120), among other examples. 
Downclocking scheme 300 may represent the table of downclocking scheme 200 after eliminating support for dump bins (e.g., having each configuration when using a dump bin be an unsupported configuration 205) and converting each optionally supported configuration for C bins at the host device to a supported configuration (e.g., converting optionally supported configurations 215 to supported configurations 210).[0051] In some examples, a host device may initially be configured to use a C bin at a data rate of 4400 MHz (e.g., the host device may be in supported configuration 310-a) when communicating with a first memory module that has a supported data rate (e.g., a maximum supported data rate) of 4400 MHz and a timing constraint associated with a C bin. In some examples, the host device may be downclocked from 4400 MHz to a data rate of 3200 MHz. The host device may support supported configuration 310-b, and may therefore select to use the C bin at 3200 MHz.[0052] In some examples, the host device may be coupled with a second memory module that has a supported data rate (e.g., maximum supported data rate) of 4800 MHz and a timing constraint associated with a B bin (e.g., when the host device and the second memory module are in supported configuration 310-c). The host device, however, may not be capable of communicating using multiple data rates or speed bins or both at once. Accordingly, when the host device downclocks to 3200 MHz, the host device may attempt to communicate with the first and the second memory modules at 3200 MHz. As the host device supports using the C bin for the first and second memory modules (e.g., the host device supports supported configurations 310-b and 310-d, respectively), the host device may communicate with the first memory module and the second memory module according to the C bin at 3200 MHz. 
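The transformation of downclocking scheme 200 into downclocking scheme 300, and the resulting selection of a speed bin usable with every attached memory module, may be sketched as follows. The table layout and all names here are hypothetical; the two rules applied (eliminating dump-bin support and converting optionally supported C-bin configurations to supported) follow the description above.

```python
# Illustrative sketch of deriving downclocking scheme 300 from scheme 200.
# The table maps (host_bin, host_rate, module_key) -> state string; the
# layout and names are hypothetical.
def apply_scheme_300(table):
    """Drop dump-bin support and promote optional C-bin entries."""
    out = {}
    for (host_bin, host_rate, module_key), state in table.items():
        if host_bin == "dump":
            state = "unsupported"   # eliminate support for dump bins
        elif host_bin == "C" and state == "optional":
            state = "supported"     # convert optional C-bin configurations
        out[(host_bin, host_rate, module_key)] = state
    return out


def common_bin(table, host_rate, module_keys, host_bins=("B", "C", "dump")):
    """Return a host bin at host_rate supported for every listed module."""
    for host_bin in host_bins:
        if all(table.get((host_bin, host_rate, m)) == "supported"
               for m in module_keys):
            return host_bin
    return None
```

Under this sketch, a table in which the C bin at 3200 MHz is only optionally supported for one module yields no common bin before the transformation, but yields the C bin afterward, mirroring the two-module example above.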
Accordingly, by eliminating support for configurations using the dump bin and converting optionally supported configurations for C bins at the host device to supported configurations, the host device may communicate with both memory modules after performing downclocking.[0053] Performing the methods as described herein, for instance regarding the features described with reference to FIG. 3, may be associated with one or more advantages. For instance, host devices that use downclocking scheme 300 when downclocking may be able to communicate with multiple memory modules regardless of the speed bins used by the memory modules. Accordingly, host devices that use downclocking scheme 300 may have greater flexibility in adjusting their clock rates or data rates when maintaining communication with multiple memory modules. Additionally, host devices that use downclocking scheme 300 may maintain backwards compatibility with memory modules.[0054] FIG. 4 illustrates an example of a process flow 400 that supports speed bins to support memory compatibility in accordance with examples as disclosed herein. In some examples, host device 105-a may be an example of a host device 105 as described with reference to FIG. 1 and memory modules 107-c and 107-d may be examples of memory modules 107 (e.g., memory modules 107-a and 107-b) as described with reference to FIG. 1.[0055] At 405-a, memory module 107-c may provide, to host device 105-a, a value of a register including SPD data of memory module 107-c, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the first clock rate (e.g., a first maximum supported clock rate) associated with a first speed bin. 
At 405-b, memory module 107-d may provide, to host device 105-a, a value of a second register including SPD data of memory module 107-d, the SPD data indicative of a second timing constraint for operating memory module 107-d at a corresponding second clock rate (e.g., a second maximum supported clock rate), the second timing constraint and the second clock rate associated with a second speed bin. In some examples, host device 105-a may read the SPD data of memory module 107-c from memory module 107-c and may read the SPD data of memory module 107-d from memory module 107-d. [0056] In some alternative examples, the memory module 107-c or the memory module 107-d (or both concurrently, simultaneously, or serially) may provide (e.g., transmit) various information to the host device 105-a. For example, at 405-a, memory module 107-c may provide (e.g., transmit, convey, indicate), to host device 105-a, an indication of a value of or a value itself of a location (e.g., a register) including SPD data of memory module 107-c, the SPD data indicative of or including a first timing constraint for operating the memory module 107-c at a corresponding first clock rate, the timing constraint and the first clock rate associated with a first speed bin. At 405-b, memory module 107-d may provide (e.g., transmit, convey, indicate), to host device 105-a, an indication of a value of or a value itself of a location (e.g., a register) including SPD data of memory module 107-d, the SPD data indicative of or including a second timing constraint for operating memory module 107-d at a corresponding second clock rate, the second timing constraint and the second clock rate associated with a second speed bin.[0057] In some alternative examples, host device 105-a may read (e.g., receive) from the memory module 107-c or the memory module 107-d (or both concurrently, simultaneously, or serially) various information.
For example, at 405-a, host device 105-a may read (e.g., receive, retrieve) from memory module 107-c, an indication of a value of or a value itself of a location (e.g., a register) including SPD data of memory module 107-c, the SPD data indicative of or including a first timing constraint for operating the memory module 107-c at a corresponding first clock rate, the timing constraint and the first clock rate associated with a first speed bin. At 405-b, host device 105-a may read (e.g., receive, retrieve) from memory module 107-d, an indication of a value of or a value itself of a location (e.g., a register) including SPD data of memory module 107-d, the SPD data indicative of or including a second timing constraint for operating memory module 107-d at a corresponding second clock rate, the second timing constraint and the second clock rate associated with a second speed bin.[0058] In some examples, the timing constraint may correspond to one or more of a row precharge delay (e.g., tRP), a row address to column address delay (e.g., tRCD), or an array access delay (e.g., tAA). In some examples, the timing constraint may correspond to a quantity of clock cycles for accessing a memory array of memory module 107-c and/or the second timing constraint may correspond to a quantity of clock cycles for accessing a memory array of memory module 107-d.[0059] At 410, host device 105-a may select, for communication with the memory module, a third speed bin associated with a third clock rate at host device 105-a and the timing constraint. Host device 105-a may support operations according to a set of timing constraints that include a set of values. Memory module 107-c and memory module 107-d may support operations according to the set of timing constraints.
The timing constraint may be selected from a subset of the set of timing constraints, where the subset is exclusive of at least one of the set of values (e.g., 20 nanoseconds). In some examples, the second timing constraint may be one of the subset of timing constraints. In some examples, each of the at least one of the set of values excluded from the subset may have a higher magnitude than each value of the set that remains in the subset.[0060] At 415, host device 105-a may downclock from a fourth clock rate to the third clock rate based on reading the value of the second register including the SPD data of memory module 107-d. Additionally or alternatively, host device 105-a may downclock from a fourth clock rate to the third clock rate based on reading the value of the register including the SPD data of memory module 107-c.[0061] At 420-a, host device 105-a may communicate with the memory module 107-c according to the third speed bin and memory module 107-c may communicate with host device 105-a according to the first speed bin. At 420-b, host device 105-a may communicate with memory module 107-d according to the third speed bin and memory module 107-d may communicate with host device 105-a according to the second speed bin. In some examples, communicating with memory module 107-c and/or memory module 107-d may be based on the downclocking. In some examples, communicating with memory module 107-d may be based on a first value of the second timing constraint being associated with a shorter duration than a second value of the timing constraint associated with memory module 107-c. In some examples, communicating with memory module 107-c and memory module 107-d may be based on the first clock rate and the second clock rate being equal to or greater than the third clock rate of host device 105-a.[0062] Performing the methods as described herein, for instance regarding the features described with reference to FIG. 
4, may have one or more advantages. For instance, by excluding the at least one value from the set of values, the host device may prevent a selection of a speed bin that is not supported for communications with at least one of memory modules 107-c and 107-d.[0063] FIG. 5 shows a block diagram 500 of a host device 520 that supports speed bins to support memory compatibility in accordance with examples as disclosed herein. The host device 520 may be an example of aspects of a host device as described with reference to FIGs. 1 through 4. The host device 520, or various components thereof, may be an example of means for performing various aspects of speed bins to support memory compatibility as described herein. For example, the host device 520 may include a reading component 525, a speed bin selection component 530, a communication component 535, a downclocking component 540, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0064] The reading component 525 may be configured as or otherwise support a means for reading, by the host device, a value of a register including serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin. The speed bin selection component 530 may be configured as or otherwise support a means for selecting, for communication with the memory module, a second speed bin associated with a second clock rate at the host device and the timing constraint, where the host device supports operations according to a set of timing constraints that includes a plurality of values, and where the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values. 
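The subset-based selection described above (excluding one or more higher-magnitude timing constraint values and choosing a speed bin that every installed module supports) can be illustrated with a brief, hypothetical Python sketch. The SpeedBin fields, the example bins, and the 20 ns excluded value are illustrative assumptions, not a defined SPD layout or speed-bin table:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class SpeedBin:
    clock_rate_mt_s: int         # clock rate associated with the bin (illustrative units)
    timing_constraint_ns: float  # e.g., a tAA / tRCD / tRP value for the bin

def select_common_speed_bin(
    host_bins: List[SpeedBin],
    module_constraints_ns: List[float],
    excluded_values_ns: List[float],
) -> Optional[SpeedBin]:
    # Subset of the host's timing constraints, exclusive of the listed
    # higher-magnitude values (e.g., 20 ns).
    subset = [b for b in host_bins
              if b.timing_constraint_ns not in excluded_values_ns]
    # A bin is usable only if every module's constraint fits within it.
    usable = [b for b in subset
              if all(c <= b.timing_constraint_ns for c in module_constraints_ns)]
    # Prefer the fastest usable clock rate.
    return max(usable, key=lambda b: b.clock_rate_mt_s, default=None)

host_bins = [SpeedBin(6400, 20.0), SpeedBin(5600, 18.0), SpeedBin(4800, 16.0)]
selected = select_common_speed_bin(host_bins, [16.0, 18.0], excluded_values_ns=[20.0])
# With the 20 ns value excluded, the 5600 bin (18 ns) is the fastest
# that both example modules support.
```

Taking the maximum clock rate among the usable bins corresponds to the host selecting the fastest speed bin whose timing constraint is still satisfied by each installed memory module.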
The communication component 535 may be configured as or otherwise support a means for communicating with the memory module according to the second speed bin.[0065] In some examples, the reading component 525 may be configured as or otherwise support a means for reading, by the host device, a value of a second register including SPD data of a second memory module, the SPD data of the second memory module indicative of a second timing constraint for operating the second memory module at a corresponding third clock rate, the second timing constraint and a third clock rate associated with a third speed bin. In some examples, the speed bin selection component 530 may be configured as or otherwise support a means for selecting, for communication with the memory module, the second speed bin associated with the second clock rate and the timing constraint, where the second timing constraint is one of the subset of the set of timing constraints. In some examples, the communication component 535 may be configured as or otherwise support a means for communicating with the second memory module according to the second speed bin.[0066] In some examples, the downclocking component 540 may be configured as or otherwise support a means for downclocking, at the host device, from a fourth clock rate to the second clock rate based at least in part on reading the value of the second register including the SPD data of the second memory module, where communicating with the memory module and the second memory module according to the second speed bin is based at least in part on the downclocking.[0067] In some examples, communicating with the second memory module is based at least in part on a first value of the second timing constraint being associated with a shorter duration than a second value of the timing constraint.[0068] In some examples, communicating with the memory module and the second memory module according to the second speed bin is based at least in part on the corresponding 
first clock rate and the third clock rate each being equal to or greater than the second clock rate.[0069] In some examples, the downclocking component 540 may be configured as or otherwise support a means for downclocking, at the host device, from a third clock rate to the second clock rate based at least in part on reading the value of the register including the SPD data of the memory module, where communicating with the memory module according to the second speed bin is based at least in part on the downclocking.[0070] In some examples, the speed bin selection component 530 may be configured as or otherwise support a means for selecting the timing constraint from the subset of the set of timing constraints as part of selecting the second speed bin.[0071] In some examples, each of the at least one of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.[0072] In some examples, the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay. [0073] In some examples, the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module.[0074] FIG. 6 shows a block diagram 600 of a memory device 620 that supports speed bins to support memory compatibility in accordance with examples as disclosed herein. The memory device 620 may be an example of aspects of a memory device as described with reference to FIGs. 1 through 4. The memory device 620, or various components thereof, may be an example of means for performing various aspects of speed bins to support memory compatibility as described herein. For example, the memory device 620 may include a data providing component 625, a communication component 630, or any combination thereof. 
Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0075] The data providing component 625 may be configured as or otherwise support a means for providing, to a host device, a value of a register including serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin, where the memory module supports operations according to a set of timing constraints that includes a plurality of values, and where the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values. The communication component 630 may be configured as or otherwise support a means for communicating with the host device according to the first speed bin.[0076] In some examples, each of the at least one of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.[0077] In some examples, the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay.[0078] In some examples, the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module.[0079] FIG. 7 shows a flowchart illustrating a method 700 that supports speed bins to support memory compatibility in accordance with examples as disclosed herein. The operations of method 700 may be implemented by a host device or its components as described herein. For example, the operations of method 700 may be performed by a host device as described with reference to FIGs. 1 through 5. In some examples, a host device may execute a set of instructions to control the functional elements of the device to perform the described functions. 
Additionally or alternatively, the host device may perform aspects of the described functions using special-purpose hardware.[0080] At 705, the method may include reading, by the host device, a value of a register including serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin. The operations of 705 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 705 may be performed by a reading component 525 as described with reference to FIG. 5.[0081] In some alternative examples, the host device may read (e.g., receive) from a first memory module or a second memory module (or both concurrently, simultaneously, or serially) various information. For example, the host device may read (e.g., receive, retrieve) from the first memory module, an indication of a value of or a value itself of a location (e.g., a register) including SPD data of the first memory module, the SPD data indicative of or including a first timing constraint for operating the first memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin. 
Additionally or alternatively, the host device may read (e.g., receive, retrieve) from the second memory module, an indication of a value of or a value itself of a location (e.g., a register) including SPD data of the second memory module, the SPD data indicative of or including a second timing constraint for operating the second memory module at a corresponding second clock rate, the second timing constraint and the second clock rate associated with a second speed bin.[0082] At 710, the method may include selecting, for communication with the memory module, a second speed bin associated with a second clock rate at the host device and the timing constraint, where the host device supports operations according to a set of timing constraints that includes a plurality of values, and where the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values. The operations of 710 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 710 may be performed by a speed bin selection component 530 as described with reference to FIG. 5.[0083] At 715, the method may include communicating with the memory module according to the second speed bin. The operations of 715 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 715 may be performed by a communication component 535 as described with reference to FIG. 5.[0084] In some examples, an apparatus as described herein may perform a method or methods, such as the method 700. 
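The downclocking described at 415/[0089] and the expression of a timing constraint as a quantity of clock cycles ([0093]) can be sketched together in a short, hypothetical Python example; the function name, the units, and the ceiling rule are illustrative assumptions rather than the defined method:

```python
import math

def downclock_and_count_cycles(host_clock_mhz: float,
                               selected_bin_clock_mhz: float,
                               constraint_ns: float):
    """Drop the host clock to the selected bin's rate if it is higher,
    then express the timing constraint as a quantity of clock cycles,
    rounding up so the constraint is never violated."""
    if host_clock_mhz > selected_bin_clock_mhz:
        host_clock_mhz = selected_bin_clock_mhz  # downclocking
    # cycles = ceil(t_ns * f_MHz / 1000), since 1 MHz cycle = 1000 ns / f
    cycles = math.ceil(constraint_ns * host_clock_mhz / 1000.0)
    return host_clock_mhz, cycles

# e.g., a host at 3200 MHz downclocks to a 2800 MHz bin; an 18 ns
# constraint then requires ceil(18 * 2800 / 1000) = 51 cycles.
clock, cycles = downclock_and_count_cycles(3200.0, 2800.0, 18.0)
```

Rounding up is the conservative choice here: truncating the cycle count would produce an access window shorter than the nanosecond constraint read from SPD.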
The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for reading, by the host device, a value of a register including serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin, selecting, for communication with the memory module, a second speed bin associated with a second clock rate at the host device and the timing constraint, where the host device supports operations according to a set of timing constraints that includes a plurality of values, and where the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values, and communicating with the memory module according to the second speed bin.[0085] Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for reading, by the host device, a value of a second register including SPD data of a second memory module, the SPD data of the second memory module indicative of a second timing constraint for operating the second memory module at a corresponding third clock rate, the second timing constraint and a third clock rate associated with a third speed bin, selecting, for communication with the memory module, the second speed bin associated with the second clock rate and the timing constraint, where the second timing constraint may be one of the subset of the set of timing constraints, and communicating with the second memory module according to the second speed bin.[0086] Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for 
downclocking, at the host device, from a fourth clock rate to the second clock rate based at least in part on reading the value of the second register including the SPD data of the second memory module, where communicating with the memory module and the second memory module according to the second speed bin may be based at least in part on the downclocking.[0087] In some examples of the method 700 and the apparatus described herein, communicating with the second memory module may be based at least in part on a first value of the second timing constraint being associated with a shorter duration than a second value of the timing constraint.[0088] In some examples of the method 700 and the apparatus described herein, communicating with the memory module and the second memory module according to the second speed bin may be based at least in part on the corresponding first clock rate and the third clock rate each being equal to or greater than the second clock rate.[0089] Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for downclocking, at the host device, from a third clock rate to the second clock rate based at least in part on reading the value of the register including the SPD data of the memory module, where communicating with the memory module according to the second speed bin may be based at least in part on the downclocking.[0090] Some examples of the method 700 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for selecting the timing constraint from the subset of the set of timing constraints as part of selecting the second speed bin.[0091] In some examples of the method 700 and the apparatus described herein, each of the at least one of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.[0092] In some examples of 
the method 700 and the apparatus described herein, the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay.[0093] In some examples of the method 700 and the apparatus described herein, the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module. [0094] FIG. 8 shows a flowchart illustrating a method 800 that supports speed bins to support memory compatibility in accordance with examples as disclosed herein. The operations of method 800 may be implemented by a memory device or its components as described herein. For example, the operations of method 800 may be performed by a memory device as described with reference to FIGs. 1 through 4 and 6. In some examples, a memory device may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the memory device may perform aspects of the described functions using special-purpose hardware.[0095] At 805, the method may include providing, to a host device, a value of a register including serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin, where the memory module supports operations according to a set of timing constraints that includes a plurality of values, and where the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values. The operations of 805 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 805 may be performed by a data providing component 625 as described with reference to FIG. 
6.[0096] In some alternative examples, the memory module may provide (e.g., transmit) various information to the host device 105-a. For example, the memory module may provide (e.g., transmit, convey, indicate), to the host device, an indication of a value of or a value itself of a location (e.g., a register) including SPD data of the memory module, the SPD data indicative of or including a first timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin. In some examples, the memory module may provide (e.g., transmit) the various information concurrently, simultaneously, or serially with another memory module.[0097] At 810, the method may include communicating with the host device according to the first speed bin. The operations of 810 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 810 may be performed by a communication component 630 as described with reference to FIG. 6. [0098] In some examples, an apparatus as described herein may perform a method or methods, such as the method 800. 
The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for providing, to a host device, a value of a register including serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin, where the memory module supports operations according to a set of timing constraints that includes a plurality of values, and where the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values, and communicating with the host device according to the first speed bin.[0099] In some examples of the method 800 and the apparatus described herein, each of the at least one of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.[0100] In some examples of the method 800 and the apparatus described herein, the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay.[0101] In some examples of the method 800 and the apparatus described herein, the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module.[0102] It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined.[0103] Another apparatus is described. 
The apparatus may include a circuit configured to cause the apparatus to read, by the apparatus, a value of a register including serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin, select, for communication with the memory module, a second speed bin associated with a second clock rate at the apparatus and the timing constraint, where the apparatus supports operations according to a set of timing constraints that includes a plurality of values, and where the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values, and communicate with the memory module according to the second speed bin.[0104] In some examples, the circuit may be further configured to cause the apparatus to read, by the apparatus, a value of a second register including SPD data of a second memory module, the SPD data of the second memory module indicative of a second timing constraint for operating the second memory module at a corresponding third clock rate, the second timing constraint and a third clock rate associated with a third speed bin, select, for communication with the second memory module, the second speed bin associated with the second clock rate at the apparatus and the timing constraint, where the second timing constraint may be one of the subset of the set of timing constraints, and communicate with the second memory module according to the second speed bin.[0105] In some examples of the apparatus, the circuit may be further configured to cause the apparatus to downclock, at the apparatus, from a fourth clock rate to the second clock rate based at least in part on reading the value of the second register including the SPD data of the second memory module, where communicating with the memory 
module and the second memory module according to the second speed bin may be based at least in part on the downclocking.[0106] In some examples, communicating with the second memory module may be based at least in part on a first value of the second timing constraint being associated with a shorter duration than a second value of the timing constraint.[0107] In some examples, communicating with the memory module and the second memory module according to the second speed bin may be based at least in part on the corresponding first clock rate and the third clock rate each being equal to or greater than the second clock rate.[0108] In some examples of the apparatus, the circuit may be further configured to cause the apparatus to downclock, at the apparatus, from a third clock rate to the second clock rate based at least in part on reading the value of the register including the SPD data of the memory module, where communicating with the memory module according to the second speed bin may be based at least in part on the downclocking. [0109] In some examples, the circuit may be further configured to cause the apparatus to select the timing constraint from the subset of the set of timing constraints as part of selecting the second speed bin.[0110] In some examples of the apparatus, each of the at least one of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.[0111] In some examples of the apparatus, the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay.[0112] In some examples of the apparatus, the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module.[0113] Another apparatus is described. 
The apparatus may include a circuit configured to cause the apparatus to provide, to a host device, a value of a register including serial presence detect (SPD) data of a memory module, the SPD data indicative of a timing constraint for operating the memory module at a corresponding first clock rate, the timing constraint and the corresponding first clock rate associated with a first speed bin, where the memory module supports operations according to a set of timing constraints that includes a plurality of values, and where the timing constraint is selected from a subset of the set of timing constraints, the subset exclusive of at least one of the plurality of values, and communicate with the host device according to the first speed bin.[0114] In some examples of the apparatus, each of the at least one of the plurality of values excluded from the subset has a higher magnitude than each value of the plurality of values in the subset.[0115] In some examples of the apparatus, the timing constraint corresponds to one or more of a row precharge delay, a row address to column address delay, or an array access delay.[0116] In some examples of the apparatus, the timing constraint corresponds to a quantity of clock cycles for accessing a memory array of the memory module.[0117] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. 
Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths.[0118] The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.[0119] The term “coupling” refers to the condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals are capable of being communicated between components over the conductive path. 
When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.[0120] The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.[0121] The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.[0122] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. 
The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor’s threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” when a voltage less than the transistor’s threshold voltage is applied to the transistor gate.[0123] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all of the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.[0124] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. 
If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0125] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.[0126] For example, the various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. 
A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0127] As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”[0128] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium.
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.[0129] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
Apparatuses, methods, and electronic devices are disclosed for antenna and cabling unification. In an example aspect, an electronic device includes a cabling connector, a digital interface, and a radio. The cabling connector includes multiple pins configured to couple the electronic device to a device connector of an external cabling apparatus. The multiple pins include a positive digital pin, a negative digital pin, and a first other pin. The digital interface includes a positive interface node and a negative interface node. The positive interface node is coupled to the positive digital pin, and the negative interface node is coupled to the negative digital pin. The radio includes an antenna node coupled to at least the first other pin.
CLAIMSWhat is claimed is:1. An electronic device, comprising:a cabling connector including multiple pins configured to couple the electronic device to a device connector of an external cabling apparatus, the multiple pins including a positive digital pin, a negative digital pin, and a first other pin;a digital interface including a positive interface node and a negative interface node; the positive interface node coupled to the positive digital pin, and the negative interface node coupled to the negative digital pin; anda radio including an antenna node coupled to at least the first other pin.2. The electronic device of claim 1, wherein the radio is configured to demodulate an analog wireless signal obtained via the antenna node, and wherein the digital interface is configured to provide digital audio data via the positive interface node and the negative interface node based on the demodulated analog wireless signal.3. The electronic device of claim 2, wherein the radio comprises a frequency modulated (FM) radio, and wherein the digital interface is configured to provide digital audio data based on a demodulated analog FM radio wireless signal.4. The electronic device of claim 1, wherein:the cabling connector comprises a Universal Serial Bus (USB) Type-C connector; andthe first other pin comprises a sideband use (SBU) pin.5. The electronic device of claim 4, wherein the positive digital pin comprises a positive differential (DP) pin, and the negative digital pin comprises a negative differential (DN) pin.6. The electronic device of claim 4, further comprising:switching circuitry coupled to the cabling connector, the switching circuitry including a first switch,wherein the antenna node is switchably coupled to a first SBU (SBU1) pin or a second SBU (SBU2) pin via the first switch.7. 
The electronic device of claim 6, further comprising:a connector interface controller configured to determine a pin of the SBU1 pin or the SBU2 pin that is coupled to an antenna of the cabling apparatus, wherein: the first switch includes:a multi-node side that is coupled to the SBU1 pin and the SBU2 pin; anda single node side that is coupled to the radio; andthe connector interface controller is configured to cause the first switch to enter a switch state that couples the determined pin of the SBU1 pin or the SBU2 pin to the radio.8. The electronic device of claim 6, further comprising:analog audio codec circuitry coupled to the switching circuitry, wherein the switching circuitry includes:a second switch having a single node side that is coupled to the DN pin and having a multi-node side that is coupled to the analog audio codec circuitry and the digital interface; anda third switch having a single node side that is coupled to the DP pin and having a multi-node side that is coupled to the analog audio codec circuitry and the digital interface.9. A method, comprising:determining a first terminal node of multiple terminal nodes that is coupled to an antenna of a cabling apparatus, the multiple terminal nodes coupled to a cabling connector of a mobile device that is connected to the cabling apparatus; receiving a wireless signal from the antenna via the first terminal node; converting the wireless signal to digital audio data; andtransmitting the digital audio data over multiple audio wires of the cabling apparatus via a second terminal node and a third terminal node of the multiple terminal nodes.10. 
The method of claim 9, wherein the determining comprises analyzing the first terminal node and a fourth terminal node of the multiple terminal nodes, including at least one of:sensing if the first terminal node or the fourth terminal node is coupled to ground;detecting an impedance at the first terminal node or the fourth terminal node; ordemodulating a radio wireless signal received via the first terminal node or the fourth terminal node.11. The method of claim 9, wherein the transmitting comprises:generating a differential digital signal based on the digital audio data; and propagating the differential digital signal over a positive audio signal wire and a negative audio signal wire of the multiple audio wires respectively via the second terminal node and the third terminal node of the multiple terminal nodes.12. The method of claim 9, wherein the determining comprises causing a switch to toggle between the first terminal node and a fourth terminal node of the multiple terminal nodes.13. The method of claim 12, wherein:one terminal node of the first terminal node or the fourth terminal node is coupled to a first sideband use (SBU1) pin of the cabling connector, and another terminal node of the first terminal node or the fourth terminal node is coupled to a second sideband use (SBU2) pin of the cabling connector; andthe determining comprises causing the switch to select to couple the first terminal node to a radio.14. 
The method of claim 9, wherein:the first terminal node is coupled to at least one of a first sideband use (SBU1) pin or a second sideband use (SBU2) pin of the cabling connector;the wireless signal comprises a frequency-modulated (FM) radio wireless signal;the second terminal node is coupled to a negative differential (DN) pin of the cabling connector, and the third terminal node is coupled to a positive differential (DP) pin of the cabling connector;the determining comprises causing a switch to couple the first terminal node to a radio;the receiving comprises receiving the FM radio wireless signal from the antenna via at least one of the SBU1 pin or the SBU2 pin of the cabling connector; the converting comprises converting the FM radio wireless signal to the digital audio data; andthe transmitting comprises transmitting the digital audio data over the multiple audio wires of the cabling apparatus via the DN pin and the DP pin of the cabling connector.15. An apparatus comprising:a device connector including multiple terminal nodes;an audio endpoint including multiple endpoint nodes; anda cable coupled between the device connector and the audio endpoint, the cable including:multiple wires coupled between at least two respective nodes of the multiple terminal nodes of the device connector and at least two respective nodes of the multiple endpoint nodes of the audio endpoint; andan antenna coupled to at least one of the multiple terminal nodes of the device connector and uncoupled from the audio endpoint.16. The apparatus of claim 15, wherein:the cable further includes a shielding component; andthe shielding component is interposed between the antenna and at least a portion of the multiple wires.17. The apparatus of claim 16, wherein:the shielding component comprises a cylindrical shielding layer that encases the at least a portion of the multiple wires; andthe antenna is disposed external to the shielding component.18. 
The apparatus of claim 16, wherein the shielding component is configured to shield the antenna from radio frequency (RF) interference radiated from the at least a portion of the multiple wires as digital audio data propagates along the at least a portion of the multiple wires.19. The apparatus of claim 15, wherein the multiple wires coupled between the at least two respective nodes of the multiple terminal nodes of the device connector and the at least two respective nodes of the multiple endpoint nodes of the audio endpoint include:a positive audio signal wire;a negative audio signal wire;a power wire; anda ground wire.20. The apparatus of claim 19, wherein:the positive audio signal wire and the negative audio signal wire are configured to propagate digital audio data;the power wire is configured to provide to the audio endpoint power from an electronic device via the device connector;the ground wire is configured to provide a reference for the power provided from the electronic device via the device connector; andthe audio endpoint comprises a digital headset.21. The apparatus of claim 20, wherein the digital headset includes: a digital interface having the at least two of the multiple endpoint nodes; an analog interface communicatively coupled to the digital interface;at least one speaker coupled to an analog output of the analog interface; and at least one microphone coupled to an analog input of the analog interface.22. The apparatus of claim 20, wherein the digital headset includes: a power supply unit having a power endpoint node of the multiple endpoint nodes of the audio endpoint, the power endpoint node coupled to the power wire; anda ground endpoint node of the multiple endpoint nodes of the audio endpoint, the ground endpoint node coupled to the ground wire.23. The apparatus of claim 15, wherein the device connector is configured to comport with a Universal Serial Bus (USB) specification for a Type-C connector.24. 
The apparatus of claim 23, wherein:the device connector includes a sideband use (SBU) pin; andthe SBU pin is coupled to the at least one terminal node to which the antenna is coupled.25. The apparatus of claim 24, wherein:the device connector includes:a positive differential (DP) pin;a negative differential (DN) pin;a bus power (VBUS) pin; anda ground (GND) pin;the antenna is coupled to the SBU pin via the at least one terminal node of the device connector; andthe multiple wires include:a positive audio signal wire coupled to the DP pin via the at least two of the multiple terminal nodes of the device connector;a negative audio signal wire coupled to the DN pin via the at least two of the multiple terminal nodes of the device connector;a power wire coupled to the VBUS pin via a terminal node of the multiple terminal nodes of the device connector; anda ground wire coupled to the GND pin via another terminal node of the multiple terminal nodes of the device connector.26. The apparatus of claim 15, wherein:the apparatus comprises analog audio codec circuitry coupled between the multiple terminal nodes and the multiple endpoint nodes;the multiple wires are coupled between the at least two of the multiple terminal nodes of the device connector and the at least two of the multiple endpoint nodes of the audio endpoint via the analog audio codec circuitry; andthe antenna is uncoupled from the analog audio codec circuitry.27. An electronic device, comprising:means for coupling the electronic device to an external cabling apparatus; means for processing analog signals wirelessly received via an antenna, the means for processing analog signals being coupled to the means for coupling; and means for providing digital audio data based on the analog signals wirelessly received via the antenna, the means for providing digital audio data being coupled to the means for processing analog signals and the means for coupling.28. 
The electronic device of claim 27, wherein:the means for coupling comprises a receptacle comporting with a Universal Serial Bus (USB) Type-C configuration; andthe means for processing analog signals is coupled to a first sideband use (SBU1) pin or a second sideband use (SBU2) pin of the receptacle.29. The electronic device of claim 27, further comprising:means for switching coupled between a first pin and a second pin of the means for coupling, and the means for processing analog signals; andmeans for controlling the means for switching to selectively couple the first pin or the second pin to the means for processing analog signals.30. The electronic device of claim 27, wherein:the means for coupling comprises multiple pins including a positive digital pin and a negative digital pin; andthe means for providing digital audio data is coupled to the positive digital pin and the negative digital pin.
ANTENNA AND CABLING UNIFICATIONCROSS-REFERENCE TO RELATED APPLICATION[0001] This application claims the benefit of U.S. Provisional Application No. 62/548,379, filed 21 August 2017, the disclosure of which is hereby incorporated by reference in its entirety herein.TECHNICAL FIELD[0002] This disclosure relates generally to wireless signal reception and, more specifically, to enabling radio frequency (RF) wireless signal reception (e.g., for frequency modulation (FM) wireless signals) by unifying an antenna with a cabling apparatus that includes audio wires.BACKGROUND[0003] Examples of electronic devices include desktop computers, notebook computers, tablet computers, smartphones, and wearable devices such as a smartwatch, a fitness tracker, or intelligent glasses. People use electronic devices for productivity, communication, and entertainment purposes. For example, people play media, such as audio or video, using electronic devices. The media may be stored locally at an electronic device or transmitted to the electronic device. With the advent of high-bandwidth streaming capabilities, many people stream music, live radio, and podcasts to their electronic devices using Wi-Fi or cellular networks.[0004] However, Wi-Fi hotspots are typically limited to areas in or near buildings. Further, Wi-Fi hotspots sometimes require a fee or a privileged login access, like a membership or separate purchase. Cellular networks tend to offer a greater coverage area than Wi-Fi hotspots and are adept at servicing electronic devices that are in motion, such as if a user is traveling or exercising. Unfortunately, cellular networks are bandwidth limited in the sense that additional streamed bytes cost additional funds, either on a per-byte basis or by pre-purchasing a larger bucket of bytes (e.g., paying "X" dollars per gigabyte (GB)). 
In view of these issues, it is beneficial for electronic devices to have another option for obtaining audio that does not require streaming from Wi-Fi or cellular networks. This other option is terrestrial radio.[0005] Terrestrial radio includes, for example, frequency modulation (FM) signals that cover different portions of the electromagnetic spectrum in different countries across a range of 65 to 108 megahertz (MHz). For instance, the U.S. allocates 87.5 to 108.0 MHz to FM radio. This terrestrial radio option is attractive to many users of electronic devices for several reasons. It is an especially attractive option for broadcast radio because terrestrial radio is typically free. Additionally, radio signals can be received while an electronic device is in motion or is located in a remote area far from Wi-Fi hotspots or even cellular coverage. Further, listening to terrestrial radio can save battery power and enable a bandwidth allocation of a cellular data plan to be conserved. However, enabling an electronic device to receive terrestrial radio signals can be challenging.SUMMARY[0006] In some situations, it is desirable to receive radio signals, such as commercial frequency-modulated (FM) radio signals, using an electronic device, like a smartphone or a smartwatch. With traditional analog audio headsets (e.g., headphones or earbuds), a left (L) audio line or a right (R) audio line can be simultaneously used as a frequency-modulated (FM) radio antenna. However, with a digital audio headset, digital data that is propagating on audio data lines can interfere with FM radio signal reception.[0007] To address this problem, antenna and cabling unification is described herein. Example implementations employ a separate, partially floating, wire as a radio antenna that is included as part of a cabling apparatus. The antenna is coupled to a device connector of the cabling apparatus, and the device connector can be coupled to an associated electronic device. 
The antenna extends along a cable toward, but is not electrically coupled to, an audio endpoint on an end of the cabling apparatus that is opposite that of the device connector. In other example implementations, the antenna is shielded from other wires stretching along the cable, such as from digital data traffic on digital audio wires or from a power wire. In yet other example implementations, with a cabling apparatus that comports with a Universal Serial Bus (USB) Type-C architecture, the antenna can be coupled to at least one sideband use (SBU) pin of a device connector of the cabling apparatus. Further, an electronic device can use an antenna of such a cabling apparatus to receive a radio signal and then provide audio data on the cabling apparatus based on the received radio signal. In any one or more of these manners, a cabling apparatus can be used to support a radio operation that is provided by an associated electronic device. Moreover, other example implementations for antenna and cabling unification are described herein with respect to various apparatuses, systems, electronic devices, arrangements, methods, and so forth.[0008] In an example aspect, an electronic device is disclosed. The electronic device includes a cabling connector, a digital interface, and a radio. The cabling connector includes multiple pins configured to couple the electronic device to a device connector of an external cabling apparatus. The multiple pins include a positive digital pin, a negative digital pin, and a first other pin. The digital interface includes a positive interface node and a negative interface node. The positive interface node is coupled to the positive digital pin, and the negative interface node is coupled to the negative digital pin. The radio includes an antenna node coupled to at least the first other pin.[0009] In an example aspect, a method is disclosed. 
The method includes determining a first terminal node of multiple terminal nodes that is coupled to an antenna of a cabling apparatus. The multiple terminal nodes are coupled to a cabling connector of a mobile device that is connected to the cabling apparatus. The method also includes receiving a wireless signal from the antenna via the first terminal node. The method additionally includes converting the wireless signal to digital audio data. The method further includes transmitting the digital audio data over multiple audio wires of the cabling apparatus via a second terminal node and a third terminal node of the multiple terminal nodes.[0010] In an example aspect, an apparatus is disclosed. The apparatus includes a device connector, an audio endpoint, and a cable. The device connector includes multiple terminal nodes. The audio endpoint includes multiple endpoint nodes. The cable is coupled between the device connector and the audio endpoint. The cable includes multiple wires and an antenna. The multiple wires are coupled between at least two respective nodes of the multiple terminal nodes of the device connector and at least two respective nodes of the multiple endpoint nodes of the audio endpoint. The antenna is coupled to at least one of the multiple terminal nodes of the device connector and uncoupled from the audio endpoint.[0011] In an example aspect, an electronic device is disclosed. The electronic device includes means for coupling the electronic device to an external cabling apparatus. The electronic device also includes means for processing analog signals wirelessly received via an antenna, the means for processing analog signals being coupled to the means for coupling. 
The electronic device further includes means for providing digital audio data based on the analog signals wirelessly received via the antenna, the means for providing digital audio data being coupled to the means for processing analog signals and the means for coupling.BRIEF DESCRIPTION OF DRAWINGS[0012] FIG. 1-1 illustrates an example environment including an electronic device and an example cabling apparatus that can implement antenna and cabling unification, with the cabling apparatus including a device connector, a cable, and an audio endpoint.[0013] FIG. 1-2 illustrates an example environment including a cabling apparatus and an example electronic device that can implement antenna and cabling unification, with the electronic device including a cabling connector.[0014] FIG. 2 illustrates an exploded view of an example cable portion of a cabling apparatus.[0015] FIG. 3 illustrates a schematic view of an example cable portion of a cabling apparatus.[0016] FIG. 4 illustrates a schematic view of an example device connector portion of a cabling apparatus, including an example implementation that comports with a Universal Serial Bus (USB) Type-C architecture. [0017] FIG. 5 illustrates an example cabling apparatus in which the audio endpoint is implemented as a digital headset apparatus and the cable portion includes an antenna.[0018] FIG. 6 illustrates an example cabling apparatus in which the audio endpoint is implemented as an analog audio adapter socket and the cable portion includes an antenna.[0019] FIG. 7 illustrates an example electronic device including a cabling connector, switching circuitry, and a connector interface controller for antenna and cabling unification.[0020] FIG. 8 is a flow diagram illustrating an example process for antenna and cabling unification that can be performed by an electronic device that is coupled to a cabling apparatus as described herein.[0021] FIG. 
9 illustrates an example electronic device that includes a cabling connector, switching circuitry, and a connector interface controller to implement antenna and cabling unification as described herein.DETAILED DESCRIPTION[0022] Although many users of electronic devices choose to stream audio digitally over a Wi-Fi or cellular connection, listening to terrestrial radio is also attractive to many users. This is especially true for receiving live or local broadcast radio, for saving battery power, or for reducing bandwidth usage on metered cellular plans. Unfortunately, integrating broadcast radio usage with digital electronic devices presents a number of problems. For example, portable electronic devices are usually smaller than a typical frequency modulation (FM) radio antenna. Further, the propagation of digital audio data can interfere with the reception of radio frequency (RF) wireless signals, such as those of FM wireless signals.[0023] Consider, for instance, the use of digital headsets that utilize a Universal Serial Bus (USB) connector, such as a USB Type-C connector. With a USB digital connector, the digital USB packet traffic, which includes audio data, can de-sense sensitive FM reception. If an FM antenna is located proximate to a cable that is coupled to a digital headset, such as by being extended along a length of the cable, the FM antenna is exposed to the digital USB packet traffic. Unfortunately, a wireline FM antenna cannot be easily accommodated within a housing of a portable electronic device because the required antenna size is too large.[0024] In the context of FM radio, an antenna that is long relative to those used in Wi-Fi and cellular communications is employed. This is because commercial FM broadcasting stations typically operate on a lower frequency (e.g., the 76-108 megahertz (MHz) frequency range) than those used for cellular radios (e.g., which can start around 700 MHz and reach into the gigahertz (GHz) frequency range). 
For example, much of the world devotes 87.5 to 108 MHz to commercial FM radio signals, and Japan assigns 76 to 95 MHz to commercial FM radio signals. An effective antenna for this frequency band includes, for instance, a monopole having a length of approximately one quarter of a wavelength of the signal, which is about 76 cm or 2.5 feet. Thus, an antenna for receiving terrestrial FM radio signals is typically approximately 30 inches long.[0025] Consequently, incorporating an FM radio antenna directly into a smartphone, for example, is prohibitively difficult due to the size differential between a smartphone form factor that is 4 to 6 inches (4-6") along a given dimension and an antenna that is over two feet long. To address this size difference, one approach to providing an FM radio antenna to an electronic device having a relatively small form factor is to leverage a wired cord extending between the electronic device and a headset, such as a pair of headphones, earphones, or earbuds. For example, one of the analog audio lines (e.g., the left (L) audio line or the right (R) audio line) leading to an analog headset can be used, or shared, as the FM radio antenna. However, with the emergence of all-digital headsets, this has become problematic.[0026] Consider a digital headset that comports, for example, with a multipurpose USB Type-C protocol. Sharing one of the digital audio data lines as an FM antenna is not feasible because the digital USB data packets propagating between the electronic device and a digital interface in the headset generate electromagnetic (EM) noise. This EM noise at least adversely impacts, and can substantially cover, the FM frequency band. The EM noise therefore interferes with the wireless reception of FM radio signals.[0027] In contrast, example approaches as described herein employ a separate wire line as an antenna in a cable extending from a device connector to an audio endpoint. 
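The quarter-wavelength sizing discussed above can be checked with a short calculation. The following is an illustrative sketch only, not part of the disclosed apparatus; its only inputs are the FM band-edge frequencies and the free-space speed of light.

```python
# Quarter-wavelength antenna length across the commercial FM band,
# illustrating the antenna sizing discussed above (explanatory sketch
# only, not part of the disclosed apparatus).

C = 299_792_458.0  # speed of light in meters per second


def quarter_wavelength_m(freq_hz: float) -> float:
    """Return one quarter of the free-space wavelength, in meters."""
    return C / freq_hz / 4.0


# Band edges and a mid-band frequency for commercial FM radio.
for f_mhz in (76.0, 98.0, 108.0):
    length_m = quarter_wavelength_m(f_mhz * 1e6)
    print(f"{f_mhz:5.1f} MHz -> {length_m:.2f} m ({length_m / 0.0254:.0f} in)")
```

At roughly 98 MHz this gives about 0.76 m, around 30 inches, consistent with the approximately 2.5-foot monopole described above; the 76 MHz band edge yields a somewhat longer quarter wavelength, near one meter.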
The device connector, the cable, and the audio endpoint can jointly form a cabling apparatus. In operation, the device connector is coupled to an electronic device via a cabling connector thereof, and the audio endpoint provides at least an audio output, such as a digital headset or an analog adapter. In some cabling apparatus implementations, the antenna is coupled to the device connector on one end and is left floating with respect to the other end. In other words, the antenna can be uncoupled from the audio endpoint. In other implementations, the antenna is electromagnetically separated from multiple other wires using a shielding component (e.g., by isolating the multiple wires). The multiple wires, such as audio wires or a power wire, extend from the device connector to the audio endpoint and can be enclosed within a shielding layer. In still other implementations, an antenna disposed along a cable portion of a cabling apparatus can be coupled to a device connector but uncoupled from an audio endpoint while also being shielded from the EM signaling occurring along multiple wires disposed along the cable portion.[0028] In example electronic device implementations, an electronic device includes switching circuitry and a connector interface controller to operate with a cabling apparatus as described herein. The switching circuitry is coupled to a cabling connector, which can be coupled to a device connector of the cabling apparatus. The switching circuitry includes multiple terminal nodes, at least some of which are individually coupled to respective ones of multiple wires and an antenna of a cable portion of the cabling apparatus via the cabling connector. The connector interface controller can determine which terminal node of multiple terminal nodes is coupled to the antenna and can activate at least one switch of the switching circuitry to route FM radio signals from the antenna to an FM receiver of the electronic device. 
[0029] In some example implementations, a cabling apparatus comports with a USB Type-C standard. A device connector of a cabling apparatus includes multiple terminal nodes that are coupled both to a cable of the cabling apparatus and to multiple pins (e.g., interface or connection pins). The pins of the device connector are configured to be connected to a cabling connector of an electronic device, which likewise comports with the USB Type-C standard. Thus, the device connector couples the multiple terminal nodes to the pins, and the pins comport with a USB Type-C connection standard. In such an implementation, an antenna of the cable can be coupled to at least one of two specific pins of a USB Type-C connector. For instance, the antenna can be coupled to at least one sideband use (SBU) pin (e.g., a first sideband use (SBU1) pin or a second sideband use (SBU2) pin), as is explained herein below.[0030] In these manners, a cabling apparatus that is designed for digital signal propagation can include an antenna for use with terrestrial radio, such as FM radio. A partially floating antenna wire can be shielded from other wires that carry digital audio data or power to enable the antenna to be sensitive to wireless radio signals. Further, the cabling apparatus can be implemented to comport with a USB Type-C connector by selectively coupling the antenna to a particular pin of a USB Type-C connector, such as an SBU pin. An electronic device can be configured to interface with a cabling apparatus and utilize the antenna thereof for radio reception.[0031] FIG. 1-1 illustrates an environment 100-1 including an electronic device 112 and an example cabling apparatus 102 that can implement antenna and audio cabling unification. As shown, the cabling apparatus 102 includes a device connector 104, a cable 106, and an audio endpoint 108. The electronic device 112 includes a cabling connector 114.
The cabling connector 114 mates to the device connector 104 such that the cable 106 or the audio endpoint 108 is communicatively coupled to the electronic device 112. For example, the cabling connector 114 can be implemented as a receptacle, and the device connector 104 can be implemented as a plug. In some implementations, the cabling connector 114 and the device connector 104 may be configured in a form that complies with a USB Type-C connection standard. Also, although shown as an audio endpoint 108, an endpoint of the cabling apparatus 102 can alternatively be realized with functionalities other than, or in addition to, those involving audio.[0032] In example implementations, the cable 106 extends from the device connector 104 to the audio endpoint 108. The cable 106 includes multiple wires 110-1 to 110-n, with "n" representing some positive integer. The multiple wires 110-1 to 110-n include at least multiple audio wires and at least one power wire, as is described below with reference to FIG. 3. The multiple wires 110-1 to 110-n extend from the device connector 104 to the audio endpoint 108. Thus, the multiple wires 110-1 to 110-n are coupled to the device connector 104 and the audio endpoint 108. The cable 106 also includes at least one antenna 116. The antenna 116 can be, for example, configured or tuned to radiate at a frequency of a broadcast FM radio station, such as 76 to 108 MHz for commercial FM radio broadcasts in many countries. Accordingly, the antenna 116 can have a length that is, for instance, one-quarter to one-half a size of a wavelength of the signal to be received. For a 76 to 108 MHz frequency signal range, a corresponding wire length range is 2 to 4 feet. Although 3 to 4 feet may provide superior reception, a length of a wireline antenna can range between 24 and 36 inches to match a length of cable linking a device connector to a headset connection for the convenience of an end-user.
However, other antenna lengths can alternatively be used. Further, a discrete chip antenna can be used instead as described below with particular reference to FIG. 6.[0033] Although the antenna 116 has one end coupled to the device connector 104, an opposite end of the antenna 116 can be floating, e.g., electrically uncoupled from the audio endpoint 108. This arrangement enables the antenna 116 to receive and be electrically excited by propagating EM wireless signals, such as those for FM radio, as represented by a wireless signal 118. The antenna 116 can then propagate a radio frequency (RF) signal to the device connector 104. Thus, the antenna 116 can provide at least one radiation mechanism for propagating at least one wireless signal 118 to the device connector 104. The multiple wires 110-1 to 110-n, the antenna 116, the connectors, the endpoints, and so forth are not necessarily depicted to scale in the various figures. As shown in FIG. 1-1 (e.g., and FIG. 1-2), the cabling apparatus 102 is disposed external to the electronic device 112. The cabling apparatus 102 can be external to the electronic device 112 if, for example, the cabling apparatus 102 is disposed outside of a housing (e.g., a metal, plastic, or glass frame or casing) of the electronic device 112, the cabling apparatus 102 can be connected to or disconnected from (e.g., is removably couplable to) the electronic device 112 by an end-user (e.g., without disassembling the electronic device 112), and so forth. Thus, the cabling apparatus 102 can comprise an external cabling apparatus 102.[0034] The cabling apparatus 102 can be realized in many different manners, with two examples depicted in dashed lines as a first cabling apparatus 102-1 and a second cabling apparatus 102-2. Accordingly, an audio endpoint 108 can be realized in multiple different manners.
For example, the audio endpoint 108 can be realized as an integrated headset 108-1 such that the first cabling apparatus 102-1 forms a headset apparatus. Example implementations for headset apparatuses are described below with reference to FIG. 5. Alternatively, an audio endpoint 108 can be realized as an audio jack 108-2, such as an analog audio jack. Thus, as shown in conjunction with the audio jack 108-2 and the second cabling apparatus 102-2, but by way of example only, a cabling apparatus 102 can be implemented as an adapter (e.g., between a digital device and an analog headset). The adapter functionality is provided between the cabling connector 114, which has a digital interface for the electronic device 112, and an analog headset (not explicitly shown) by using the device connector 104 and the cable 106 in conjunction with the audio jack 108-2. In such cases, an audio codec (not shown in FIG. 1-1) can be included as part of the cabling apparatus 102-2. Example implementations for adapter apparatuses are described below with reference to FIG. 6.[0035] FIG. 1-2 illustrates an example environment 100-2 that includes an example electronic device 112 in which antenna and cabling unification can be implemented. In the environment 100-2, the electronic device 112 can communicate with a base station 154 via a cellular wireless signal 118-2 or with a radio station tower 152 via a radio wireless signal 118-1. As shown in the top portion of FIG. 1-2, the electronic device 112 communicates with the base station 154 through a wireless communication link as represented by the cellular wireless signal 118-2. In this example, the electronic device 112 is depicted as a smartphone.
However, the electronic device 112 may be implemented as any suitable computing or other electronic device, such as a broadband router, access point, cellular or mobile phone, gaming device, navigation device, media device, laptop computer, desktop computer, tablet computer, server, network-attached storage (NAS) device, smart appliance, vehicle-based communication system, Internet-of-Things (IoT) device, wearable device, entertainment appliance, streaming device for audio or video, circuit board or chipset, and so forth.[0036] The base station 154 communicates with the electronic device 112 via the cellular wireless signal 118-2, which may be implemented as any suitable type of wireless link. Although depicted as a base station tower of a cellular radio network, the base station 154 may represent or be implemented as another device, such as a satellite, access point, peer-to-peer device, mesh network node, fiber optic line, server device, another electronic device generally, and so forth. Hence, the electronic device 112 may communicate with the base station 154 or another device via a wired connection, a wireless connection, or a combination thereof.[0037] The cellular wireless signal 118-2 can include a downlink of data or control information communicated from the base station 154 to the electronic device 112 and an uplink of other data or control information communicated from the electronic device 112 to the base station 154. The cellular wireless signal 118-2 may be implemented using any suitable communication protocol or standard, such as 3rd Generation Partnership Project Long-Term Evolution (3GPP LTE). Alternatively, a communication between the electronic device 112 and the base station 154 can comport with a Wi-Fi or other wireless standard, such as IEEE 802.11, IEEE 802.16, Bluetooth™, and so forth.[0038] As shown, the electronic device 112 includes a processor 158 and a computer-readable storage medium 160 (CRM 160).
The processor 158 may include any type of processor, such as an application processor or a multi-core processor, that is configured to execute processor-executable instructions (e.g., code) stored by the CRM 160. The CRM 160 may include any suitable type of data storage media, such as volatile memory (e.g., random access memory (RAM)), non-volatile memory (e.g., Flash memory), optical media, magnetic media (e.g., disk or tape), memory with hard-coded instructions, and so forth. In the context of this disclosure, the CRM 160 is implemented to store instructions 162, data 164, and other information of the electronic device 112, and thus does not include transitory propagating signals or carrier waves.[0039] The electronic device 112 may also include input/output ports 166 (I/O ports 166) or a display 168. The I/O ports 166 enable data exchanges or interaction with other devices, networks, or users. The I/O ports 166 may include serial ports (e.g., universal serial bus (USB) ports), parallel ports, audio ports, infrared (IR) ports, and so forth. Thus, in some implementations, the I/O ports 166 can include at least one cabling connector 114 to accept a device connector 104. The display 168 can be realized as a screen or projection that presents graphics of the electronic device 112, such as a user interface associated with an operating system, program, or application. Alternatively or additionally, the display 168 may be implemented as a display port or virtual interface through which graphical content of the electronic device 112 is communicated or presented.[0040] For two-way or bi-directional communication purposes, the electronic device 112 also includes a communication processor 170, a wireless transceiver 156, and an antenna (not shown) that is internal to, or a part of, a housing of the electronic device 112.
The wireless transceiver 156 provides connectivity to respective networks and other electronic devices connected therewith using radio-frequency (RF) wireless signals. Additionally or alternatively, the electronic device 112 may include a wired transceiver, such as an Ethernet or fiber optic interface for communicating over a personal or local network, an intranet, or the Internet. The wireless transceiver 156 may facilitate bi-directional communication over any suitable type of wireless network, such as a wireless local-area network (WLAN), a peer-to-peer (P2P) network, a mesh network, a cellular network, a wireless wide-area network (WWAN), or a wireless personal-area network (WPAN). In the context of the example environment 100-2, the wireless transceiver 156 enables the electronic device 112 to communicate with the base station 154 and networks connected therewith or "directly" with other electronic devices. Although not explicitly shown in FIG. 1-2, the electronic device 112 may also include a wireless receiver to access a navigational network (e.g., the Global Positioning System (GPS) of North America or another Global Navigation Satellite System (GNSS)).[0041] The communication processor 170 may be realized as a communication-oriented processor, such as a baseband modem. The communication processor 170 may be implemented as a system-on-chip (SoC) that provides a digital communication interface for data, voice, messaging, and other applications of the electronic device 112. The communication processor 170 may also include baseband circuitry to perform high-rate sampling processes that can include analog-to-digital conversion (ADC), digital-to-analog conversion (DAC), gain correction, skew correction, frequency translation, and so forth. The communication processor 170 may also include logic to perform in-phase/quadrature (I/Q) operations, such as synthesis, encoding, modulation, demodulation, and decoding.
More generally, the communication processor 170 may be realized as a digital signal processor (DSP) or a processor that is configured to perform signal processing to support communication via one or more networks. Alternatively, ADC or DAC operations may be performed by a separate component or another illustrated component, such as the wireless transceiver 156.[0042] The wireless transceiver 156 can include circuitry, logic, and other hardware for transmitting and receiving a wireless signal for at least one communication frequency band. In operation, the wireless transceiver 156 can implement at least one transceiver unit (e.g., a radio-frequency transceiver unit) to process data and/or signals associated with communicating data of the electronic device 112. Generally, the wireless transceiver 156 can include filters, switches, amplifiers, and so forth for routing and conditioning signals that are transmitted from and received at the electronic device 112 via the cellular wireless signal 118-2. In some cases, components of the wireless transceiver 156 are implemented as separate receiver and transmitter entities. Additionally or alternatively, the wireless transceiver 156 can be realized using multiple or different sections to implement respective receiving and transmitting operations (e.g., using separate transmit and receive chains).[0043] As illustrated, the environment 100-2 also includes a radio station tower 152. The radio station tower 152 transmits (e.g., broadcasts) at least one radio wireless signal 118-1. In some implementations, the radio wireless signal 118-1 is transmitted in the 50 MHz to 150 MHz range, such as the 76 to 108 MHz range for commercial FM radio broadcasts in many countries. Accordingly, some effective lengths for monopole antennas to receive these transmissions are 2-3 feet, which is too long to efficiently incorporate directly into the electronic device 112.
In such situations, a wire that is part of a cabling apparatus 102 (e.g., a headset apparatus 102-1) can be used as an antenna 116 (e.g., of FIGS. 1-1, 2, or 3).[0044] Thus, FIG. 1-2 depicts a headset apparatus 102-1 as an example cabling apparatus 102. The headset apparatus 102-1 includes an audio endpoint 108-1 that comprises a headset, a cable 106, and a device connector 104 (DC 104). The audio endpoint 108-1 can include at least one speaker, at least one microphone, and so forth. The audio endpoint 108-1 can be realized as headphones, earphones, earbuds, and the like. The microphone, if present, can alternatively be disposed somewhere along the cable 106. The cable 106 extends from the audio endpoint 108-1 to the device connector 104 and is coupled to both. As shown, the device connector 104 can comprise a male interface, such as a plug that comports with a USB Type-C protocol and is configured to be inserted into a receptacle of the electronic device 112 that also comports with a USB Type-C protocol as described below with reference to FIG. 4.[0045] In example implementations, the electronic device 112 also includes components to receive or facilitate reception of radio wireless signals, such as the radio wireless signal 118-1. These components can include, in addition to the communication processor 170, a connector interface controller 172, switching circuitry 174, and a cabling connector 114. The switching circuitry 174 is coupled between the cabling connector 114 and the connector interface controller 172. The connector interface controller 172 can include, or can be communicatively coupled to, a radio receiver, an analog audio codec, and so forth, as described below with reference to FIG. 7. The connector interface controller 172 includes logic to control the switching circuitry 174 based on a device connector 104 that is inserted into the cabling connector 114.
The cabling connector 114 can comprise a female interface or receptacle that is configured to accept the device connector 104 to establish an electrical connection therebetween across one or more pins (e.g., contacts) (not shown in FIG. 1-1). These pins can comport with a USB Type-C protocol as described with reference to FIGS. 4 and 7. The connector interface controller 172, the switching circuitry 174, and the cabling connector 114 can individually or jointly be configured to enable the electronic device 112 to operate with an apparatus implementing antenna and cabling unification.[0046] FIG. 2 illustrates an exploded view of an example cable 106 of a cabling apparatus 102. As shown, the cable 106 includes multiple wires 110-1 to 110-n and an antenna 116, with "n" representing some positive integer. Thus, although three wires 110-1, 110-2, and 110-n are explicitly depicted, more or fewer wires may alternatively be implemented. For example, four (as illustrated in subsequent figures) or more wires may be implemented as the multiple wires 110. In some implementations, the cable also includes at least one of a shielding component 206, protective insulation 204, or a cover 202. The cover 202 encases internal parts of the cable 106 and can repel liquids or dirt. The protective insulation 204 provides physical cushioning and mechanical resiliency to protect the multiple wires 110-1 to 110-3 and the antenna 116 of the cable 106.[0047] The shielding component 206 provides electromagnetic (EM) shielding to at least partially protect the antenna 116 from EM radiation, such as by substantially blocking or isolating the EM radiation generated by the multiple wires 110-1 to 110-3 from the antenna 116.
For example, the shielding component 206 can be interposed between the antenna 116 and at least a portion of the multiple wires 110-1 to 110-3 to block EM radiation at least for frequencies at or around those for which the antenna 116 is expected to receive signals (e.g., 76-108 MHz for an FM radio broadcast). The shielding component 206 can be formed from any material that blocks or appreciably attenuates EM radiation, such as a braided metal sleeve. Consequently, digital data, including audio digital data, that is propagating along one or more of the multiple wires 110-1 to 110-3 does not adversely affect the ability of the antenna 116 to receive or be electrically excited by a wireless signal 118, at least not to an appreciable degree that prevents reception and demodulation of a desired radio signal. Thus, the shielding component 206 can provide at least one mechanism for shielding at least one wireless signal, which is being propagated along the antenna 116 to the device connector 104, from at least one audio signal propagating along audio wires of the multiple wires 110-1 to 110-3. The shielding component 206 may also isolate the multiple wires 110-1 to 110-3 from radiation originating from a source other than the wires 110.[0048] In example implementations, the shielding component 206 is realized as a shielding layer. As shown, particularly in the example depicted in FIG. 2, the shielding component 206 can be implemented as a cylindrical shielding layer or tube that wraps around and encloses the multiple wires 110-1 to 110-3 to electromagnetically isolate these wires from the antenna 116. However, the shielding component 206 can alternatively be implemented in a different shape or form while still providing at least a partial EM barrier between the multiple wires 110-1 to 110-3, which may be radiating RF interference, and the antenna 116.
Although the cable 106 is depicted with a certain complement of parts arranged in a particular manner, the cabling apparatus 102 can be implemented in alternative manners. For example, the protective insulation 204 can be omitted, the cable 106 can be flat or oval, one or more of the multiple wires 110-1 to 110-3 can be individually shielded with respect to the antenna 116 or other external factors instead of being bundled together, the protective insulation 204 can surround the shielding component 206 and the antenna 116 as well as the multiple wires 110-1 to 110-3, and so forth.[0049] FIG. 3 illustrates a schematic view of an example cable 106 of the cabling apparatus 102. As shown, the cable 106 includes multiple wires 110-1 to 110-4, the antenna 116, and the shielding component 206. Although four wires 110-1, 110-2, 110-3, and 110-4 are explicitly depicted, more or fewer wires may alternatively be implemented. For example, the cable 106 may include other types of wires (e.g., non-data wires or non-audio wires), duplicate instances of the depicted wires, and so forth. The shielding component 206 provides an electromagnetic shield to protect the antenna 116 from electromagnetic radiation generated by any one or more of the multiple wires 110-1 to 110-4.[0050] In example implementations, the cable 106 includes four wires 110-1 to 110-4: a positive audio signal wire 110-1, a negative audio signal wire 110-2, a power wire 110-3, and a ground wire 110-4. In FIG. 3, the positive audio signal wire 110-1 (e.g., the positive digital audio data signal wire) and the negative audio signal wire 110-2 (e.g., the negative digital audio data signal wire) are indicated as multiple audio wires 302. However, in alternative implementations, other wires pertaining to the audio endpoint 108 may be included as part of the multiple audio wires 302. For example, the power wire 110-3 that provides power to an audio endpoint 108 or the ground wire 110-4 may be audio-related.
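The four-wire complement just described, plus the floating antenna of FIG. 3, can be summarized in a small data sketch. The dictionary keys and field names below are illustrative only and do not appear in the described implementations.

```python
# Hypothetical summary of the example cable 106 of FIG. 3: four wires
# enclosed within the shielding component 206, plus an antenna 116
# outside the shield with one floating end.
CABLE_106 = {
    "110-1": {"role": "positive audio signal", "shielded": True, "floating_end": False},
    "110-2": {"role": "negative audio signal", "shielded": True, "floating_end": False},
    "110-3": {"role": "power", "shielded": True, "floating_end": False},
    "110-4": {"role": "ground", "shielded": True, "floating_end": False},
    "116": {"role": "FM antenna", "shielded": False, "floating_end": True},
}

# The differential pair carrying audio data 304 consists of the two
# audio signal wires.
audio_pair = [w for w, props in CABLE_106.items() if "audio" in props["role"]]
```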
Also, although indicated as audio wires 302, the depicted multiple wires 110-1 to 110-4 or other wires 110 of the cabling apparatus 102 can alternatively be utilized with functionalities other than, or in addition to, those involving audio. The wires may therefore be referred to more generally as data wires or data signal wires, and may, for example, be configured as a data positive (D+ or DP) wire and a data negative (D- or DN) wire for differential signaling. Further, one or more wires, such as the power wire 110-3 or the ground wire 110-4, may be disposed outside of the shielding component 206, such as by being adjacent to the antenna 116.[0051] The antenna 116 is coupled to the device connector 104 but is uncoupled from the audio endpoint 108. In contrast, the multiple wires 110-1 to 110-4 extend between and are coupled to both the device connector 104 and the audio endpoint 108. In operation, at least one audio signal wire propagates audio data 304 between the device connector 104 and the audio endpoint 108 in either or both directions. For example, the positive audio signal wire 110-1 and the negative audio signal wire 110-2 can both propagate a signal carrying audio data 304 (e.g., sounds, music, or speech). If the audio endpoint 108 is functioning as a speaker, the positive audio signal wire 110-1 or the negative audio signal wire 110-2 propagates (e.g., individually or jointly) audio data 304 from the device connector 104 to the audio endpoint 108 for presenting aurally at the audio endpoint 108. If the audio endpoint 108 is functioning as a microphone, the positive audio signal wire 110-1 or the negative audio signal wire 110-2 propagates (e.g., individually or jointly) audio data 304 from the audio endpoint 108 to the device connector 104 for processing by the electronic device 112 (e.g., of FIGS. 1-1, 1-2, and 7).
In some implementations, the audio endpoint 108 functions as both a microphone and a speaker, for example when the electronic device 112 is configured to process audio input from the audio endpoint 108 and generate noise cancellation information for use by an audio output of the audio endpoint 108 or for full duplex communications.[0052] In some implementations, the positive audio signal wire 110-1 and the negative audio signal wire 110-2 form a differential signal pair that can propagate a differential signal between the device connector 104 and the audio endpoint 108. For instance, the positive audio signal wire 110-1 can propagate a positive or plus portion of a differential signal, and the negative audio signal wire 110-2 can propagate a negative or minus portion of the differential signal. The differential signal can be implemented as a digital differential signal to propagate digital audio data 304. In operation, the power wire 110-3 provides power to the audio endpoint 108 by receiving a supply voltage from the electronic device 112 via the device connector 104 and distributing the supply voltage to the audio endpoint 108. The ground wire 110-4 provides a distribution mechanism for a ground potential of a ground node of the electronic device 112 or of a combined system including the electronic device 112 and the cabling apparatus 102. The ground potential can provide a reference for the supply voltage.[0053] In some implementations, one or more of the wires 110-1 to 110-4 are omitted. In an extreme example, all four of such wires are omitted. Further, the audio endpoint 108 can also be omitted or implemented as an endcap that does not include an electrically active component.
Such an implementation can provide an end-user with an antenna 116, such as an FM antenna, that may be added to an electronic device 112 even if no other components (or wires) are being physically coupled to the electronic device 112 via the device connector 104 at that time with that cabling apparatus 102. In some such implementations, the antenna is implemented in a cable or cord. In other such implementations, the antenna is implemented in a case or cover that the user may attach to an electronic device 112. In addition to the case or cover having a device connector 104 to couple to a cabling connector 114 of an electronic device 112, the case or cover may itself have a cabling connector 114 configured to accept a connector similar to the device connector 104 from other apparatuses.[0054] FIG. 4 illustrates a schematic view of an example device connector 104 of the cabling apparatus 102, including an example implementation that comports with a Universal Serial Bus (USB) Type-C architecture. The device connector 104 includes multiple terminal nodes 402-1 to 402-n, with "n" representing some positive integer. Of these "n" terminal nodes 402-1 to 402-n, six example terminal nodes are specifically depicted in FIG. 4. A first terminal node 402-1 is coupled to the antenna 116, and a second terminal node 402-2 is coupled to a ground node 404. A third terminal node 402-3 is coupled to the positive audio signal wire 110-1, and a fourth terminal node 402-4 is coupled to the negative audio signal wire 110-2. A fifth terminal node 402-5 is coupled to the power wire 110-3, and a sixth terminal node 402-6 is coupled to the ground wire 110-4. Thus, the second terminal node 402-2 and the sixth terminal node 402-6 can both be coupled to the ground node 404 in some implementations.[0055] It is noted that particular numerical terms such as "first," "second," "third," and so forth may be used to differentiate like components or aspects within a given context for clarity.
However, in a different context, a same numerical term may refer to a different component or aspect, or a same component or aspect may have a different numerical identifier for the different context. Thus, a second terminal node may be coupled to ground in one context but coupled to an audio signal wire in another context as indicated in the other context.[0056] In an example operation, these multiple terminal nodes 402-1 to 402-n of the device connector 104 are matched to a corresponding set of multiple terminal nodes at the cabling connector 114 (e.g., of FIGS. 1-1, 1-2, and 7) of the electronic device 112 via one or more input/output pins for each connector. In some implementations, the device connector 104 or the cabling connector 114 may comport with a USB standard, such as a USB Type-C connection specification. In an example USB Type-C connection implementation, the variable "n" represents 24, as there are 24 pins in a USB Type-C connection. Thus, within the device connector 104, each respective terminal node 402 from the cabling side is coupled to a respective pin that comports with a USB Type-C connection on the device side.[0057] An example USB Type-C plug 406 that comports with a Universal Serial Bus (USB) specification for a Type-C connector is depicted in FIG. 4. The USB Type-C plug 406 includes 24 pins arranged in two columns identified as "A" or "B" (where the two columns can be two rows in a different orientation). Four pins can correspond to a ground return (GND), and four pins can correspond to voltage-supplied bus power (VBUS). Eight pins can correspond to "super-speed" differential communication lines for transmission or reception with two differential RX/TX pairs of such pins (SSTXp1, SSTXn1, SSRXp1, SSRXn1, SSRXn2, SSRXp2, SSTXn2, and SSTXp2, due to the reversal feature of the connector).
Additionally, four pins can correspond to differential signaling with one differential pair of such pins (DP, DN, DN, and DP, due to the reversal feature of the connector). Two pins can correspond to a configuration channel (CC1 and CC2). And two pins can correspond to "sideband use" (SBU1 and SBU2). Thus, the USB Type-C plug 406 can provide at least one connection mechanism for comporting with a Universal Serial Bus (USB) Type-C protocol.[0058] Further, the six illustrated terminal nodes 402-1 to 402-6 can be implemented using particular instances of the pins specified for a USB Type-C connection. For example, the first terminal node 402-1 that is tied to the antenna 116 can correspond to, or be coupled to, a first sideband use contact (e.g., an SBU1 or SBU_A pin). The second terminal node 402-2 that is tied to the ground node 404 can correspond to, or be coupled to, a second sideband use contact (e.g., an SBU2 or SBU_B pin). Alternatively, the sideband use pins (SBU1 and SBU2) can be swapped such that the first terminal node 402-1 that is tied to the antenna 116 corresponds to the second sideband use contact (e.g., the SBU2 or SBU_B pin) and the second terminal node 402-2 that is tied to the ground node 404 corresponds to the first sideband use contact (e.g., the SBU1 or SBU_A pin). Thus, in such implementations, one terminal node of the first terminal node 402-1 (which is coupled to the SBU1 pin) or the second terminal node 402-2 (which is coupled to the SBU2 pin) is coupled to the antenna 116, while another terminal node of the first terminal node 402-1 or the second terminal node 402-2 is grounded.[0059] This grounding of one terminal node may facilitate a determination by the electronic device 112 (e.g., of FIGS. 1-1, 1-2, and 7) of which of the two sideband use pins (SBU1 or SBU2) is coupled to the antenna 116. For example, the electronic device 112 may be configured to determine a presence of an antenna coupled to the device connector 104.
In some implementations, the electronic device 112 is configured to detect such an antenna based on signals received over the antenna, e.g., by analyzing one or more pins to determine if signals having certain characteristics are present on those pins, or by applying a signal to those pins and analyzing the response. In other implementations, the electronic device 112 may sense that one or more pins are grounded and may conclude based on the ground sensing that another, non-grounded pin is coupled to an antenna. For instance, in one implementation, the electronic device 112 is configured to determine that one of the first terminal node 402-1 (e.g., the SBU1 pin) or the second terminal node 402-2 (e.g., the SBU2 pin) is coupled to an antenna if the electronic device 112 detects that one (but not both) of the sideband use pins is grounded. This is described further with reference to FIG. 7.

[0060] Continuing with the description of FIG. 4, the third terminal node 402-3 that is tied to an audio signal wire 110-1 can correspond to, or be coupled to, a "positive differential" contact or positive differential (DP) pin (e.g., a DP or SDPp1 pin). The fourth terminal node 402-4 that is tied to the other audio signal wire 110-2 can correspond to, or be coupled to, a "negative differential" contact or negative differential (DN) pin (e.g., a DN or SDPn1 pin). Although the audio signal wires 110-1 and 110-2 that are coupled to these contacts or pins are respectively labeled as a positive audio signal wire and a negative audio signal wire in FIGS. 3 and 4, those of skill in the art will understand that this description is used as an example according to certain audio-related implementations that are described herein.
However, these wires may alternatively be configured generally as Data+ and Data- wires or connections pursuant to one or more USB standards (such as the USB Type-C standard, for example when the cabling apparatus 102 is configured as an adapter for one or more devices). The fifth terminal node 402-5 that is tied to the power wire 110-3 can correspond to, or be coupled to, a "bus power" contact or bus power (VBUS) pin (e.g., a VBUS or PWR_VBUS1 or PWR_VBUS2 pin). And the sixth terminal node 402-6 that is tied to the ground wire 110-4 can correspond to, or be coupled to, a "ground" contact or ground (GND) pin (e.g., a GND or GND_PWRrt1 or GND_PWRrt2 pin). The coupling of wires from the cable 106 to these six example pins for a USB Type-C connection is depicted more explicitly in FIGS. 5 and 6.

[0061] However, these are merely example pins for implementing antenna and audio cabling unification with a USB Type-C connection. Aspects of antenna and audio cabling unification can be implemented with other USB connection types as well as with non-USB connection types. Also, although not illustrated in FIG. 4, a cabling connector 114 (e.g., of FIGS. 1-1, 1-2, and 7) of an electronic device 112 can include corresponding terminal nodes that are coupled to I/O or interface pins to interface with the device connector 104 of the cabling apparatus 102.

[0062] FIG. 5 illustrates an example cabling apparatus 102-1 in which the audio endpoint 108-1 is implemented as a digital headset 502 and the cable 106 includes an antenna 116. Here, the device connector 104 is implemented as a USB Type-C plug 406. As shown in FIG. 5, the positive (digital) audio signal wire 110-1 is coupled to the DP pin, and the negative (digital) audio signal wire 110-2 is coupled to the DN pin. The power wire 110-3 is coupled to the VBUS pin, and the ground wire 110-4 is coupled to the GND pin. The antenna 116 is coupled to the SBU1 pin, and the SBU2 pin is grounded.
Alternatively, the antenna 116 can be coupled to the SBU2 pin, and the SBU1 pin can be shorted by being coupled to ground. Other antenna-coupling options include: coupling the antenna 116 to both the SBU1 pin and the SBU2 pin; or coupling the antenna 116 to either the SBU1 pin or the SBU2 pin, and also coupling the two SBU1 and SBU2 pins to each other via a capacitor. One end of the antenna 116 is floating or uncoupled from the audio endpoint 108-1. By disposing the antenna 116 outside of a ground shield (not shown in FIG. 5) that encloses one or more wires (e.g., the multiple wires 110-1 to 110-4), the antenna 116 can operate more quietly from an EM perspective.

[0063] In some implementations, the audio endpoint 108-1 is implemented as a digital headset 502 including digital audio circuitry and multiple endpoint nodes 510-1 to 510-4 to couple with the wires of the cable 106. Although four endpoint nodes 510-1, 510-2, 510-3, and 510-4 are explicitly depicted, more or fewer endpoint nodes may alternatively be implemented. As shown on the left, the digital headset 502 includes a ground node 504, a power supply unit 508, and a digital interface 506. As shown on the right, the digital headset 502 also includes an analog interface 516, one or more speakers 518, and at least one microphone 520. The digital interface 506 includes a first endpoint node 510-1 coupled to the positive audio signal wire 110-1 and a second endpoint node 510-2 coupled to the negative audio signal wire 110-2. The power supply unit 508 includes a power endpoint node 510-3 that is coupled to the power wire 110-3.
The ground node 504 corresponds to a ground endpoint node 510-4 that is coupled to the ground wire 110-4 to provide an extended ground reference between the digital headset 502 and an electronic device 112.

[0064] The digital interface 506 transmits or receives digital differential data over the positive and negative audio signal wires 110-1 and 110-2 via the first and second endpoint nodes 510-1 and 510-2. The digital interface 506 can be realized using, for example, USB interface circuitry, such as logic circuitry that comports with a USB Type-C protocol. The power supply unit 508 receives a supply voltage from the power wire 110-3 via the power endpoint node 510-3. The power supply unit 508 can be implemented with, for example, a voltage converter or a voltage regulator (e.g., a switched-mode power supply (SMPS) or a low-dropout (LDO) regulator) to generate a local supply voltage on a power rail 522. The power rail 522 distributes power at the local supply voltage to the digital interface 506 and the analog interface 516.

[0065] The digital interface 506 is communicatively coupled to the analog interface 516 to exchange audio data therebetween. An analog output 514 of the analog interface 516 is coupled to the speaker 518, and an analog input 512 of the analog interface 516 is coupled to the microphone 520. The analog interface 516 includes circuitry to prepare an analog audio signal and provide the prepared analog audio signal to the speaker 518. The analog interface 516 can also include circuitry to receive an analog audio signal from the microphone 520 and process the received analog audio signal for transmission to the digital interface 506. In example operations, audio data is exchanged between the digital interface 506 and the analog interface 516. If the digital interface 506 includes ADC and DAC circuitry, the audio data can be exchanged in an analog format.
On the other hand, if the analog interface 516 includes ADC and DAC circuitry, the audio data can be exchanged in a digital format. Regardless, the analog interface 516 exchanges analog audio data with the speaker 518 and the microphone 520. Thus, the digital headset 502 can provide at least one mechanism for providing an aural output responsive to a digital audio signal received via multiple wires, such as the wires 110-1 and 110-2.

[0066] FIG. 6 illustrates an example cabling apparatus 102-2 in which the audio endpoint 108-2 is implemented as an analog audio adapter socket 602 and the cable 106 includes an antenna 116. As depicted, the cabling apparatus 102-2 includes analog audio codec circuitry 608, which includes at least a digital-to-analog converter (DAC) 610 or an analog-to-digital converter (ADC) 612. Here, the device connector 104 is implemented as a USB Type-C plug 406. As shown in FIG. 6, the positive (digital) audio signal wire 110-1 is coupled to the DP pin, and the negative (digital) audio signal wire 110-2 is coupled to the DN pin. The power wire 110-3 is coupled to the VBUS pin, and the ground wire 110-4 is coupled to the GND pin. The antenna 116 is coupled to the SBU1 pin. Alternatively, the antenna 116 can be coupled to the SBU2 pin. Further, the antenna 116 can instead be coupled to both the SBU1 and the SBU2 pins, or the antenna 116 can be coupled to either the SBU1 pin or the SBU2 pin with the two pins also coupled together via a capacitor (not shown). As illustrated, one end of the antenna 116 is floating or uncoupled from the audio endpoint 108-2 and from the analog audio codec circuitry 608.

[0067] In some implementations, the audio endpoint 108-2 is implemented as an adapter socket 602. The illustrated analog audio adapter socket 602 includes a receptacle to accept an analog audio plug 604, such as for a 3.5 millimeter (mm) audio jack.
The receptacle of the adapter socket 602 includes multiple contacts labeled as "R," "L," "MIC," and "GND." Each respective contact "R," "L," "MIC," and "GND" corresponds to a respective endpoint node 510-5, 510-6, 510-7, and 510-8 of multiple endpoint nodes 510-5 to 510-8. Although four endpoint nodes 510-5, 510-6, 510-7, and 510-8 are explicitly depicted, more or fewer endpoint nodes may alternatively be implemented at the adapter socket 602. Additionally, the contacts of the adapter socket 602, which are coupled to the multiple endpoint nodes 510-5 to 510-8, may be configured in a different linear order. For example, the depicted order of L-R-GND-MIC corresponds to a U.S. configuration. In Europe, however, the linear order of the contacts is L-R-MIC-GND (not shown). As depicted, the analog audio plug 604 includes corresponding contacts in a U.S. configuration to mate with the receptacle: a left audio contact 606-2 ("L"), a right audio contact 606-1 ("R"), a ground contact 606-4 ("GND"), and a microphone contact 606-3 ("MIC"). In Europe, the linear positions of the ground and microphone contacts are swapped.

[0068] In example implementations, the cabling apparatus 102-2 couples the pins of the USB Type-C plug 406 to the multiple endpoint nodes 510-5 to 510-8 using the analog audio codec circuitry 608. For visual clarity, the analog audio codec circuitry 608 is depicted along the cable 106. However, the analog audio codec circuitry 608 may actually be disposed within a housing or a portion of the USB Type-C plug 406 or within a housing or a portion of the adapter socket 602, or be secured to one of the two. These two example physical locations are indicated with dashed lines for the analog audio codec circuitry at 608-1 and at 608-2. Thus, all or a significant portion of the cable 106 may be on one side or the other of the analog audio codec circuitry 608.
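For illustration only, the two analog plug contact orders described above (and the ground-sense classification suggested later in paragraph [0071]) can be sketched as follows. The dictionary layout and function name are hypothetical; a real adapter would perform this classification with sense circuitry rather than software tables.

```python
# Illustrative sketch: the U.S. (L-R-GND-MIC) and European (L-R-MIC-GND)
# analog plug contact orders, and a classification of an inserted plug
# by which contact position senses as ground.

PLUG_ORDERS = {
    "US": ("L", "R", "GND", "MIC"),  # order depicted in FIG. 6
    "EU": ("L", "R", "MIC", "GND"),  # ground and microphone swapped
}

def region_from_ground_position(grounded_index):
    """Classify the inserted plug by the contact position that reads as ground."""
    for region, order in PLUG_ORDERS.items():
        if order[grounded_index] == "GND":
            return region
    raise ValueError("no known plug order has GND at position %d" % grounded_index)
```

With this classification, the codec circuitry can route the MIC and GND endpoint nodes accordingly, as the later description of switching circuitry suggests.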
[0069] As shown on the left, the analog audio codec circuitry 608 is coupled to the positive audio signal wire 110-1, the negative audio signal wire 110-2, the power wire 110-3, and the ground wire 110-4. The analog audio codec circuitry 608 is provided power via the power wire 110-3 and the ground wire 110-4. As shown on the right, the analog audio codec circuitry 608 is coupled individually to each endpoint node 510 of the multiple endpoint nodes 510-5 to 510-8. To facilitate the adaptation between digital and analog signaling, the analog audio codec circuitry 608 includes at least the DAC 610 to provide analog audio data to one or more speakers. In example operation, the DAC 610 receives digital differential signaling for audio data 304 (of FIG. 3) on the positive audio signal wire 110-1 and the negative audio signal wire 110-2. The DAC 610 converts the digital audio data to analog audio data. The DAC 610 then provides the analog audio data to the adapter socket 602 via the "R" endpoint node 510-5 and the "L" endpoint node 510-6. If the analog audio plug 604 and the analog audio codec circuitry 608 both support microphone functionality, the analog audio codec circuitry 608 engages the ADC 612. Specifically, the ADC 612 converts analog microphone audio data received via the "MIC" endpoint node 510-7 to digital audio data and forwards the digital audio data to the DP and DN pins of the USB Type-C plug 406 via the positive audio signal wire 110-1 and the negative audio signal wire 110-2, respectively.

[0070] Meanwhile, the antenna 116 is capable of radiating EM signaling and propagating radio signals to the SBU1 pin. In any described implementation, the antenna 116 can be realized using a discrete chip antenna (not explicitly shown) that is coupled to an SBU pin instead of a wireline antenna. With certain cabling apparatus 102-2 implementations, employing a discrete chip antenna enables a cable 106 of an adapter apparatus to be shorter and/or non-flexible.
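For illustration only, the codec adaptation of paragraph [0069] can be summarized as a routing table: playback audio flows from the DP/DN wires through the DAC 610 to the "R"/"L" endpoint nodes, while microphone audio flows from the "MIC" endpoint node through the ADC 612 back to DP/DN. The function and direction names here are hypothetical.

```python
# Illustrative sketch of the data flow through the analog audio codec
# circuitry 608: which converter is engaged and which nodes it bridges.

def route_audio(direction):
    """Map an audio direction to (converter, source nodes, sink nodes)."""
    if direction == "playback":   # digital differential in -> analog speakers
        return ("DAC", ("DP", "DN"), ("R", "L"))
    if direction == "capture":    # analog microphone in -> digital differential
        return ("ADC", ("MIC",), ("DP", "DN"))
    raise ValueError("unknown direction: %r" % direction)
```

The table form makes explicit that both directions share the DP/DN wires, which is why the codec circuitry sits between the plug and the adapter socket.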
With these approaches that entail use of the analog audio codec circuitry 608, an electronic device 112 can therefore obtain radio signals radiated by the antenna 116 via the SBU1 pin while still providing analog signaling to or receiving analog signaling from an analog audio endpoint 108-2. In alternative implementations, the audio endpoint 108-2 comprises an analog headset with at least one speaker and optionally a microphone (not shown in FIG. 6) instead of an adapter socket 602.

[0071] To accommodate a European-style analog audio plug 604, the GND and MIC contacts can be swapped in the adapter socket 602 or appropriately handled by the analog audio codec circuitry 608 with a fixed circuitry approach. Alternatively, the analog audio codec circuitry 608 can accommodate both the U.S. and the European configurations by including switching circuitry that routes the endpoint nodes 510-7 and 510-8 depending on the inserted analog audio plug 604. Further, the analog audio codec circuitry 608 can include circuitry to sense ground or detect an impedance of a microphone to determine which contact is coupled to which functionality. The analog audio codec circuitry 608 then controls the switching circuitry responsive to the determination.

[0072] FIG. 7 illustrates an example electronic device 112 including a cabling connector 114, switching circuitry 174, and a connector interface controller 172. Here, the cabling connector 114 is implemented as a USB Type-C receptacle 702. The switching circuitry 174 is coupled between the USB Type-C receptacle 702 and the connector interface controller 172. The switching circuitry 174 enables the electronic device 112 to operate with an analog or a digital cabling apparatus that is connected to the USB Type-C receptacle 702 under the control of the connector interface controller 172.

[0073] In some implementations, the connector interface controller 172 includes a digital interface 704, an FM radio 706, and analog audio codec circuitry 708.
In other implementations, the FM radio 706 or the analog audio codec circuitry 708 may be separate from the connector interface controller 172. Each of these components includes at least one input/output node. For example, the analog audio codec circuitry 708 includes a left analog audio node ("HPL"), a right analog audio node ("HPR"), a ground-sense node ("GND_SENSE"), and a microphone node ("MIC"). The FM radio 706 (e.g., an FM transmitter, an FM receiver, or both) includes an antenna node ("ANT"), or antenna node 716-3. The digital interface 704 includes a digital negative node ("DN"), or negative interface node 716-2, and a digital positive node ("DP"), or positive interface node 716-1, such as for digital differential signaling. The digital interface 704 can be realized using appropriate logic, including USB interface circuitry, such as that which comports with a USB Type-C protocol. The connector interface controller 172 controls the switching circuitry 174 using at least one switch control signal 712. The analog audio codec circuitry 708 can be employed if an analog headset with a digital device connector 104 (e.g., a USB Type-C plug 406) may be connected to the cabling connector 114 (e.g., a USB Type-C receptacle 702). However, in some implementations, the analog audio codec circuitry 708 is omitted to save space or reduce costs. For example, if analog audio signaling is not to be directly or automatically supported by an electronic device 112 or at the associated connector, the analog audio codec circuitry 708 can be omitted. In some such implementations, one or more of the switches 710-1 to 710-4, or the entirety of the switching circuitry 174, may be omitted. For example, in one implementation, a switch that selectively couples one or both of the SBU pins of the USB Type-C receptacle 702 to either the FM radio 706 or to one or more other circuits is included and the other switches are omitted.

[0074] In the implementation illustrated in FIG.
7, the switch control signal 712 controls a position or switch state of one or more switches 710 of the switching circuitry 174. Thus, the switching circuitry 174 includes multiple switches 710-1 to 710-4. Although four switches 710 are explicitly shown, more or fewer switches may alternatively be implemented by the switching circuitry 174. Each switch 710 includes a single node side and a multi-node side. Each switch 710 can be controlled so as to cause a given switch 710 to enter a switch state that couples the node on the single node side to a selected node on the multi-node side. The switches 710-1 and 710-2 pertain to audio data in a digital form or in an analog form. The switches 710-3 and 710-4 pertain to a microphone signal, a ground reference, or an antenna signal.

[0075] The switch 710-1 includes a negative digital data node ("DN_IN"), an analog left node ("L_IN"), and a negative/left combination node ("DN_L"). The switch 710-2 includes a positive digital data node ("DP_IN"), an analog right node ("R_IN"), and a positive/right combination node ("DP_R"). The switch 710-3 includes a second sideband use node ("SBU2"), a first sideband use node ("SBU1"), and a sense-out node ("SENSE_OUT"). The switch 710-4 includes a second sideband use node ("SBU2"), a first sideband use node ("SBU1"), and a microphone-out node ("MIC_OUT").

[0076] The various nodes are coupled to another node or to a pin of the USB Type-C receptacle 702 as follows. For the switch 710-1: The negative digital data node ("DN_IN") is coupled to the digital negative node ("DN"). The analog left node ("L_IN") is coupled to the left analog audio node ("HPL"). The negative/left combination node ("DN_L") is coupled to the DN pin of the USB Type-C receptacle 702. For the switch 710-2: The positive digital data node ("DP_IN") is coupled to the digital positive node ("DP"). The analog right node ("R_IN") is coupled to the antenna node ("ANT") and to the right analog audio node ("HPR").
The positive/right combination node ("DP_R") is coupled to the DP pin of the USB Type-C receptacle 702.

[0077] For the switches 710-3 and 710-4: The second sideband use nodes ("SBU2") of both switches are coupled to the SBU2 pin of the USB Type-C receptacle 702. The first sideband use nodes ("SBU1") of both switches are coupled to the SBU1 pin of the USB Type-C receptacle 702. The microphone-out node ("MIC_OUT") of the switch 710-4 is coupled to the microphone node ("MIC"). The sense-out node ("SENSE_OUT") of the switch 710-3 is coupled to the ground-sense node ("GND_SENSE") and to the antenna node ("ANT"). Although not depicted, the nodes or pins may be coupled together via one or more other components. For example, the analog right node ("R_IN") and the sense-out node ("SENSE_OUT") may each be coupled to the antenna node ("ANT") via a respective capacitor. Also, the different nodes coupled to the analog audio codec circuitry 708 can be coupled thereto via an inductive element.

[0078] The nodes of the switching circuitry 174 that are coupled to the USB Type-C receptacle 702 comprise one or more terminal nodes 714. Examples of terminal nodes 714 include the negative/left combination node ("DN_L"), the positive/right combination node ("DP_R"), the two first sideband use nodes ("SBU1"), and the two second sideband use nodes ("SBU2"). For clarity, four terminal nodes are explicitly identified. With regard to the switch 710-3, the first sideband use node ("SBU1") comprises a first terminal node 714-1, and the second sideband use node ("SBU2") comprises a second terminal node 714-2. The switch 710-2 includes the positive/right combination node ("DP_R") that comprises a third terminal node 714-3, and the switch 710-1 includes the negative/left combination node ("DN_L") that comprises a fourth terminal node 714-4. If one or more of the multiple switches 710-1 to 710-4 are omitted, the corresponding nodes of the connector interface controller 172 can comprise terminal nodes.
For example, if the switches 710-1 and 710-2 are omitted, the positive interface node 716-1 and the negative interface node 716-2 can comprise terminal nodes. Similarly, the antenna node 716-3 can comprise a terminal node.

[0079] In example implementations, the connector interface controller 172 provides one or more switch control signals 712 to control the positions or switch states of one or more of the switches 710-1 to 710-4. If the cabling connector 114 is coupled to a digital cabling apparatus 102, the connector interface controller 172 uses the switch control signal 712 to place the switches 710-1 and 710-2 in a digital mode. For example, the switch 710-1 is positioned in a switch state to couple the negative/left combination node ("DN_L") to the negative digital data node ("DN_IN"). Also, the switch 710-2 is positioned in a switch state to couple the positive/right combination node ("DP_R") to the positive digital data node ("DP_IN"). On the other hand, if the cabling connector 114 is coupled to an analog cabling apparatus 102, the connector interface controller 172 uses the switch control signal 712 to place the switches 710-1 and 710-2 in an analog mode. For example, the switch 710-1 is positioned in a switch state to couple the negative/left combination node ("DN_L") to the analog left node ("L_IN"). Also, the switch 710-2 is positioned in a switch state to couple the positive/right combination node ("DP_R") to the analog right node ("R_IN"). Thus, the switches 710-1 and 710-2 are both in the digital mode or both in the analog mode.

[0080] In contrast, the switches 710-3 and 710-4 are in opposite positions with respect to the first and second sideband use nodes (SBU1 and SBU2). The connector interface controller 172 determines which of the SBU1 or SBU2 nodes is coupled to the antenna 116 of a connected cabling apparatus 102 (not shown in FIG. 7).
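For illustration only, the digital/analog mode selection of paragraph [0079] can be sketched as a small state table: one control decision places both switches 710-1 and 710-2 into matching states. The dictionary layout and function name are hypothetical; node names follow FIG. 7.

```python
# Illustrative sketch: switch states for the digital and analog modes of
# switches 710-1 and 710-2. Each entry pairs a combination node with the
# node it is coupled to in that mode.

SWITCH_MODES = {
    "digital": {"710-1": ("DN_L", "DN_IN"), "710-2": ("DP_R", "DP_IN")},
    "analog":  {"710-1": ("DN_L", "L_IN"),  "710-2": ("DP_R", "R_IN")},
}

def switch_states(mode):
    """Return the (combination node, selected node) pairing for each switch."""
    return SWITCH_MODES[mode]
```

The table makes the constraint of paragraph [0079] explicit: switches 710-1 and 710-2 are always configured together, both digital or both analog.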
The connector interface controller 172 can make this determination using any of multiple techniques that are applied, for example, to the first and second terminal nodes 714-1 and 714-2. First, the analog audio codec circuitry 708 can sense which terminal node (SBU1 or SBU2) is coupled to ground using ground-sense circuitry coupled to the ground-sense node ("GND_SENSE"). The other terminal node (SBU2 or SBU1, respectively) is therefore coupled to the antenna 116. Second, the FM radio 706 can search for and detect an FM station by demodulating a received radio wireless signal. The terminal node on which the FM signal is detected is coupled to the antenna 116. In some situations, the antenna 116 can be coupled to both the SBU1 pin and the SBU2 pin instead of one pin. In other situations, two separate antennas (not illustrated) can be coupled to the SBU1 pin and the SBU2 pin. Such antennas may be shielded from each other and/or may be placed on opposite sides of a set of other wires, for example one or more of the wires 110-1 to 110-3. The terminal node (SBU1 or SBU2) with the only FM reception or with the stronger FM reception can be selected as being coupled to the antenna 116. Third, the connector interface controller 172 can use a detected impedance to select between the SBU1 and SBU2 nodes. Some USB circuitry, for instance, can detect impedances.

[0081] If the connector interface controller 172 determines that the antenna 116 is coupled to the SBU1 pin of the USB Type-C receptacle 702, one switch control signal 712 causes the switch 710-3 to enter a switch state that couples the sense-out node ("SENSE_OUT"), and thus the antenna node ("ANT"), to the first sideband use node ("SBU1"). Another switch control signal 712 (e.g., a separate signal or an inverted version of the one switch control signal) therefore causes the switch 710-4 to enter a switch state that couples the microphone-out node ("MIC_OUT") to the second sideband use node ("SBU2").
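For illustration only, the three determination techniques above can be combined in priority order as sketched below. The probe callables, the ordering, and the assumption that the grounded node reads near zero ohms (so the higher-impedance node is the antenna) are hypothetical; a real controller would use whichever technique its hardware supports.

```python
# Illustrative sketch: determine which SBU node carries the antenna using
# (1) ground sensing, (2) FM reception strength, (3) impedance comparison.

def determine_antenna_node(ground_sense, fm_strength, impedance_probe):
    """Return "SBU1" or "SBU2", or None if undetermined.

    ground_sense(node)   -> True if the node reads as grounded
    fm_strength(node)    -> received FM signal strength (0 if none)
    impedance_probe(node)-> measured impedance in ohms
    """
    # Technique 1: the non-grounded SBU node is presumed the antenna node.
    for node, other in (("SBU1", "SBU2"), ("SBU2", "SBU1")):
        if ground_sense(other) and not ground_sense(node):
            return node
    # Technique 2: prefer the node with the only or stronger FM reception.
    s1, s2 = fm_strength("SBU1"), fm_strength("SBU2")
    if s1 != s2:
        return "SBU1" if s1 > s2 else "SBU2"
    # Technique 3: assume the grounded node reads near zero ohms, so the
    # higher-impedance node is presumed coupled to the antenna.
    z1, z2 = impedance_probe("SBU1"), impedance_probe("SBU2")
    if z1 != z2:
        return "SBU1" if z1 > z2 else "SBU2"
    return None
```

The priority ordering is one possible policy; the description presents the techniques as alternatives rather than a fixed sequence.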
On the other hand, the connector interface controller 172 may determine that the antenna 116 is coupled to the SBU2 pin of the USB Type-C receptacle 702. If so, one switch control signal 712 causes the switch 710-3 to enter a switch state that couples the sense-out node ("SENSE_OUT"), and thus the antenna node ("ANT"), to the second sideband use node ("SBU2"). Another switch control signal 712 therefore causes the switch 710-4 to enter a switch state that couples the microphone-out node ("MIC_OUT") to the first sideband use node ("SBU1"). Thus, regardless of which SBU pin is coupled to the antenna, the connector interface controller 172 can cause the switching circuitry 174 to route received radio wireless signals to the FM radio 706.

[0082] FIG. 8 is a flow diagram illustrating an example process 800 for antenna and audio unification that can be performed by an electronic device that is coupled to a cabling apparatus as described herein. The process 800 is described in the form of a set of blocks 802-810 that specify operations that can be performed. However, operations are not necessarily limited to the order shown in FIG. 8 or described herein, for the operations may be implemented in alternative orders or in fully or partially overlapping manners. Operations represented by the illustrated blocks of the process 800 may be performed by an electronic device (e.g., an electronic device 112 of FIGS. 1-1, 1-2, or 7 or an electronic device 902 of FIG. 9) that is coupled to a cabling apparatus 102 as described herein. More specifically, the operations of the process 800 may be performed by a connector interface controller 172 or switching circuitry 174 as shown in FIG. 7.

[0083] At block 802, a first terminal node of multiple terminal nodes is determined to be coupled to an antenna of a cabling apparatus, with the multiple terminal nodes coupled to a cabling connector of a mobile device that is connected to the cabling apparatus.
For example, a connector interface controller 172 of an electronic device 112 (e.g., a mobile device such as a smartphone, tablet, or smartwatch) can determine a first terminal node of multiple terminal nodes 714 that is coupled to an antenna 116 of a cabling apparatus 102, with the multiple terminal nodes coupled to a cabling connector 114 that is connected to the cabling apparatus 102. To do so, the connector interface controller 172 can analyze at least a first terminal node 714 (e.g., a terminal node 714-1 and a terminal node 714-2) of multiple terminal nodes of switching circuitry 174 that is coupled to the cabling connector 114. The analysis may include, for example, sensing a ground potential, determining that a radio signal exists, or detecting an impedance level. During the analysis, the cabling connector 114 is coupled to a device connector 104 of the cabling apparatus 102. Based on the analysis, a first terminal node 714 of the multiple terminal nodes is identified as being coupled to the antenna 116 of the cabling apparatus 102. For instance, if a fourth terminal node 714 (e.g., the terminal node 714-2) of multiple terminal nodes is determined to be coupled to ground, or if the first terminal node (e.g., the terminal node 714-1) is determined to provide a radio signal based on the analysis, the connector interface controller 172 identifies the first terminal node (e.g., the terminal node 714-1) as being coupled to the antenna 116 of a cable 106 of the cabling apparatus 102.

[0084] At block 804, a wireless signal is received from the antenna via the first terminal node. For example, a radio (e.g., the FM radio 706) can receive a radio wireless signal 118-1 from the antenna 116 via the first terminal node (e.g., the terminal node 714-1) that is coupled to a first pin of the cabling connector 114.
The cabling connector 114 can be implemented in accordance with a USB Type-C protocol as a USB Type-C receptacle 702 such that the first pin corresponds to a first sideband use (SBU1) pin. To obtain the wireless signal, the connector interface controller 172 can provide a switch control signal 712 to a switch 710-3 of the switching circuitry 174 as part of the determining. The switch control signal 712 causes the switch 710-3 to enter a switch state that couples the first sideband use node ("SBU1") thereof to a sense-out node ("SENSE_OUT") thereof. The sense-out node ("SENSE_OUT") is coupled to an antenna node ("ANT") of the radio.

[0085] At block 806, the wireless signal is converted to digital audio data. For example, the radio (e.g., the FM radio 706) or the digital interface 704 (e.g., USB circuitry) can convert a demodulated radio wireless signal 118-1 to a digital version of audio data 304. The radio can provide the audio data 304 to the digital interface 704 of the connector interface controller 172. The digital interface 704 can process the audio data 304, such as to prepare the digital audio data 304 for digital transmission over a differential propagation medium as a differential digital signal.

[0086] At block 808, the digital audio data is transmitted over multiple audio wires of the cabling apparatus via a second terminal node and a third terminal node of the multiple terminal nodes. For example, the connector interface controller 172 can use the cabling connector 114 to transmit digital audio data 304 over multiple audio wires 302 of the cabling apparatus 102 via a second terminal node and a third terminal node (e.g., the terminal nodes 714-3 and 714-4) of the multiple terminal nodes 714. For instance, the differential digital signal may be propagated over a positive audio signal wire 110-1 and a negative audio signal wire 110-2 of the multiple audio wires 302 via the second and third terminal nodes.
To do so, the connector interface controller 172 provides at least one switch control signal 712 to cause the switches 710-1 and 710-2 to be in a digital mode. For example, the switch 710-1 can enter a switch state that couples the negative/left combination node ("DN_L") to the negative digital data node ("DN_IN"), and the switch 710-2 can enter a switch state that couples the positive/right combination node ("DP_R") to the positive digital data node ("DP_IN"). The negative digital data node ("DN_IN") and the positive digital data node ("DP_IN") are both coupled to the digital interface 704 to receive the audio data 304 that was demodulated by the radio.

[0087] FIG. 9 illustrates an example electronic device 902 that includes a cabling connector 114, switching circuitry 174, a connector interface controller 172, and a radio 920. As shown, the electronic device 902 also includes an antenna 904, a transceiver 906, a user input/output (I/O) interface 908, and an integrated circuit 910. Illustrated examples of the integrated circuit 910, or cores thereof, include a microprocessor 912, a graphics processing unit (GPU) 914, a memory array 916, and a modem 918.

[0088] In one or more implementations, antenna and audio cabling unification techniques as described herein can be implemented by the electronic device 902, which is an example of the electronic device 112 of FIGS. 1-1 and 1-2. The cabling connector 114 is coupled to the connector interface controller 172 via the switching circuitry 174. The connector interface controller 172 is coupled to the radio 920. In operation, the connector interface controller 172 enables other components, such as the radio 920, to interface with the cabling connector 114 using the switching circuitry 174. In some implementations, the connector interface controller 172 can be realized at least partially as a USB controller.
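For illustration only, the process 800 described with reference to FIG. 8 (blocks 802-808) can be sketched end to end. Every callable here is a hypothetical stand-in for the hardware of FIG. 7; the sketch only captures the sequencing of the blocks.

```python
# Illustrative sketch of process 800: (802) determine the antenna terminal
# node, (804) receive a wireless signal via it, (806) convert the signal to
# digital audio data, (808) transmit that data over the audio terminal nodes.

def process_800(terminal_nodes, is_grounded, receive_fm, to_digital, transmit):
    # Block 802: identify the terminal node coupled to the antenna, e.g.,
    # by ground sensing (the non-grounded SBU terminal node).
    antenna_node = next(n for n in terminal_nodes if not is_grounded(n))
    # Block 804: receive the wireless signal via that terminal node.
    wireless = receive_fm(antenna_node)
    # Block 806: convert the demodulated signal to digital audio data.
    audio = to_digital(wireless)
    # Block 808: transmit the digital audio data differentially via the
    # second and third terminal nodes (714-3 and 714-4 in FIG. 7).
    transmit(audio, ("714-3", "714-4"))
    return antenna_node, audio
```

In hardware these blocks are realized by the connector interface controller 172, the FM radio 706, and the switching circuitry 174 rather than by software calls; the sketch mirrors the flow-diagram ordering only.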
In some aspects, the radio 920 is configured to at least receive and process wireless signals in the FM broadcast radio band and can be implemented in such situations with the FM radio 706 (of FIG. 7). The connector interface controller 172 may be configured to perform or cause the device to implement the functions related to determining whether an antenna is coupled to the cabling connector 114 as described above with reference to FIG. 7 or those related to using an antenna that is part of a cabling apparatus 102 for radio reception.[0089] The electronic device 902 can be a mobile or battery-powered device or a fixed device that is designed to be powered by an electrical grid. Examples of the electronic device 902 include a server computer, a network switch or router, a blade of a data center, a personal computer, a desktop computer, a notebook or laptop computer, a tablet computer, a smart phone, an entertainment appliance, or a wearable computing device such as a smartwatch, intelligent glasses, or an article of clothing. An electronic device 902 can also be a device, or a portion thereof, having embedded electronics. Examples of the electronic device 902 with embedded electronics include a passenger vehicle, industrial equipment, a refrigerator or other home appliance, a drone or other unmanned aerial vehicle (UAV), or a power tool.[0090] For an electronic device with a wireless capability, the electronic device 902 includes an antenna 904 that is coupled to a transceiver 906 to enable reception or transmission of one or more wireless signals, such as those in a cellular or Wi-Fi network band. The antenna 904 is, however, typically too small to facilitate satisfactory FM radio signal reception. The integrated circuit 910 may be coupled to the transceiver 906 to enable the integrated circuit 910 to have access to received wireless signals or to provide wireless signals for transmission via the antenna 904. 
The electronic device 902 as shown also includes at least one user I/O interface 908. Examples of the user I/O interface 908 include a keyboard, a mouse, a microphone, a touch-sensitive screen, a camera, an accelerometer, a haptic mechanism, a speaker, a display screen, a fingerprint or other biometric sensor, or a projector.

[0091] The integrated circuit 910 may comprise, for example, one or more instances of a microprocessor 912, a GPU 914, a memory array 916, a modem 918, and so forth. The microprocessor 912 may function as a central processing unit (CPU) or other general-purpose processor. Some microprocessors include different parts, such as multiple processing cores, that may be individually powered on or off. The GPU 914 may be especially adapted to process visual-related data for display. The memory array 916 stores data for the microprocessor 912 or the GPU 914. Example types of memory for the memory array 916 include random access memory (RAM), such as dynamic RAM (DRAM) or static RAM (SRAM); flash memory; and so forth. The modem 918 demodulates a signal to extract encoded information or modulates a signal to encode information into the signal. If there is no information to decode from an inbound communication or to encode for an outbound communication, the modem 918 may be idled to reduce power consumption. The integrated circuit 910 may include additional or alternative parts than those that are shown, such as an I/O interface, a sensor such as an accelerometer, a transceiver or another part of a receive or transmit chain, a customized or hard-coded processor such as an application-specific integrated circuit (ASIC), at least part of the connector interface controller 172, and so forth.

[0092] The integrated circuit 910 may also comprise a system on a chip (SOC).
An SOC may integrate a sufficient number of different types of components to enable the SOC to provide computational functionality as a notebook computer, a mobile phone, an IoT device, or another electronic apparatus using one chip, at least primarily. Components of an SOC, or an integrated circuit 910 generally, may be termed cores or circuit blocks. Examples of cores or circuit blocks include, in addition to those that are illustrated in FIG. 9, a voltage or power regulator, a main memory or cache memory block, a memory controller, a general-purpose processor, a cryptographic processor, a video or image processor, a vector processor, a radio, an interface or communications subsystem, a wireless controller, a display controller, an audio codec, digital interface logic, the radio 920, the connector interface controller 172, or the switching circuitry 174. Any of these cores or circuit blocks, such as a processing or GPU core, may further include multiple internal cores or circuit blocks.

[0093] In some implementations, the connector interface controller 172 can be realized using a processing unit and processor-executable instructions that are stored on non-transitory processor-accessible media. Examples of a processing unit include a general-purpose processor, an application-specific integrated circuit (ASIC), a microprocessor, a digital signal processor (DSP), hard-coded discrete logic, or a combination thereof. The processor-accessible media can include memory to retain the processor-executable instructions for software, firmware, hardware modules, and so forth. Memory may be volatile or nonvolatile memory, such as random access memory (RAM), read only memory (ROM), flash memory, static RAM (SRAM), or a combination thereof. Additionally or alternatively, a given controller can be realized using analog circuitry, such as resistors and comparators; digital circuitry, such as transistors and flip-flops; combinations thereof; and so forth.
The processor-executable instructions, or other forms of circuitry or controller instantiations, can be implemented in accordance with the techniques and apparatuses described herein.

[0094] Unless context dictates otherwise, use herein of the word "or" may be considered use of an "inclusive or," or a term that permits inclusion or application of one or more items that are linked by the word "or" (e.g., a phrase "A or B" may be interpreted as permitting just "A," as permitting just "B," or as permitting both "A" and "B"). Further, items represented in the accompanying figures and terms discussed herein may be indicative of one or more items or terms, and thus reference may be made interchangeably to single or plural forms of the items and terms in this written description. Finally, although subject matter has been described in language specific to structural features or methodological operations, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or operations described above, including not necessarily being limited to the organizations in which features are arranged or the orders in which operations are performed.
Embodiments are described for a pattern-based control system that learns and applies device usage patterns for identifying and disabling devices exhibiting abnormal usage patterns. The system can learn a user's normal usage pattern or can learn abnormal usage patterns, such as a typical usage pattern for a stolen device. This learning can include human or algorithmic identification of particular sets of usage conditions (e.g., locations, changes in settings, personal data access events, application events, IMU data, etc.) or training a machine learning model to identify usage condition combinations or sequences. Constraints (e.g., particular times or locations) can specify circumstances where abnormal pattern matching is enabled or disabled. Upon identifying an abnormal usage pattern, the system can disable the device, e.g., by permanently destroying a physical component, semi-permanently disabling a component, or through a software lock or data encryption.
CLAIMS

I/We claim:

1. A method comprising: receiving, at a memory of a mobile device, data representative of a machine learning model trained to identify a usage pattern for the mobile device; evaluating a set of one or more constraints that specify one or more circumstances where abnormal pattern matching is disabled and, based on the evaluating, determining that no constraints, from the set of constraints, are active; in response to the determination that no constraints are active, determining, by applying the machine learning model to current usage conditions for the mobile device to identify the usage pattern for the mobile device, that an abnormal usage pattern is occurring; soliciting a disable override from a user of the mobile device; and in response to the determining that the abnormal usage pattern is occurring and that no disable override was received, permanently disabling one or more physical components of the mobile device.

2. The method of claim 1, wherein the machine learning model is trained, based on observed activities of the user, to identify a normal usage pattern for the user; and wherein determining that the abnormal usage pattern is occurring comprises determining that an output from the machine learning model indicates the current usage conditions are below a threshold match for the normal usage pattern for the user.

3. The method of claim 1, wherein the machine learning model is trained to identify the abnormal usage pattern corresponding to usage conditions for stolen devices; and wherein determining that the abnormal usage pattern is occurring comprises determining that an output from the machine learning model indicates the current usage conditions are above a threshold match for the usage conditions for stolen devices.

4.
The method of claim 1, wherein the current usage conditions comprise values for three or more of: identifications of locations; changes in settings; personal data access events; application usage amounts or sequences; IMU device movement patterns; a SIM card change event; purchase events; or any combination thereof.

5. The method of claim 1, wherein the set of one or more constraints comprise one or more of: safe geographic zones where abnormal pattern matching is disabled; safe times of day where abnormal pattern matching is disabled; safe dates where abnormal pattern matching is disabled; or any combination thereof.

6. The method of claim 1, wherein the set of one or more constraints comprise unsafe dates where abnormal pattern matching is enabled; and wherein at least some of the unsafe dates are based on events automatically identified on a calendar of the user.

7. The method of claim 1, wherein the set of one or more constraints comprise a manual activation of abnormal pattern matching; and wherein the manual activation of abnormal pattern matching was in response to a prompt provided to the user in response to automatic identification of conditions for which a heightened security risk exists.

8.
The method of claim 1, wherein the machine learning model is a first machine learning model trained to identify usage conditions corresponding to a stolen device; wherein the method further comprises obtaining a second machine learning model trained to identify usage conditions corresponding to normal use by the user; and wherein the determining that the abnormal usage pattern is occurring comprises: applying the first machine learning model to the current usage conditions for the mobile device to generate a first value estimating a first likelihood that the current usage conditions correspond to the mobile device being stolen; applying the second machine learning model to the current usage conditions for the mobile device to generate a second value estimating a second likelihood that the current usage conditions correspond to normal device actions for the user; combining results that are based on the first and second values into a combined abnormal usage prediction; and determining that the combined abnormal usage prediction is above an abnormal usage pattern threshold.

9. The method of claim 8, wherein the method further comprises obtaining a third machine learning model trained to identify, based on providing the current usage conditions to the third machine learning model, weights between the results that are based on the first and second values; and wherein the combining of the results comprises: providing the current usage conditions to the third machine learning model to obtain the weights; applying the weights to the results that are based on the first and second values; and combining the weighted results into the combined abnormal usage prediction.

10.
The method of claim 1, wherein at least the determining that the abnormal usage pattern is occurring and the permanently disabling the one or more physical components of the mobile device are controlled by a second processing component separate from a first processing component that executes an operating system of the mobile device.

11. The method of claim 1, wherein soliciting the disable override comprises sending a message to a computing system, other than the mobile device, and setting a timer for receiving the override; and wherein the permanently disabling the one or more physical components of the mobile device is performed in response to expiration of the timer.

12. The method of claim 1, wherein applying the machine learning model includes receiving a confidence value from the machine learning model specifying a likelihood that the abnormal usage pattern is occurring; and wherein permanently disabling the one or more physical components of the mobile device is in response to comparing the confidence value to a threshold over which the mobile device is permanently disabled and under which a non-permanent disabling of the mobile device is performed.

13. A computing system for disabling devices exhibiting an abnormal usage pattern, the computing system comprising: one or more processors; and one or more memories storing instructions that, when executed by the computing system, cause the one or more processors to perform operations comprising: receiving data representative of a machine learning model trained to identify the abnormal usage pattern; determining, by applying the machine learning model to current usage conditions for a device, that the abnormal usage pattern is occurring; soliciting a disable override from a user associated with the device; and in response to the determining that the abnormal usage pattern is occurring and that no disable override was received, disabling the device.

14.
The computing system of claim 13, wherein the machine learning model is trained to identify the abnormal usage pattern corresponding to usage conditions for stolen devices; and wherein determining that the abnormal usage pattern is occurring comprises determining that an output from the machine learning model indicates the current usage conditions are above a threshold match for the usage conditions for stolen devices.

15. The computing system of claim 13, wherein the current usage conditions comprise values for two or more of: identifications of locations; changes in settings; personal data access events; application usage amounts or sequences; IMU device movement patterns; a SIM card change event; or any combination thereof.

16. The computing system of claim 13, wherein the operations further comprise: evaluating a set of one or more constraints that specify one or more circumstances where abnormal pattern matching is disabled; based on the evaluating, determining that no constraints, from the set of constraints, are active and, in response, enabling abnormal usage pattern matching.

17. The computing system of claim 16, wherein the set of one or more constraints comprise one or more of: safe geographic zones where abnormal pattern matching is disabled; safe times of day where abnormal pattern matching is disabled; safe dates where abnormal pattern matching is disabled; or any combination thereof.

18. The computing system of claim 16, wherein the set of one or more constraints comprise a manual activation of abnormal pattern matching; and wherein the manual activation of abnormal pattern matching was in response to a prompt provided to the user in response to automatic identification of conditions for which a heightened security risk exists.

19.
A non-transitory computer-readable storage medium storing instructions that, when executed by a computing system, cause the computing system to perform operations for disabling devices, the operations comprising: obtaining a machine learning model trained to identify a usage pattern; determining that no constraints, from a set of one or more constraints specifying circumstances where abnormal pattern matching is disabled, are active; in response to the determination that none of the constraints are active, determining, by applying the machine learning model to current usage conditions for a device to identify the usage pattern for the device, that an abnormal usage pattern is occurring; and in response to the determining that the abnormal usage pattern is occurring, disabling the device.

20. The computer-readable storage medium of claim 19, wherein applying the machine learning model includes receiving a confidence value from the machine learning model specifying a likelihood that the abnormal usage pattern is occurring; and wherein disabling the device comprises permanently disabling the device in response to comparing the confidence value to a threshold over which the device is permanently disabled and under which a non-permanent disabling of the device is performed.
DEVICE DEACTIVATION BASED ON BEHAVIOR PATTERNS

TECHNICAL FIELD

[0001] The present disclosure is directed to identifying individual or categorical device usage patterns for automatically disabling devices.

BACKGROUND

[0002] It is estimated that over 70 million mobile devices are stolen each year, and less than 7% of these are recovered. In addition, devices are becoming ever more valuable, often costing thousands of dollars to replace. However, the cost of a stolen device often goes far beyond the monetary value of the device itself. The theft of personal information, financial information, credentials to other systems, etc. can far outweigh the cost of replacing stolen hardware.

[0003] There are numerous systems aimed at combating device theft. For example, many devices employ encryption technologies, authentication procedures, and biometrics to protect device data. However, the high value of devices and recoverable data available to device thieves results in the number of stolen devices rising each year.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Figure 1 is a block diagram illustrating an overview of devices on which some implementations can operate.

[0005] Figure 2 is a block diagram illustrating an overview of an environment in which some implementations can operate.

[0006] Figure 3 is a block diagram illustrating components which, in some implementations, can be used in a system employing the disclosed technology.

[0007] Figure 4 is a flow diagram illustrating a process used in some implementations for learning and applying device usage patterns to disable likely stolen devices.
[0008] Figure 5 is a conceptual diagram illustrating an example of entities and operations for learning an abnormal device usage pattern and disabling the device when the abnormal device usage pattern is detected.

[0009] The techniques introduced here may be better understood by referring to the following Detailed Description in conjunction with the accompanying drawings, in which like reference numerals indicate identical or functionally similar elements.

DETAILED DESCRIPTION

[0010] Embodiments are described for a pattern-based control system that learns and applies device usage patterns for identifying and disabling devices exhibiting abnormal usage patterns, such as patterns for stolen devices. In various implementations, disabling the device can include permanently destroying a physical component of the device, semi-permanently disabling a component of the device (e.g., requiring third-party override to re-enable), or temporarily disabling the device (e.g., through a software lock or encryption that can be reversed with a password or other authentication procedure).

[0011] In some implementations, the pattern-based control system can learn a user's normal usage pattern and disable the device when a usage pattern is identified that is a threshold amount different from the normal usage pattern. In other implementations, the pattern-based control system can learn abnormal usage patterns, such as a typical usage pattern for a stolen device and can disable the device when such an abnormal usage pattern is found.

[0012] In some cases, the pattern-based control system can learn a usage pattern as a set of values (e.g., binary values for whether particular activities have occurred, particular values for options in categories of activities, value ranges, etc.) for usage conditions. In other implementations, the pattern-based control system can use usage condition values as input to a machine learning model that can produce a usage pattern match estimation.
A machine learning model can be trained to identify "stolen device" usage patterns based on usage patterns seen in other stolen devices or based on activities known to commonly occur when a device is stolen (such as making multiple purchases, mining personal data, and swapping a SIM card). A machine learning model can also be trained to learn a particular user's typical usage pattern, e.g., by monitoring device usage and applying the activities as training data to the model, assuming a non-stolen device until the model has enough use examples to have built up the ability to identify the typical usage pattern.

[0013] A "machine learning model" or "model," as used herein, refers to a construct that is trained using training data to make predictions or provide probabilities for new data items, whether or not the new data items were included in the training data. For example, training data can include items with various parameters and an assigned classification. A new data item can have parameters that a model can use to assign a classification to the new data item. As another example, a model can be a probability distribution resulting from the analysis of training data. Examples of models include: neural networks, deep neural networks, support vector machines, decision trees, Parzen windows, Bayes, probability distributions, and others. Models can be configured for various situations, data types, sources, and output formats.

[0014] In various implementations, the pattern-based control system can monitor a variety of current usage conditions. The pattern-based control system can determine a binary value for whether particular activities are occurring or identify individual values or value ranges for the circumstance of an activity.
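As one hedged illustration of the model-based match estimation in paragraphs [0012]-[0013], a usage pattern match could be produced by a simple logistic-regression classifier over usage-condition feature vectors. The feature ordering, toy training data, and hyperparameters below are invented for this sketch; the description does not commit to any particular model family.

```python
import math

# Illustrative only: a hand-rolled logistic-regression classifier over
# usage-condition feature vectors, where label 1 means "matches the
# stolen-device usage pattern". Hypothetical feature order:
# [purchase_events, personal_data_accesses, sim_card_changed]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.1, epochs=500):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Usage pattern match estimation in [0, 1]."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy training data: stolen devices show many purchases, heavy personal
# data mining, and a SIM swap; normal use shows little of each.
X = [[5, 8, 1], [4, 6, 1], [0, 1, 0], [1, 0, 0]]
y = [1, 1, 0, 0]
w, b = train(X, y)
```

The same structure would apply to learning a particular user's normal pattern, with label 1 meaning "matches this user's typical usage" instead.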
Such usage conditions can include, for example, values for identifications of locations, changes in settings, data access events (e.g., passwords, personal data), application usage amounts or sequences, bandwidth usage events or patterns, device movement patterns (e.g., inertial measurement unit or "IMU" data), a SIM change event, purchase events, identifications of a current user facial pattern, etc.

[0015] In some implementations, a user can apply constraints on disabling a device, such as never disabling the device when at a particular known location, enabling abnormal usage pattern matching in certain circumstances (e.g., when the user indicates she is traveling), disabling checking for abnormal usage patterns in certain circumstances (e.g., if the device is lent to another user), enabling or disabling usage pattern matching at certain times of day and/or for certain dates, etc.

[0016] In some implementations, the pattern-based control system can provide an option for the user to override device disabling once an abnormal usage pattern is identified. For example, the pattern-based control system can provide one or more alerts through the device or by sending notifications to other accounts (such as an email account or texting a backup device). The alert may require an authentication, following which the pattern-based control system can cancel disabling of the device. In some implementations, canceling disabling of the device can cause the pattern-based control system to change the matched abnormal usage pattern or retrain the machine learning model to not identify the usage pattern that caused the notification.

[0017] Upon identifying an abnormal usage pattern (and if no override is provided), the pattern-based control system can disable the device. In some implementations, disabling the device includes permanently destroying a physical component of the device. For example, the pattern-based control system can overload fuses, corrupt memory, overheat a processor, etc.
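The user-applied constraints of paragraph [0015] could be evaluated along the following lines; the safe-zone coordinates, radius, time window, and function names are purely illustrative placeholders, not values from the description.

```python
import math
from datetime import datetime, time

# Hypothetical constraint settings (e.g., a "home" safe zone and an
# overnight charging window where pattern matching is disabled).
SAFE_ZONES = [((37.422, -122.084), 0.5)]  # ((lat, lon), radius in km)
SAFE_HOURS = (time(1, 0), time(5, 0))

def distance_km(a, b):
    # Crude equirectangular approximation; adequate for small radii.
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def matching_enabled(location, now: datetime) -> bool:
    """Abnormal pattern matching runs only when no constraint is active."""
    if any(distance_km(location, center) <= r for center, r in SAFE_ZONES):
        return False  # inside a safe geographic zone
    if SAFE_HOURS[0] <= now.time() <= SAFE_HOURS[1]:
        return False  # within a safe time of day
    return True
```

A real implementation would also cover manual activation/deactivation and calendar-derived unsafe dates, which this sketch omits.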
In other implementations, disabling the device includes semi-permanently disabling a component of the device such as turning off or disabling a device (e.g., memory) controller, disconnecting hardware components, etc. Such semi-permanent disabling may require a third party (such as a manufacturer, network provider, etc.) to re-enable the device. In yet other implementations, the disabling can be non-permanent, e.g., through a software lock or encryption, which can be undone through an authentication procedure. In some implementations, the level of device disabling can be based on a severity of abnormal activities or a confidence value of the match to the abnormal usage. For example, if the pattern-based control system has a high confidence value for the device being stolen, it can permanently disable the device, but with a lower threshold confidence value, the pattern-based control system may only semi-permanently disable the device.

[0018] There are existing systems aimed at reducing the security exposure due to device theft. These systems either rely on pre-established security policies, such as requiring strong passwords, data encryption, or local sandboxes for sensitive applications, or they rely on remote control, such as remote credential removal or memory wiping upon a notification from a device owner that the device was stolen. However, these existing systems suffer from multiple drawbacks. Pre-established security policies, to remain secure, generally require users to comply with guidelines such as not reusing passwords and not entering passwords where they can be overseen. Remote control systems require a user to realize their device has been stolen and contact (without having access to their stolen device) a remote device administrator to perform the data removal.
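The confidence-tiered disabling described in paragraph [0017] (permanent for high confidence, semi-permanent or software-based for lower confidence) can be sketched as a threshold ladder. The numeric thresholds and action names are assumptions for illustration; the description leaves them unspecified.

```python
# Threshold values and action names below are illustrative assumptions.
PERMANENT_THRESHOLD = 0.95
SEMI_PERMANENT_THRESHOLD = 0.80
SOFT_LOCK_THRESHOLD = 0.60

def disable_action(confidence: float, override_received: bool = False) -> str:
    """Map the model's confidence that the device is stolen to a
    disabling tier, honoring a user's disable override."""
    if override_received:
        return "none"                # user authenticated; cancel disabling
    if confidence >= PERMANENT_THRESHOLD:
        return "destroy_component"   # e.g., overload fuses, corrupt memory
    if confidence >= SEMI_PERMANENT_THRESHOLD:
        return "disable_controller"  # third party required to re-enable
    if confidence >= SOFT_LOCK_THRESHOLD:
        return "software_lock"       # reversible via authentication
    return "none"
```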
In addition to these security issues, both types of existing systems fail to adequately disincentivize device theft, as the devices retain significant value following typical actions a thief can take such as a simple factory reset, replacing a storage system, and/or replacing a SIM card.

[0019] The disclosed technology is expected to overcome these drawbacks of existing systems. By providing automated device disabling based on learned usage patterns, security is improved through not relying on either user guideline compliance or prompt administrator notification of theft. Instead, device disabling occurs automatically when an abnormal usage pattern is detected. In addition, by providing an automated system for disabling a device in a manner which a thief is unable to overcome (e.g., by damaging device hardware or disabling hardware controllers - actions which performing a factory reset or replacing a storage system do not overcome), devices become much less attractive to steal and thus device theft may decrease.

[0020] Several implementations are discussed below in more detail in reference to the figures. Figure 1 is a block diagram illustrating an overview of devices on which some implementations of the disclosed technology can operate. The devices can comprise hardware components of a device 100 that learn device usage patterns. The same device 100 or another version of device 100 can detect abnormal usage pattern matches and, unless a constraint is active or an override occurs, disable the device 100. Device 100 can include one or more input devices 120 that provide input to the processor(s) 110 (e.g., CPU(s), GPU(s), HPU(s), etc.), notifying it of actions. The actions can be mediated by a hardware controller that interprets the signals received from the input device and communicates the information to the processors 110 using a communication protocol.
Input devices 120 include, for example, a mouse, a keyboard, a touchscreen, an infrared sensor, a touchpad, a wearable input device, a camera- or image-based input device, a microphone, or other user input devices.

[0021] Processors 110 can be a single processing unit or multiple processing units in a device or distributed across multiple devices. Processors 110 can be coupled to other hardware devices, for example, with the use of a bus, such as a PCI bus or SCSI bus. The processors 110 can communicate with a hardware controller for devices, such as for a display 130. Display 130 can be used to display text and graphics. In some implementations, display 130 provides graphical and textual visual feedback to a user. In some implementations, display 130 includes the input device as part of the display, such as when the input device is a touchscreen or is equipped with an eye direction monitoring system. In some implementations, the display is separate from the input device. Examples of display devices are: an LCD display screen, an LED display screen, a projected, holographic, or augmented reality display (such as a heads-up display device or a head-mounted device), and so on. Other I/O devices 140 can also be coupled to the processor, such as a network card, video card, audio card, USB, firewire or other external device, camera, printer, speakers, CD-ROM drive, DVD drive, disk drive, or Blu-Ray device.

[0022] In some implementations, the device 100 also includes a communication device capable of communicating wirelessly or wire-based with a network node. The communication device can communicate with another device or a server through a network using, for example, TCP/IP protocols. Device 100 can utilize the communication device to distribute operations across multiple network devices.

[0023] The processors 110 can have access to a memory 150 in a device or distributed across multiple devices.
A memory includes one or more of various hardware devices for volatile and non-volatile storage and can include both read-only and writable memory. For example, a memory can comprise random access memory (RAM), various caches, CPU registers, read-only memory (ROM), and writable non-volatile memory, such as flash memory, hard drives, floppy disks, CDs, DVDs, magnetic storage devices, tape drives, and so forth. A memory is not a propagating signal divorced from underlying hardware; a memory is thus non-transitory. Memory 150 can include program memory 160 that stores programs and software, such as an operating system 162, pattern-based control system 164, and other application programs 166. Memory 150 can also include data memory 170, e.g., device usage patterns, constraint settings, override notification templates, configuration data, settings, user options or preferences, etc., which can be provided to the program memory 160 or any element of the device 100.

[0024] Some implementations can be operational with numerous other computing system environments or configurations. Examples of computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, personal computers, server computers, handheld or laptop devices, cellular telephones, wearable electronics, gaming consoles, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, or the like.

[0025] Figure 2 is a block diagram illustrating an overview of an environment 200 in which some implementations of the disclosed technology can operate. Environment 200 can include one or more client computing devices 205A-D, examples of which can include device 100.
Client computing devices 205 can operate in a networked environment using logical connections through network 230 to one or more remote computers, such as a server computing device.[0026] In some implementations, server 210 can be an edge server which receives client requests and coordinates fulfillment of those requests through other servers, such as servers 220A-C. Server computing devices 210 and 220 can comprise computing systems, such as device 100. Though each server computing device 210 and 220 is displayed logically as a single server, server computing devices can each be a distributed computing environment encompassing multiple computing devices located at the same or at geographically disparate physical locations. In some implementations, each server 220 corresponds to a group of servers.[0027] Client computing devices 205 and server computing devices 210 and 220 can each act as a server or client to other server/client devices. Server 210 can connect to a database 215. Servers 220A-C can each connect to a corresponding database 225A-C. As discussed above, each server 220 can correspond to a group of servers, and each of these servers can share a database or can have their own database. Databases 215 and 225 can warehouse (e.g. store) information such as typical uses of devices following a theft, normal usage patterns, constraint settings, etc. Though databases 215 and 225 are displayed logically as single units, databases 215 and 225 can each be a distributed computing environment encompassing multiple computing devices, can be located within their corresponding server, or can be located at the same or at geographically disparate physical locations.[0028] Network 230 can be a local area network (LAN) or a wide area network (WAN), but can also be other wired or wireless networks. Network 230 may be the Internet or some other public or private network. 
Client computing devices 205 can be connected to network 230 through a network interface, such as by wired or wireless communication. While the connections between server 210 and servers 220 are shown as separate connections, these connections can be any kind of local, wide area, wired, or wireless network, including network 230 or a separate public or private network.[0029] Figure 3 is a block diagram illustrating components 300 which, in some implementations, can be used in a system employing the disclosed technology. The components 300 include hardware 302, general software 320, and specialized components 340. As discussed above, a system implementing the disclosed technology can use various hardware including processing units 304 (e.g. CPUs, GPUs, APUs, etc.), main memory 306, storage memory 308 (local storage or as an interface to remote storage, such as storage 215 or 225), and input and output devices 310. In various implementations, storage memory 308 can be one or more of: local devices, interfaces to remote storage devices, or combinations thereof. For example, storage memory 308 can be a set of one or more hard drives (e.g. a redundant array of independent disks (RAID)) accessible through a system bus or can be a cloud storage provider or other network storage accessible via one or more communications networks (e.g. a network accessible storage (NAS) device, such as storage 215 or storage provided through another server 220). Components 300 can be implemented in a client computing device such as client computing devices 205 or on a server computing device, such as server computing device 210 or 220.[0030] General software 320 can include various applications including an operating system 322, local programs 324, and a basic input output system (BIOS) 326. Specialized components 340 can be subcomponents of a general software application 320, such as local programs 324. 
Specialized components 340 can include constraint controller 344, usage pattern learning module 346, usage pattern monitor 348, override module 350, disabling controller 352, and components which can be used for providing user interfaces, transferring data, and controlling the specialized components, such as interfaces 342. In some implementations, components 300 can be in a computing system that is distributed across multiple computing devices or can be an interface to a server-based application executing one or more of specialized components 340. Although depicted as separate components, specialized components 340 may be logical or other nonphysical differentiations of functions within a controller. For example, specialized components 340 may be aspects of a controller that resides on an application processor, modem processor, ASIC, FPGA, or the like. In some examples, specialized components 340 is a memory controller, represented as an IP block of a layout or design, in one of the foregoing.[0031] Constraint controller 344 can determine whether any constraints, which specify circumstances where abnormal pattern matching is disabled or enabled, are active. Examples of the circumstances that constraints specify can include unsafe or safe geographic zones where abnormal pattern matching is enabled or disabled; unsafe or safe times of day where abnormal pattern matching is enabled or disabled; unsafe or safe dates where abnormal pattern matching is enabled or disabled; manual activation or deactivation of abnormal pattern matching, etc. Additional details on constraints on abnormal pattern matching are provided below in relation to block 404 and software module 552.[0032] Usage pattern learning module 346 can learn device usage patterns corresponding to normal usage for a particular user or usage typically seen (or expected) on a stolen device.
In various implementations, learning usage patterns can include training a machine learning model using usage conditions tagged as either being for or not for a particular user or tagged as occurring or not occurring in a stolen device. In other implementations, learning usage patterns can include human selections of sets of usage conditions or selecting usage conditions based on statistical analysis (e.g., where particular usage conditions occur above a threshold number of times, they can be added to the selected set of usage conditions). Additional details on learning usage patterns are provided below in relation to block 402 and software module 532.[0033] Usage pattern monitor 348 can obtain the usage patterns (either as a machine learning model or a set of usage conditions) learned by usage pattern learning module 346. Usage pattern monitor 348 can also obtain current usage conditions for the device 300 (e.g., from components 320 via interfaces 342) and apply them against the obtained usage patterns to determine whether an abnormal usage pattern is occurring. Additional details on monitoring current usage conditions for abnormal usage patterns are provided below in relation to block 406, software module 554, and neural network 506. [0034] Override module 350 can, when an abnormal usage pattern is detected by usage pattern monitor 348, solicit an override from a user, such as by providing a notification on the device 300 or by sending a message to another system such as an email account, phone number, or device administration system. If the user provides the override within a set time limit, disabling the device can be cancelled. Additional details on overrides are provided below in relation to block 408 and software module 556.[0035] Disabling controller 352 can disable the device 300 when an abnormal usage pattern is detected by usage pattern monitor 348 and no override is received by override module 350 within the set time limit.
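The frequency-based selection of usage conditions described above (where conditions occurring above a threshold number of times are added to the selected set) could be sketched as follows; the condition labels and threshold here are illustrative assumptions, not part of any described implementation:

```python
from collections import Counter

def select_usage_conditions(observed_conditions, threshold):
    # Count how often each condition label was observed and keep those
    # occurring at least `threshold` times as the learned set of
    # usage conditions (labels below are hypothetical).
    counts = Counter(observed_conditions)
    return {cond for cond, n in counts.items() if n >= threshold}

# Conditions seen at least three times form the learned usage pattern.
pattern = select_usage_conditions(
    ["home_wifi", "home_wifi", "home_wifi", "new_sim", "home_wifi"],
    threshold=3,
)
```

A real system would of course draw these counts from logged device events rather than a hard-coded list.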
Disabling the device 300 can include permanently destroying a physical component of the device 300, semi-permanently disabling a component of the device 300 (e.g., requiring third-party override to re-enable), or temporarily disabling the device 300 (e.g., through a software lock or encryption). In some implementations, the level of disabling can be based on a mapping of confidence values for abnormal usage (provided by usage pattern monitor 348) to types of disabling. Additional details on disabling a device are provided below in relation to block 410 and action 590.[0036] Those skilled in the art will appreciate that the components illustrated in Figures 1-3 described above, and in each of the flow diagrams discussed below, may be altered in a variety of ways. For example, the order of the logic may be rearranged, substeps may be performed in parallel, illustrated logic may be omitted, other logic may be included, etc. In some implementations, one or more of the components described above can execute one or more of the processes described below.[0037] Figure 4 is a flow diagram illustrating a process 400 used in some implementations for learning and applying device usage patterns to disable likely stolen devices. Parts of process 400 can be performed at different times. In some implementations, block 402, for learning device usage patterns, can be performed incrementally as a user uses a device. In other implementations, block 402 can be performed ahead of time using a set of training items indicating whether or not a particular use is part of abnormal device usage (e.g., for a stolen device). These cases allow process 400 to obtain a machine learning model by training it. In other cases, block 402 can be performed on a device different from the device that performs blocks 404 through 412.
For example, a remote device can train a machine learning model to recognize usage patterns for stolen devices, and that machine learning model can be obtained by another device which uses the model to check for the usage pattern.[0038] While in some cases portions of process 400 can be performed with the normal processing elements and/or memory of a device, in other implementations, at least some parts of process 400 (e.g., blocks 404 through 412) can be performed using specialized security hardware, for example, a processor and/or memory separate from the processor and/or memory used for normal operations (e.g., that execute the operating system) of the device. This can allow the device to monitor for abnormal usage despite exceptional circumstances such as a factory data reset. Similarly, in some implementations, part of process 400 can be performed remotely, by providing current usage conditions over a network to a system performing the abnormal usage monitoring. In some circumstances, the device being monitored can include a default to disable the device if a verification is not periodically received from the remote system, preventing thieves from simply disabling communication to overcome the device disabling procedures.[0039] At block 402, process 400 can learn a device usage pattern. A device usage pattern can be based on usage conditions, such as a current location, changes made to settings, particular data access events (e.g., passwords or personal data access), application uses, bandwidth patterns, physical device movement (e.g., IMU data), a SIM card change event, financial events (e.g., e-commerce purchases), current user facial recognition data, other user I/O events, network traffic, etc.
A usage pattern can be a typical usage pattern (e.g., typical use by a particular user) or an abnormal usage pattern (e.g., use that is typical for a stolen device).[0040] In some implementations, a device usage pattern can be learned by training a machine learning model to recognize when usage conditions amount to a normal or an abnormal usage pattern. In some implementations, the machine learning model can be a neural network with multiple input nodes that receive usage conditions. The input nodes can correspond to functions that receive the input and produce results. These results can be provided to one or more levels of intermediate nodes that each produce further results based on a combination of lower-level node results. A weighting factor can be applied to the output of each node before the result is passed to the next layer node. At a final layer (the "output layer"), one or more nodes can produce a value classifying the input that, once the model is trained for identifying typical or abnormal usage patterns, can be used as a gauge of whether a usage is typical or abnormal. As another example, the machine learning model can be a deep neural network and/or part of the neural network can have an internal state, allowing the neural network to identify usage patterns over time (e.g., a recurrent neural network) or can perform convolutions (e.g. a convolutional neural network). In some implementations, the neural network can be trained to identify abnormal usage patterns with training data that includes input usage conditions performed by normal users mapped to typical usage output and training data that includes activities observed in devices that have been stolen or activities expected to be performed in stolen devices mapped to abnormal output. In other circumstances, the model can be trained to identify use of a particular user, in which case the training items can be monitored usage conditions by that user mapped to output indicating the use is normal.
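As one hedged illustration of the network structure described above (input nodes feeding weighted intermediate nodes and an output layer that scores a usage as typical or abnormal), a minimal forward pass might look like the following; the weights, the feature encoding, and all names are illustrative assumptions, not the claimed implementation:

```python
import math

def layer(inputs, weights, biases):
    # One fully connected layer: a weighted sum per node followed by a
    # sigmoid activation, so each node output lies in (0, 1).
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

def classify_usage(features, hidden_w, hidden_b, out_w, out_b):
    # Forward pass: encoded usage conditions -> single abnormal-usage score.
    return layer(layer(features, hidden_w, hidden_b), out_w, out_b)[0]

# Untrained, hand-picked weights for illustration; training (block 402)
# would determine these values.
score = classify_usage(
    [1.0, 0.0, 1.0],  # hypothetical encoding: new location, no SIM change, reset
    hidden_w=[[0.5, -0.2, 0.8], [0.3, 0.9, -0.4]],
    hidden_b=[0.0, 0.1],
    out_w=[[1.2, -0.7]],
    out_b=[0.0],
)
```

A trained model would use the score against a threshold at block 406 to decide whether the usage is abnormal.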
During training, a representation of the input usage conditions (e.g., as a feature vector) can be provided to the neural network. Output from the neural network can be compared to the desired output mapped to that training item and, based on the comparison, the neural network can be modified, such as by changing weights between nodes of the neural network and/or parameters of the functions used at each node in the neural network or between layers of a deep neural network. After applying each of the training items and modifying the neural network in this manner, the neural network model can be trained to evaluate new usage patterns for whether they are normal or abnormal.[0041] In some implementations, instead of or in addition to machine learning models, at block 402 the pattern-based control system can learn a device usage pattern as a set of usage conditions. For example, an abnormal usage pattern could be set as a change in location to an area never previously visited combined with a password database access followed by a factory data reset.[0042] In some implementations where the learned device usage pattern is particular to a user, the learned device usage pattern can be reset, allowing the device to be transferred to a new user to learn a new typical usage pattern. This reset process can be protected by various security features, such as a password or biometrics, or may require a third-party connection (such as to a network provider or device manufacturer) to perform the usage pattern reset. [0043] In cases where the usage pattern is learned on a device other than where it will be applied, the device usage pattern can be pre-loaded into the device, e.g. by a device manufacturer, or can be provided to the device over a network.[0044] At block 404, process 400 can determine whether a constraint is active. Constraints can limit the circumstances where the pattern-based control system will perform abnormal device usage checking.
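The set-of-conditions form of a usage pattern described above (e.g., a move to a never-visited location combined with a password database access followed by a factory data reset) could be checked with a sketch like this; the event labels are illustrative assumptions:

```python
def matches_abnormal_pattern(recent_events):
    # The example abnormal pattern from the description: a move to a
    # never-visited location, a password database access, and a factory
    # data reset. Event labels are hypothetical, not from the spec.
    abnormal_pattern = {"new_location", "password_db_access", "factory_reset"}
    return abnormal_pattern.issubset(recent_events)

# All three conditions present -> the abnormal pattern matches.
stolen_like = matches_abnormal_pattern(
    {"new_location", "password_db_access", "factory_reset", "app_launch"}
)
```

A fuller implementation would also respect the ordering ("followed by") of the events, which this membership check deliberately ignores.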
For example, a user may be able to configure settings to explicitly turn pattern matching on or off or may set circumstances for pattern matching to be automatically enabled or disabled. For example, a user may loan her device to another user but will not want the abnormal usage by the other user to trigger disabling of the device. In this instance, the user can toggle a setting telling the pattern-based control system to stop checking for abnormal usage patterns until the setting is toggled back on. As another example, a user can establish safe geographic zones where abnormal pattern matching is disabled (e.g., at home, work, or auto-learned locations based on the user's routine) or can establish unsafe zones where abnormal pattern matching is enabled (e.g., when traveling or at a location not part of the user's usual routine). Similarly, a user can establish safe or unsafe times or dates to enable or disable abnormal pattern matching (e.g., the user can enable pattern matching for dates when the user is traveling, or the pattern-based control system can automatically identify events on the user's calendar where a theft is more likely to occur). In some cases, the pattern-based control system can automatically identify conditions (e.g., based on a current location, calendar event, or other scenario) for which a heightened security risk exists (e.g., where theft is more likely) and can prompt the user to activate abnormal pattern matching. If a constraint is active (preventing abnormal pattern matching), process 400 can remain at block 404 until the constraint is turned off. If no such constraints are active, process 400 can continue to block 406. In some implementations, no constraint checking is used and process 400 can skip block 404 and proceed directly to block 406.[0045] At block 406, process 400 can detect whether abnormal device usage is occurring.
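The constraint check at block 404 described above (safe zones and safe times suppressing abnormal pattern matching) could be sketched as follows; the zone names and hours are illustrative assumptions:

```python
from datetime import time

def constraint_active(location, now, safe_zones, safe_window):
    # Abnormal pattern matching is suppressed while the device is in a
    # safe zone or during a safe time window. Zone names and hours are
    # hypothetical; a real system might also check user toggles.
    if location in safe_zones:
        return True
    start, end = safe_window
    return start <= now <= end

# At home during the day: the constraint is active, so monitoring pauses.
suppressed = constraint_active(
    "home", time(10, 0), {"home", "work"}, (time(8, 0), time(18, 0))
)
```

When the function returns False (no constraint active), the process would continue to the abnormal-usage check at block 406.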
Depending on the type of device usage pattern learned at block 402, comparing current usage conditions to usage patterns can include determining a threshold match between the current usage conditions and a set of usage conditions learned at block 402 or applying the current usage conditions to a machine learning model trained at block 402. In various implementations, identifying abnormal device usage can include determining that current usage conditions do not sufficiently match a normal usage pattern or can include determining that current usage conditions sufficiently match an abnormal usage pattern.[0046] In some implementations, more than one device usage pattern can be used. For example, process 400 can have identified an abnormal device usage pattern for stolen devices and can have identified a typical usage pattern for a particular user (e.g., a separate machine learning model trained for each). Process 400 can combine the output from these two usage patterns to determine whether abnormal device usage is occurring, e.g., by determining abnormal usage when either model provides a confidence value above a threshold, when both models provide a confidence value above a threshold, or when the average determination by both models is above a threshold. In some cases, the output from each of these two models can be weighted, e.g. by static values or by using a third model trained to identify a weight for the output of each of the other two models based on input of the usage conditions. For example, an "is stolen" model can produce an estimation of 73% positive that the device has been stolen based on the current usage conditions, an "is typical usage" model can produce an estimation of 87% that the current usage conditions indicate non-typical usage, and a "weighting model" can provide a 61% weight for the "is typical usage" model and a 39% weight for the "is stolen" model for the current usage conditions. Thus, the overall result can be 73%*39%+87%*61%=81.54%. 
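The weighted combination in the worked example above can be reproduced directly; the function name is an illustrative choice:

```python
def combined_abnormal_score(stolen, atypical, w_stolen, w_atypical):
    # Weighted sum of the two model outputs; the weights could come from
    # a third "weighting" model, as described above.
    return stolen * w_stolen + atypical * w_atypical

# The worked example: 73% * 39% + 87% * 61% = 81.54%
result = combined_abnormal_score(0.73, 0.87, 0.39, 0.61)
abnormal = result > 0.80  # example threshold of 80% from block 406
```

Other combination rules mentioned in the description (either model above threshold, both above threshold, or a simple average) would replace the weighted sum here.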
The threshold for abnormal usage in this example can be 80%, so the result will be a determination that abnormal usage is occurring. If abnormal device usage is not detected, process 400 can return to block 404, but if abnormal device usage is detected, process 400 can continue to block 408.[0047] At block 408, process 400 can determine whether an override for disabling the device has been provided. In response to detecting abnormal device usage, process 400 can provide a warning either on the device or to a separate system (e.g., a specified email account, device administrator, by text to a backup device, etc.) that abnormal device usage has been detected and, unless an override is provided within a threshold amount of time (e.g., 30 seconds, 1, 2, or 5 minutes, etc.), the device will disable itself. Providing an override can include entering a master password, providing a biometric reading, performing a two-factor authentication, etc. In some cases, a provided override can indicate that the abnormal usage match at block 406 was incorrect, and so process 400 can retrain the learned device usage pattern with additional training data including the current usage conditions as a counterexample for abnormal usage. If the override was provided, process 400 can return to block 404, but if no override was provided, process 400 can proceed to block 410. In some implementations, no override checking is used and process 400 can skip block 408 and proceed directly to block 410.[0048] At block 410, process 400 can disable the device for which abnormal usage was detected. In various implementations, the disabling can be a permanent disabling of hardware, a semi-permanent disabling outside of data stored in device memory, or can be a change in settings or other memory configuration. For example, permanently disabling hardware can include overloading fuses, corrupting memory, overheating a processor, etc.
Semi-permanent disabling can include turning off or disconnecting hardware or disabling a hardware controller. Disabling by changes to settings or other memory configurations can include encrypting a drive, locking a device, or deleting particular sensitive information. In some implementations, the type of disabling that occurs can be based on a confidence score of how likely the device is to have been stolen determined at block 406. For example, a machine learning model can produce a value between zero and one, where the closer the value is to one, the more confident the model is that the current usage conditions indicate abnormal (or normal depending on the way the model was trained) usage. Various thresholds of confidence values can be mapped to different types of disabling, e.g., the higher the confidence value is for the current usage indicating the device has been stolen, the more permanent the disabling of the device.[0049] At block 412, process 400 can provide an alert that the device was disabled. For example, a specified email account, phone number, or other system (e.g., an employer information security system), can receive notification that the device has been disabled. In some implementations, the alert can also provide the usage conditions that caused the device to be disabled and/or instructions for re-enabling the device. Process 400 can then end. In some implementations, no deactivation alerts are provided, and process 400 can skip block 412.[0050] Figure 5 is a conceptual diagram illustrating an example 500 of entities and operations for learning an abnormal device usage pattern and disabling the device when the abnormal device usage pattern is detected. Example 500 includes stolen device 502, monitored device 504, and a neural network 506.
Example 500 also shows software modules including a module 532 that identifies usage patterns to train the neural network 506, a module 552 that applies constraints, a module 554 that monitors usage patterns using neural network 506, and a module 556 that solicits deactivation overrides.[0051] Example 500 begins with the module 532, at action 572, receiving usage conditions that were taken in relation to stolen device 502. These usage conditions are one set of many received from various stolen devices (not shown). In addition, other usage conditions from devices in normal operation are also obtained (also not shown). At action 574, these sets of usage conditions with corresponding labels for stolen or not stolen devices are provided to train neural network 506. The training can include providing each set of usage conditions as input to the neural network 506 and adjusting parameters of the neural network 506 (e.g., using backpropagation) based on how closely the output of the neural network 506 matches the label for that usage condition set.[0052] Once trained, the neural network 506 can be used by device 504 in monitoring for an abnormal (stolen device) usage pattern. Example 500 continues at action 582, where module 552 checks constraints, such as to not monitor usage patterns when in safe zones, during a selected timeframe, and when the user has switched pattern monitoring off. When no constraints indicate that pattern monitoring is disabled, example 500 continues to action 584 where module 554 determines current usage conditions and provides them, at action 586, to trained neural network 506. At action 588, trained neural network 506 provides a determination whether the current usage conditions indicate an abnormal usage pattern. In this example, an abnormal usage pattern has been identified, so module 556 solicits an override from the user, in this case by locking the device and prompting the user for a master password previously set on the device.
When a threshold of 20 seconds has elapsed without receiving the master password, example 500 continues to action 590, where the device 504 is disabled. In example 500, disabling the device includes disabling the processor, wiping the hard drive, and corrupting the memory of device 504. Device 504 is now much less useful to the thief and is much less of a security threat, having removed the confidential data it included. [0053] Several implementations of the disclosed technology are described above in reference to the figures. The computing devices on which the described technology may be implemented can include one or more central processing units, memory, input devices (e.g., keyboard and pointing devices), output devices (e.g., display devices), storage devices (e.g., disk drives), and network devices (e.g., network interfaces). The memory and storage devices are computer-readable storage media that can store instructions that implement at least portions of the described technology. In addition, the data structures and message structures can be stored or transmitted via a data transmission medium, such as a signal on a communications link. Various communications links can be used, such as the Internet, a local area network, a wide area network, or a point-to-point dial-up connection. Thus, computer-readable media can comprise computer-readable storage media (e.g., "non-transitory" media) and computer-readable transmission media.[0054] Reference in this specification to "implementations" (e.g. "some implementations," "various implementations," “one implementation,” “an implementation,” etc.) means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation of the disclosure. 
The appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation, nor are separate or alternative implementations mutually exclusive of other implementations. Moreover, various features are described which may be exhibited by some implementations and not by others. Similarly, various requirements are described which may be requirements for some implementations but not for other implementations.[0055] As used herein, being above a threshold means that a value for an item under comparison is above a specified other value, that an item under comparison is among a certain specified number of items with the largest value, or that an item under comparison has a value within a specified top percentage value. As used herein, being below a threshold means that a value for an item under comparison is below a specified other value, that an item under comparison is among a certain specified number of items with the smallest value, or that an item under comparison has a value within a specified bottom percentage value. As used herein, being within a threshold means that a value for an item under comparison is between two specified other values, that an item under comparison is among a middle specified number of items, or that an item under comparison has a value within a middle specified percentage range. Relative terms, such as high or unimportant, when not otherwise defined, can be understood as assigning a value and determining how that value compares to an established threshold. For example, the phrase "selecting a fast connection" can be understood to mean selecting a connection that has a value assigned corresponding to its connection speed that is above a threshold.[0056] As used herein, the word "or" refers to any possible permutation of a set of items. 
For example, the phrase "A, B, or C" refers to at least one of A, B, C, or any combination thereof, such as any of: A; B; C; A and B; A and C; B and C; A, B, and C; or multiple of any item such as A and A; B, B, and C; A, A, B, C, and C; etc.[0057] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Specific embodiments and implementations have been described herein for purposes of illustration, but various modifications can be made without deviating from the scope of the embodiments and implementations. The specific features and acts described above are disclosed as example forms of implementing the claims that follow. Accordingly, the embodiments and implementations are not limited except as by the appended claims.[0058] Any patents, patent applications, and other references noted above are incorporated herein by reference. Aspects can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations. If statements or subject matter in a document incorporated by reference conflicts with statements or subject matter of this application, then this application shall control.
Semiconductor devices having a passivation layer formed over their major electrodes and individual electrical connectors connected to the electrodes by conductive attach material through openings in the passivation layer are described.
WHAT IS CLAIMED IS: 1. A semiconductor device comprising: a semiconductor die having at least one electrode disposed on a major surface thereof; a passivation layer disposed over said at least one electrode of said semiconductor die; an opening in said passivation layer extending from the top surface of said passivation layer to said at least one electrode; and an electrical connector having a base portion and a contact portion for making electrical contact with an external element; wherein said base portion of said electrical connector is electrically connected to said at least one electrode of said semiconductor die by a layer of conductive material disposed within said opening. 2. The semiconductor device of claim 1 wherein said conductive material is one of conductive epoxy and solder. 3. The semiconductor device of claim 1 wherein said contact portion is a raised bump. 4. The semiconductor device of claim 1 wherein said base portion is one of rectangular, circular and oval. 5. The semiconductor device of claim 1 wherein said opening has a shape corresponding to the shape of said base portion. 6. The semiconductor device of claim 1 wherein said base portion and said contact portion form a unitary body. 7. The semiconductor device of claim 1 wherein said electrical connector is punched out of a sheet of copper. 8. The semiconductor device of claim 1 wherein said at least one electrode is solderable. 9. 
The semiconductor device of claim 1 wherein said semiconductor die is a MOSFET having at least a source electrode, a gate electrode, and a drain electrode each being a major electrode, said passivation layer is disposed over each major electrode and includes an opening exposing each major electrode, and further comprising a plurality of electrical connectors each having a base portion and a contact portion; wherein said exposed portion of said top surface of each major electrode is electrically connected to said base portion of at least one of said plurality of electrical connectors by a layer of conductive material. 10. A process for manufacturing a semiconductor device comprising: providing a semiconductor wafer having a plurality of semiconductor dies formed thereon; forming solderable electrodes on a major surface of each of said semiconductor dies; forming a passivation layer over said electrodes, and at least one opening in said passivation layer over each electrode to expose a portion of said electrode; depositing a layer of conductive attach material on said exposed portions; placing an electrical connector on each said layer of conductive attach material; and dicing said wafer into individual semiconductor die. 11. The process of claim 10 wherein said solderable electrodes comprise one of a combination of titanium or tungsten, nickel, silver and a combination of titanium or tungsten, aluminum, titanium, nickel, silver. 12. The process of claim 10, wherein said conductive attach material is solder, and said process further comprises reflowing said solder before dicing said wafer. 13. The process of claim 10, wherein said conductive attach material is a conductive epoxy and said process further comprises curing said conductive epoxy before dicing said wafer. 14. The process of claim 10, wherein said electrical connector is preformed from a sheet of copper by punching. 15. 
The process of claim 10 further comprising depositing a high strength thermal epoxy over said passivation layer and portions of said electrical connectors to further strengthen the connection between said electrical connectors and said semiconductor die.
FLIP CHIP DEVICE HAVING CONDUCTIVE CONNECTORS

BACKGROUND OF THE INVENTION

[0001] The present invention relates to semiconductor devices and more particularly to chip-scale flip-chip devices.

[0002] Because of their relatively small size, chip-scale semiconductor devices have been used to increase the density of parts in an electronic circuit and/or reduce the size of an electronic circuit. Some chip-scale semiconductor devices have a footprint which is the size of the die or nearly the size of the die. One way to obtain such a small footprint is to place all of the major electrodes on one of the major surfaces of the die. International Patent Application WO 01/59842 A1, entitled Vertical Conduction Flip-Chip Device with Bump Contacts on Single Surface, which is assigned to the assignee of the present invention, discloses a chip-scale flip-chip device which includes a die that has all its major electrodes disposed on one major surface thereof and is electrically mountable on a substrate via solder balls formed on its major electrodes. The device shown in WO 01/59842 A1 has a reduced footprint because its solder balls are positioned directly under the die when it is mounted, thus making it possible to limit the size of the device to the size of the chip.

[0003] International Patent Application WO 01/75961, entitled Chip Scale Surface Mounted Device and Process of Manufacture, which is assigned to the assignee of the present application, shows a semiconductor device which has a semiconductor power MOSFET having two major electrodes on a first major surface thereof and another major electrode on a second major surface thereof. A passivation layer having a plurality of openings is disposed over the major electrodes on the first major surface of the MOSFET.
The openings in the passivation layer are made through to a solderable top metal, or a solderable surface can be formed over the exposed portions of the electrodes by nickel plating, gold flash or other series of metals so that solder may be received by the electrodes. In the device shown by WO 01/75961, the passivation layer acts as a plating resist and a solder mask, designating and shaping the solder areas, as well as acting as a conventional passivation layer. The device shown in WO 01/75961 has a footprint that is close to the size of the chip because the connections to two of its three major electrodes are positioned directly under the power MOSFET.

SUMMARY OF THE INVENTION

[0004] A semiconductor device according to the present invention is a chip-scale flip-chip having connectors that contribute to the improvement of electrical and thermal characteristics and reliability of the device over conventional flip-chips. A semiconductor device according to the present invention may be a flip-chip device having a semiconductor die which has all of its major electrodes disposed on one major surface thereof. A passivation layer is disposed over all of the electrodes, and openings are created in the passivation layer to expose a portion of each major electrode. Individual electrical connectors, each having a base portion and a contact portion, are electrically connected at their base portions by a conductive attach material to the exposed portions of the electrodes. The passivation layer defines the areas of the electrode to which connectors are connected, as well as providing protection for the termination structure of the device.

[0006] The electrical connectors used in the semiconductor devices according to the present invention are preformed out of a sheet of conductive metal such as copper by, for example, punching.
The shape of the base of the connectors may be changed as desired to enhance the strength of the connection between the connectors and the electrodes of the semiconductor die and/or maximize the area of passivation between the connectors. For example, the shape of the base of the connectors may be circular, rectangular or oval. The openings in the passivation layer according to the present invention may correspond to the shape and size of the base of the electrical connectors to conserve as much of the passivation layer as possible. Also, the number of contact portions on each base portion may be increased as desired to reduce contact resistance between the device and electrical pads of a substrate.

[0008] A semiconductor device according to the present invention is manufactured by first providing a wafer having a plurality of semiconductor die formed thereon. Each semiconductor die on the wafer is then provided with solderable major electrodes. By using solderable electrodes, the need for under bump metallization is eliminated.

[0009] Preferably, all of the major electrodes of the die are provided on the same surface. A photo imageable epoxy is then deposited over the entire surface of the wafer, covering the electrodes of all the semiconductor die. The epoxy is then dried, and openings are formed over the major electrodes of each die by, for example, application of ultraviolet light through a mask and removal of the affected areas in the dried epoxy. Each opening exposes a portion of the surface of an electrode. Attach material such as solder or conductive epoxy is deposited on the exposed portions of the electrode through each opening, followed by placement of electrical connectors in each opening. Thereafter, if conductive epoxy is used as attach material, it is cured, and if solder is used, it is reflowed. The wafer is then diced into individual chip-scale semiconductor devices.
[0010] The semiconductor devices so manufactured have higher electrical and thermal performance and are more reliable. The manufacturing cost of the semiconductor devices is also less than conventional flip-chips. Also, using the technique described herein, a semiconductor device with a much larger die area than the conventional devices can be obtained.

[0011] Other features and advantages of the present invention will become apparent from the following description of the invention which refers to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] Fig. 1 shows a side view of a semiconductor device according to the first embodiment connected to electrical pads on a substrate.

[0013] Fig. 2 is a top plan view of a semiconductor die having all major electrodes on one surface.

[0014] Fig. 3 shows a wafer having a plurality of semiconductor die shown in Figure 2 before singulation.

[0015] Fig. 4 shows the wafer of Fig. 3 having disposed thereon a passivation layer with openings to expose portions of electrodes of the semiconductor die.

[0016] Fig. 5 shows a cross-sectional profile view of a copper strip from which copper connectors are punched out and used in the device according to the first embodiment of the invention.

[0017] Fig. 6 shows the wafer of Fig. 4 having copper connectors connected to the electrodes of the semiconductor die through the openings in the passivation layer in accordance with the invention.

[0018] Fig. 7A shows a bottom view of a singulated semiconductor device according to the first embodiment of the present invention.

[0019] Fig. 7B is a cross-sectional view of a semiconductor device according to the first embodiment looking in the direction of line 7B-7B in Fig. 7A.

[0020] Fig. 7C shows an electrical connector used in the first embodiment of the present invention.

[0021] Fig. 8A shows a bottom view of a semiconductor device according to the second embodiment of the present invention.

[0022] Fig.
8B is a side view of the semiconductor device shown in Fig. 8A looking in the direction of line 8B-8B in Fig. 8A.

[0023] Fig. 8C shows a side view of an electrical connector used in the second embodiment of the present invention.

[0024] Fig. 9A shows a bottom view of a semiconductor device according to the third embodiment of the present invention.

[0025] Fig. 9B is a side view of the semiconductor device shown in Fig. 9A looking in the direction of line 9B-9B in Fig. 9A.

[0026] Fig. 9C shows a side view of an electrical connector used in the third embodiment of the present invention.

DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

[0027] Fig. 1 shows semiconductor device 10 according to the first embodiment of the invention surface-mounted on conductive pads 11 of substrate 12 by layers of conductive material 14, such as solder or conductive epoxy. Substrate 12 may be an ordinary circuit board or an insulated metal substrate. Conductive pads 11 may be connected to other elements on the substrate 12 via copper traces (not shown) to form an electronic circuit.

[0028] Semiconductor device 10 according to the first embodiment includes a plurality of electrical connectors 16. Each electrical connector 16 is connected to a respective major electrode disposed on a major surface of semiconductor die 18. Passivation layer 20 is disposed over the electrodes 22, 24, 26 (Fig. 2) of semiconductor die 18 and around each electrical connector 16.

[0029] Semiconductor device 10 according to the first embodiment is a flip-chip semiconductor device. This type of semiconductor device is surface-mounted on a substrate via connections that extend from only one of its major surfaces.

[0030] Referring now to Fig.
2, semiconductor device 10 in a preferred embodiment of the present invention includes a semiconductor die 18 having disposed on only one major surface thereof control electrode 22, which receives signals that control the operation of die 18, first terminal electrode 24 and second terminal electrode 26. Electrodes 22, 24, 26 are formed with a solderable metal such as: (a) titanium/tungsten, nickel, silver or (b) titanium/tungsten, aluminum, titanium, nickel, silver.

[0031] In this embodiment, semiconductor die 18 may be a MOSFET having a gate electrode which corresponds to control electrode 22, a source electrode which corresponds to first terminal electrode 24 and a drain electrode which corresponds to second terminal electrode 26. It is to be understood that the present invention is not limited to MOSFETs, and that other semiconductor dies, namely, IGBTs, power diodes, and the like may be equally used to practice the present invention.

[0032] Referring to Fig. 3, to make semiconductor device 10, silicon wafer 8 having a plurality of semiconductor dies 18 is provided. Each semiconductor die 18 in wafer 8 has disposed on a major surface thereof electrodes 22, 24, 26 as shown in Fig. 2. Streets 19 in wafer 8 separate semiconductor die 18 from one another. Wafer 8 is then covered by a photosensitive liquid epoxy by, for example, screen printing so that electrodes 22, 24, 26 of each semiconductor die 18 are covered. The photosensitive liquid epoxy should be a photo imageable material such as the material commercially known as "EP2793". After coating the wafer with the photosensitive liquid epoxy, the epoxy is dried and ultraviolet light is applied to the epoxy layer through a mask to define predesignated areas over each electrode 22, 24, 26 of each semiconductor die 18 in the wafer 8.
These designated areas are then removed to create openings 28 in the epoxy layer to expose portions of electrodes 22, 24, 26 of each semiconductor die 18 in wafer 8 as shown in Fig. 4. The epoxy may then be cured to form a passivation layer 20. Of course, other methods for depositing passivation layer 20 on electrodes 22, 24, 26 of semiconductor die 18 may be used instead of the one described above.

[0033] Once openings 28 have been created, wafer 8 is held in place on a vacuum chuck and aligned using preferably optical alignment, which is a technique used in die bonding. After alignment, conductive attach material 35 such as solder paste or epoxy is deposited on the exposed surfaces of electrodes 22, 24, 26 of die 18 through each opening 28. Thereafter, preformed electrical connectors, e.g. connectors 16, are placed on the conductive attach material 35 as shown in Fig. 6. Preferably, electrical connectors 16 used in the semiconductor devices of the present invention are punched out of a strip of copper 17 (Fig. 5) and placed in the openings in the passivation layer 20 by a vacuum transport arm. Preferably, multiple electrical connectors 16 are placed in respective openings in one operation.

[0034] Referring to Figs. 7A-7C, electrical connector 16 includes a base portion 32 which has a contact surface 33 and contact portion 34 which makes electrical contact with a corresponding conductive pad 11 on a substrate 12 as shown in Fig. 1. Each electrical connector 16 is placed in an opening 28 so that its base portion 32 is connected to a respective electrode 22, 24, 26 of a semiconductor die 18 by attach material 35 that is deposited through openings 28. Thereafter, if conductive epoxy is used as attach material 35 it is cured, and if solder is used it is reflowed.
Optionally, a high strength thermal epoxy may be deposited over the wafer to cover at least a portion of base portion 32 of connectors 16 to improve the strength of the connection between connectors 16 and respective electrodes 22, 24, 26. Then, wafer 8 is diced along streets 19 to produce chip-scale flip-chip semiconductor devices 10 according to the first embodiment of the present invention.

[0035] Alternatively, wafer 8 shown in Fig. 4 may be diced to produce individual semiconductor die 18 each having a passivation layer 20 and openings 28 formed in passivation layer 20 over its major electrodes 22, 24, 26. Connectors 16 may then be placed over conductive attach material deposited on exposed surfaces of electrodes 22, 24, 26 through openings 28. Thereafter, if solder is used as conductive attach material 35 it is reflowed, and if a conductive epoxy is used it is cured, to obtain semiconductor device 10 according to the first embodiment of the present invention as shown in Figs. 7A and 7B.

[0036] According to an aspect of the present invention, the area and the shape of the openings can be made to approximately correspond to the size and shape of the base portions of the connectors used so that the passivation layer may retain its maximum area of coverage. According to another aspect of the present invention, the base portion of a connector may be enlarged so that a better and more stable connection can be made between the connector and the electrode of the die. For example, base portion 32 of electrical connector 16 has a diameter that is larger than its contact portion 34. The comparatively larger diameter of base portion 32 allows for more stability when connector 16 is connected to an electrode of die 18. Also, a base portion having a larger area may be able to accommodate more than one contact portion, thereby reducing the resistance of the connector.

[0037] Referring to Figs.
8A-8C, for example, semiconductor device 36 according to a second embodiment of the present invention includes connectors 38 each having base 40 and contact portion 42 for making electrical contact with an electrical pad on a substrate such as pad 11 on substrate 12 as shown in Fig. 1. Electrical connectors 38 have a rectangular base 40 which fits within a corresponding rectangular opening 44 in passivation layer 20.

[0038] Referring to Figs. 9A-9C, semiconductor device 46 according to the third embodiment of the present invention includes electrical connectors 48 each having base 50 and two contact portions 52 for making electrical contact with an electrical pad on a substrate such as pad 11 on substrate 12 as shown in Fig. 1. Electrical connectors 48 have oval-shaped bases 50 which fit within corresponding oval openings 54 in passivation layer 20.

[0039] According to an aspect of the invention, whenever the base portion of a connector is enlarged, a conductive epoxy may be used instead of solder to connect the connector to an electrode of the die. Thus, for example, in the first embodiment of the present invention solder is preferred for connecting connector 16 to an electrode of die 18, while conductive epoxy may be used as attach material in the second and third embodiments given the enlarged area of their respective connectors.

[0040] A semiconductor device according to the present invention exhibits improved electrical and thermal properties, as well as better reliability, over conventional flip-chips. In addition, the process used for making semiconductor devices according to the present invention eliminates the need for under-bump metallization which may be required for the production of conventional flip-chips.

Although the present invention has been described in relation to particular embodiments thereof, many other variations and modifications and other uses will become apparent to those skilled in the art.
It is preferred, therefore, that the present invention be limited not by the specific disclosure herein, but only by the appended claims.
A mechanism is described for facilitating encryption-free integrity protection of storage data at computing systems according to one embodiment. A method of embodiments of the invention includes receiving a read request, from a software application at a computing device, to perform a read task relating to a first data block of data stored at a storage device coupled to the computing device. The read task may include reading the first data block. The method may further include accessing a first reference cryptographic code at a first metadata cache associated with the first data block, calculating a first new cryptographic code relating to the first data block, comparing the first new cryptographic code with the first reference cryptographic code, and accepting the read request if the first new cryptographic code matches the first reference cryptographic code. The accepting may further include facilitating the read task.
CLAIMS

What is claimed is:

1. A method comprising: receiving a read request, from a software application at a computing device, to perform a read task relating to a first data block of data stored at a storage device coupled to the computing device, wherein the read task includes reading the first data block; accessing a first reference cryptographic code at a first metadata cache associated with the first data block; calculating a first new cryptographic code relating to the first data block; comparing the first new cryptographic code with the first reference cryptographic code; and accepting the read request if the first new cryptographic code matches the first reference cryptographic code, wherein accepting includes facilitating the read task.

2. The method of Claim 1, further comprising denying the read request if the first new cryptographic code mismatches the first reference cryptographic code, wherein denying includes issuing an error message in response to the read request, wherein if a data block containing the first reference cryptographic code is missing from the first metadata cache, the read request is submitted to facilitate the read task to the missing data block.

3. The method of Claim 1, wherein the first reference and new cryptographic codes include a hash-based message authentication code (HMAC).

4. The method of Claim 1, wherein the software application comprises an operating system running at the computing device.

5.
The method of Claim 1, further comprising: receiving a write request, from the software application at a computing device, to perform a write task relating to a second data block, wherein the write task includes writing the second data block to the data stored at the storage device; accessing a second reference cryptographic code at a second metadata cache associated with the second data block; calculating a second new cryptographic code relating to the second data block; replacing the second reference cryptographic code by the second new cryptographic code in the second metadata cache; and accepting the write request, wherein accepting includes facilitating the write task.

6. The method of Claim 5, wherein the second metadata cache is to maintain the second new cryptographic code such that the second new cryptographic code is used as a reference cryptographic code for future read requests.

7. The method of Claim 5, wherein the second reference and new cryptographic codes include a hash-based message authentication code (HMAC).

8. The method of Claim 5, wherein the software application comprises an operating system running at the computing device.

9. An apparatus comprising: first logic to receive a read request, from a software application at a computing device, to perform a read task relating to a first data block of data stored at a storage device coupled to the computing device, wherein the read task includes reading the first data block; second logic to access a first reference cryptographic code at a first metadata cache associated with the first data block; third logic to calculate a first new cryptographic code relating to the first data block; fourth logic to compare the first new cryptographic code with the first reference cryptographic code; and fifth logic to accept the read request if the first new cryptographic code matches the first reference cryptographic code, wherein accepting includes facilitating the read task.

10.
The apparatus of Claim 9, wherein the fifth logic is further to deny the read request if the first new cryptographic code mismatches the first reference cryptographic code, wherein denying includes issuing an error message in response to the read request, wherein if a data block containing the first reference cryptographic code is missing from the first metadata cache, the read request is submitted to facilitate the read task to the missing data block.

11. The apparatus of Claim 9, wherein the first reference and new cryptographic codes include a hash-based message authentication code (HMAC).

12. The apparatus of Claim 9, wherein the software application comprises an operating system running at the computing device.

13. The apparatus of Claim 9, wherein: the first logic is further to receive a write request, from the software application at a computing device, to perform a write task relating to a second data block, wherein the write task includes writing the second data block to the data stored at the storage device; the second logic is further to access a second reference cryptographic code at a second metadata cache associated with the second data block; the third logic is further to calculate a second new cryptographic code relating to the second data block; the fourth logic is further to replace the second reference cryptographic code by the second new cryptographic code in the second metadata cache; and the fifth logic is further to accept the write request, wherein accepting includes facilitating the write task.

14. The apparatus of Claim 13, wherein the second metadata cache is to maintain the second new cryptographic code such that the second new cryptographic code is used as a reference cryptographic code for future read requests.

15. The apparatus of Claim 13, wherein the second reference and new cryptographic codes include a hash-based message authentication code (HMAC).

16.
The apparatus of Claim 13, wherein the software application comprises an operating system running at the computing device.

17. A system comprising: a computing device having a memory to store instructions, and a processing device to execute the instructions, the computing device further having a mechanism to: receive a read request, from a software application at a computing device, to perform a read task relating to a first data block of data stored at a storage device coupled to the computing device, wherein the read task includes reading the first data block; access a first reference cryptographic code at a first metadata cache associated with the first data block; calculate a first new cryptographic code relating to the first data block; compare the first new cryptographic code with the first reference cryptographic code; and accept the read request if the first new cryptographic code matches the first reference cryptographic code, wherein accepting includes facilitating the read task.

18. The system of Claim 17, wherein the mechanism is further to deny the read request if the first new cryptographic code mismatches the first reference cryptographic code, wherein denying includes issuing an error message in response to the read request, wherein if a data block containing the first reference cryptographic code is missing from the first metadata cache, the read request is submitted to facilitate the read task to the missing data block.

19. The system of Claim 17, wherein the first reference and new cryptographic codes include a hash-based message authentication code (HMAC).

20. The system of Claim 17, wherein the software application comprises an operating system running at the computing device.

21.
The system of Claim 17, wherein the mechanism is further to: receive a write request, from the software application at a computing device, to perform a write task relating to a second data block, wherein the write task includes writing the second data block to the data stored at the storage device; access a second reference cryptographic code at a second metadata cache associated with the second data block; calculate a second new cryptographic code relating to the second data block; replace the second reference cryptographic code by the second new cryptographic code in the second metadata cache; and accept the write request, wherein accepting includes facilitating the write task.

22. The system of Claim 21, wherein the second metadata cache is to maintain the second new cryptographic code such that the second new cryptographic code is used as a reference cryptographic code for future read requests.

23. At least one machine-readable storage medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out a method according to one or more operations comprising: receive a read request, from a software application at a computing device, to perform a read task relating to a first data block of data stored at a storage device coupled to the computing device, wherein the read task includes reading the first data block; access a first reference cryptographic code at a first metadata cache associated with the first data block; calculate a first new cryptographic code relating to the first data block; compare the first new cryptographic code with the first reference cryptographic code; and accept the read request if the first new cryptographic code matches the first reference cryptographic code, wherein accepting includes facilitating the read task.

24.
The machine-readable storage medium of Claim 23, wherein the one or more operations further comprise: denying the read request if the first new cryptographic code mismatches the first reference cryptographic code, wherein denying includes issuing an error message in response to the read request, wherein if a data block containing the first reference cryptographic code is missing from the first metadata cache, the read request is submitted to facilitate the read task to the missing data block.

25. The machine-readable storage medium of Claim 23, wherein the first reference and new cryptographic codes include a hash-based message authentication code (HMAC).

26. The machine-readable storage medium of Claim 23, wherein the software application comprises an operating system running at the computing device.

27. The machine-readable storage medium of Claim 23, wherein the one or more operations further comprise: receive a write request, from the software application at a computing device, to perform a write task relating to a second data block, wherein the write task includes writing the second data block to the data stored at the storage device; access a second reference cryptographic code at a second metadata cache associated with the second data block; calculate a second new cryptographic code relating to the second data block; replace the second reference cryptographic code by the second new cryptographic code in the second metadata cache; and accept the write request, wherein accepting includes facilitating the write task.

28. The machine-readable storage medium of Claim 27, wherein the second metadata cache is to maintain the second new cryptographic code such that the second new cryptographic code is used as a reference cryptographic code for future read requests.

29. The machine-readable storage medium of Claim 27, wherein the second reference and new cryptographic codes include a hash-based message authentication code (HMAC).

30.
The machine-readable storage medium of Claim 27, wherein the software application comprises an operating system running at the computing device.
MECHANISM FOR FACILITATING ENCRYPTION-FREE INTEGRITY PROTECTION OF STORAGE DATA AT COMPUTING SYSTEMS

FIELD

Embodiments of the invention relate to security systems. More particularly, embodiments of the invention relate to a mechanism for facilitating encryption-free integrity protection of storage data at computing systems.

BACKGROUND

Data security is a well-known branch of computer security that provides security for storage data against theft, corruption, natural disasters, etc. However, conventional systems for providing data security are limited and inefficient. For example, many conventional systems rely on data encryption to simply hide the data, which can be intercepted and whose integrity can be attacked through, for example, offline modifications. Additionally, these conventional systems are resource-inefficient and power-consuming and yet do not provide protection for any unencrypted portions of storage data.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.

Figure 1 illustrates a mapping framework-based mechanism for encryption-free protection of storage data employed at a computing device according to one embodiment.

Figure 2 illustrates a mapping framework-based mechanism for encryption-free protection of storage data at computing devices according to one embodiment.

Figure 3 illustrates a transaction sequence for facilitating security and protection of integrity of storage data according to one embodiment.

Figure 4A illustrates a method for facilitating security and protection of integrity of storage data when processing a read operation according to one embodiment.

Figure 4B illustrates a method for facilitating security and protection of integrity of storage data when processing a write operation according to one embodiment.
Figure 5 illustrates a computer system suitable for implementing embodiments of the present disclosure according to one embodiment of the invention.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth. However, embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.

Embodiments facilitate providing security and maintaining integrity of storage data against any unintentional or intentional modifications, such as due to hacking attacks, thefts, corruption, natural or man-made disasters, etc. Embodiments may provide a mechanism for facilitating encryption-free integrity protection of storage data at computing systems, where the mechanism may be mapping framework-based or implemented as a block device driver, etc., which, for example, may help store a hash-based message authentication code (HMAC). For example, embodiments provide for device mapper-based security and integrity of storage data (e.g., data accessed by an operating system) against any modifications (e.g., offline modifications, etc.) through a secure internal calculation and verification mechanism (e.g., HMAC) without having to, for example, move or change the storage hardware, encrypt/decrypt storage data, etc. Facilitating security and integrity of storage data without having to encrypt it allows the storage data to remain unencrypted, thus facilitating easier access and recovery of data when necessitated. Moreover, not having to encrypt the storage data provides better resource efficiency and lower power consumption.

Data on a storage device may be organized in sectors of, e.g., 512 bytes, regarded as the minimal unit of read and write operations. Sectors are grouped into blocks, such that a 4K block may include 8 sectors.
It is contemplated that sector and/or block sizes may vary. Further, a file system typically operates with blocks and may issue read/write requests, such as a read request to read one or more blocks which, in the case of a single block, typically translates to, for example, reading all 8 sectors of a 4K data block. Regarding a storage device (SD), an entire storage may be divided into partitions, where each partition may consist of several blocks; for example, a storage device SDA may be divided into partitions, such as sda1, sda2, sda3 and so forth. These partitions may be formatted to different file systems, such as ext4, File Allocation Table (FAT), New Technology File System (NTFS) by Microsoft® Corporation, etc.; for example, sda1 may be an ext4 partition, while sda2 may be an NTFS partition. Further, file systems may call one or more block layers to read data blocks; for example, when a read request is issued, a file system may immediately call a block layer, while when a write request is issued, pages may be updated but may not immediately be written to a block device. Such pages may be referred to as page-cache. Page-cache may be written back to the block device during the write-back operation and, during that, the file system may send a write request to the block layers. Embodiments provide for a mechanism for facilitating encryption-free integrity protection of storage data that is implemented in a mapping framework as is described throughout this document, but it is contemplated that these embodiments are not limited to the mapping framework, such that a block device driver may be used instead to implement the mechanism. Figure 1 illustrates a mapping framework-based mechanism for encryption-free protection of storage data employed at a computing device according to one embodiment. 
Computing device 100 serves as a host machine to employ mapping framework-based mechanism for encryption-free protection of storage data ("integrity protection mechanism") 110 to facilitate security and integrity of data stored at one or more storage devices. In one embodiment, storage data protection mechanism 110 may be provided as a plugin module for a mapping framework 116 (also referred to as "device mapper") that is part of an operating system kernel 114 (e.g., Linux® kernel). Kernel 114 may serve as a component of operating system 106 for managing system resources, such as serving as a bridge by facilitating communication between software applications (e.g., provided via user space 112) and any data processing done at the hardware level (e.g., hardware storage device). Mapping framework 116 may serve as a framework or mapper to map one block device to another block device, or to map a block at one device to a block at the same or other device, serving as a foundation for enterprise volume management system (EVMS), logical volume manager (LVM), dm-crypt, etc. Computing device 100 may include mobile computing devices, such as cellular phones, including smartphones (e.g., iPhone® by Apple®, BlackBerry® by Research in Motion®, etc.), personal digital assistants (PDAs), etc., tablet computers (e.g., iPad® by Apple®, Galaxy Tab® by Samsung®, etc.), laptop computers (e.g., notebook, netbook, ultrabook™, etc.), e-readers (e.g., Kindle® by Amazon®, Nook® by Barnes and Nobles®, etc.), etc. Computing device 100 may further include set-top boxes (e.g., Internet-based cable television set-top boxes, etc.), televisions, in-vehicle entertainment systems, and larger computing devices, such as desktop computers, server computers, etc. Computing device 100 includes an operating system (OS) 106 serving as an interface between any hardware or physical resources of the computer device 100 and a user. 
Computing device 100 further includes one or more processors 102, memory devices 104, network devices, drivers, or the like, as well as input/output (I/O) sources 108, such as touchscreens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, etc. It is to be noted that terms like "computing device", "node", "computing node", "client", "host", "server", "memory server", "machine", "device", "computing device", "computer", "computing system", and the like, may be used interchangeably throughout this document. Figure 2 illustrates a mapping framework-based mechanism for encryption-free protection of storage data at computing devices according to one embodiment. In one embodiment, data integrity protection mechanism 110 includes a number of components, such as reception logic 202, submission logic 204, reference logic 206, calculation logic 208, comparison logic 210, update logic 212, and decision logic 214. As described previously with reference to Figure 1, data integrity protection mechanism 110 may reside as part of an operating system and maintain communication between any number and type of software applications 222 and any number and type of storage devices 232 associated with a computing device, such as computing device 100 of Figure 1. Throughout this document, the term "logic" may be interchangeably referred to as "processing logic", "component" or "module" and may include, by way of example, software, hardware, and/or any combination of software and hardware, such as firmware. In one embodiment, data integrity protection mechanism 110 provides for security and integrity protection of data at one or more storage devices 232 coupled with the computing device. The storage data may be accessed and used by the operating system and/or one or more software applications 222 running at the computing device. 
For example, the data may include software codes, relevant sections of software codes, and/or any other relevant information, such as metadata cache, that may be accessed and used by the software applications 222 and/or the operating system. Examples of storage device 232 may include any number and type of storage systems, such as random access memory (RAM), redundant array of independent disks (RAID), non-uniform memory access (NUMA) memory systems, storage area networks, processor registers, memory channels, magnetic disks, optical disks, magnetic tapes, network-attached storage (NAS) devices, network file systems, relational databases, or the like. In one embodiment, reception logic 202 of data integrity protection mechanism 110 may be used to receive a call or request for a task, such as data read or data write, from a software application 222. A read request refers to a read command (e.g., sys_read, such as a software application may use the read() library function, which then calls the system call sys_read) facilitated by an application 222 when attempting to read any portion (e.g., one or more data blocks) of the data at a storage device 232. In contrast, a write request refers to a write command (e.g., sys_write, such as a software application may use the write() library function, which then calls the system call sys_write) to further write one or more data blocks to the existing data at the storage device 232, including modifying the data, such as a software administrator or programmer changing the existing storage data. Submission logic 204 then submits or transmits the received request so that the requested task (e.g., read, write, etc.) can be performed on one or more relevant data blocks of the storage data, such as to read or write one or more data blocks as set forth in the received request. 
In one embodiment, reference logic 206 accesses the integrity metadata cache associated with the relevant data blocks to determine whether any of the data blocks have a corresponding HMAC stored in the metadata cache. The integrity metadata cache may be organized as a collection of integrity metadata blocks, where each block is intended to hold several integrity metadata records, such as HMACs. Integrity metadata blocks may be stored on the same partition (block device) where the real data blocks reside, or can be stored on a dedicated partition residing on the same or a different storage device. If an integrity metadata block containing a reference HMAC is missing from the cache, it may be read from the appropriate location. Meanwhile, calculation logic 208 uses a cryptographic key to calculate a new HMAC for each of the relevant data blocks. It is contemplated that the use of cryptographic keys to calculate HMACs is provided as an example and that embodiments of the invention are not limited to any particular processes or techniques. The cryptographic key may be received using any one or more of known processes, such as at initialization, at boot up, or may be supplied at any point during the process using any number of known methods. Further, using a combination of the cryptographic key and any of the known cryptographic functions (e.g., message digest algorithm 5 (MD5), secure hash algorithm 1 (SHA-1), etc.), an HMAC may be calculated for each relevant data block, such as HMAC-MD5, HMAC-SHA1, etc. For example, an HMAC may be calculated using the following: HMAC(K, m) = H((K ⊕ opad) || H((K ⊕ ipad) || m)), where H refers to a cryptographic hash function, K refers to a secret key padded to the right with extra zeros to the input block size of the hash function, or the hash of the original key if it is longer than that block size, m refers to the message (a data block, contents of the request, etc.) 
to be authenticated, || denotes concatenation, ⊕ denotes exclusive or (XOR), opad refers to an outer padding (e.g., 0x5c5c5c...5c5c, a one-block-long hexadecimal constant), and ipad refers to an inner padding (e.g., 0x363636...3636, a one-block-long hexadecimal constant). For example, if a block size is 4K and the HMAC-SHA256 size is 32 bytes, then a single block may have the ability to hold 128 HMACs. An integrity metadata cache may include integrity metadata (e.g., one or more HMACs, etc.) for a particular data block. For example, one 4K block might contain 128 HMAC-SHA256 records. The integrity metadata cache may refer to a collection of blocks (e.g., 4K blocks, etc.) containing the integrity metadata (HMACs) that is available in random access memory. When the request is a read request, in one embodiment, regardless of the HMAC calculation method employed by calculation logic 208, once the new HMAC is calculated for each data block, it is then compared with the reference HMAC for that data block by comparison logic 210. Upon comparison, a decision is made by decision logic 214 as to whether the request is to be granted or denied. For example, upon successful comparison, such as when there is a match between the newly-calculated HMAC and the referenced HMAC, the read request is granted and forwarded on to the upper layers so that the requested task may be performed, such as reading the relevant data blocks. In contrast, upon unsuccessful comparison, such as when there is a mismatch between the calculated HMAC and the referenced HMAC, the read request is denied and an error message is issued and forwarded on to the upper layers and the requested task is not performed, such that the relevant data blocks are discarded and not read. 
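As an illustration only, the HMAC formula above can be transcribed directly for H = SHA-256, and the 128-records-per-block arithmetic checked alongside it. The helper name hmac_sha256 and the sample key are hypothetical; a real implementation would use a vetted library routine such as Python's hmac module, against which the transcription is checked here.

```python
import hashlib
import hmac

def hmac_sha256(key, message):
    """HMAC(K, m) = H((K xor opad) || H((K xor ipad) || m)) with H = SHA-256."""
    block_size = 64                         # SHA-256 input block size in bytes
    if len(key) > block_size:               # hash over-long keys first
        key = hashlib.sha256(key).digest()
    key = key.ljust(block_size, b"\x00")    # pad key to the right with zeros
    opad = bytes(b ^ 0x5C for b in key)     # outer padding constant 0x5c5c...5c
    ipad = bytes(b ^ 0x36 for b in key)     # inner padding constant 0x3636...36
    inner = hashlib.sha256(ipad + message).digest()
    return hashlib.sha256(opad + inner).digest()

# The transcription agrees with the standard library routine.
key, block = b"secret-key", b"\xAB" * 4096
assert hmac_sha256(key, block) == hmac.new(key, block, hashlib.sha256).digest()

# A 4K metadata block holds 4096 / 32 = 128 HMAC-SHA256 records, so the
# reference HMAC for data block n lives at metadata block n // 128,
# record n % 128 (byte offset (n % 128) * 32).
print(divmod(130, 128))   # (1, 2): metadata block 1, record 2
```

The XOR-with-opad/ipad structure is why the key is first padded (or hashed down) to exactly one hash input block, as the formula's definition of K states.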
When the request is a write request, in one embodiment, regardless of the HMAC calculation method used by calculation logic 208, once the new HMAC is calculated for each data block that is to be written to the storage data, the corresponding reference HMAC for each data block is updated with or substituted by the newly-calculated HMAC for that data block using update logic 212. These newly-placed HMACs in the integrity metadata cache may then be referenced and compared in response to any future read requests. It is contemplated that any number and type of components may be added to and/or removed from data integrity protection mechanism 110 to facilitate various embodiments of the invention including adding, removing, and/or enhancing certain features. For brevity, clarity, and ease of understanding of data integrity protection mechanism 110, many of the standard and/or known components, such as those of a computing device, are not shown or discussed here. It is contemplated that embodiments of the invention are not limited to any particular technology, topology, system, architecture, and/or standard and are dynamic enough to adopt and adapt to any future changes. Figure 3 illustrates a transaction sequence for facilitating security and integrity protection of storage data according to one embodiment. Transaction sequence 300 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc.), software (such as instructions run on a processing device), or a combination thereof. In one embodiment, transaction sequence 300 may be performed by data integrity protection mechanism 110 of Figure 1. Transaction sequence 300 begins with a software application 222 at user space 112 facilitating a call for a transaction (e.g., a read transaction (e.g., sys_read), a write transaction (e.g., sys_write), etc.). 
The request is then passed on to a virtual file system (VFS) 304 through a system call interface 302 at a kernel 114 of an operating system by having the system call interface 302 call on the VFS layer 304 (e.g., vfs_read, vfs_write, etc.). The VFS layer 304 calls on a file system layer 306 to read the request and forward it on to a block device layer 308. The file system layer 306 submits the request (e.g., block I/O read request) to the block device layer 308 which then queues the request to be received at a mapping framework or device mapper 116. In one embodiment, the mapping framework 116 may forward the request to or map the request through data integrity protection mechanism 110 that may be provided as a plugin module for the mapping framework 116. In one embodiment, data integrity protection mechanism 110 processes the request and submits it to a block device layer 310. The block device layer 310, in one embodiment, may include a data block device and a corresponding integrity block device. The data block device may include real storage data, while the integrity block device may include relevant integrity metadata (e.g., reference HMACs). In one embodiment, as aforementioned, in case of a read request, data integrity protection mechanism 110 may calculate an HMAC and compare it with a reference HMAC, while in case of a write request, it may calculate an HMAC and store it in the integrity block device to be used as a reference HMAC in the future. The block device layer 310 then queues the request to a block device driver 312 so the requested task as set forth in the request may be performed. The block device driver 312 then performs the requested task relating to one or more data blocks of the storage data at the relevant hardware 320, such as storage devices 232 including sda and/or sdb. Figure 4A illustrates a method for facilitating security and integrity protection of storage data when processing a read operation according to one embodiment. 
Method 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc.), software (such as instructions run on a processing device), or a combination thereof. In one embodiment, method 400 may be performed by data integrity protection mechanism 110 of Figure 1. Method 400 begins at block 405 with a request being received from a software application (e.g., operating system or any other software application) running at a computing system to access and read a data block of data stored at a storage device that is in communication with the computing system. It is contemplated that there is no limitation imposed as to how many or what type of data blocks may be read. In other words, a request may be received to read any number and type of data blocks. At block 410, the read request is submitted for processing so that the requested data block may be read. At block 415, an integrity metadata cache associated with the data block is accessed to access a reference HMAC stored at the integrity metadata cache, where the reference HMAC corresponds to or references the requested data block. If a data block containing the HMAC is missing or does not exist in the integrity metadata cache, reference logic 206 of Figure 2 may submit a read request to the relevant block layer to read the data block from the block device. At block 420, using a cryptographic key and any one or more of the known calculation processes, a new HMAC corresponding to the requested data block is calculated. At block 425, in one embodiment, the newly-calculated HMAC is compared with the reference HMAC obtained from the integrity metadata cache. At decision block 430, a determination is made as to whether the calculated HMAC matches the reference HMAC. 
If they match, at block 435, the process continues by granting the request and forwarding it through the upper layers on to the software application (at user space) facilitating the read request, and the requested data block is read. If there is no match, at block 440, the process is terminated and an error message is issued and forwarded on to the upper layers. Figure 4B illustrates a method for facilitating security and integrity protection of storage data when processing a write operation according to one embodiment. Method 450 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc.), software (such as instructions run on a processing device), or a combination thereof. In one embodiment, method 450 may be performed by data integrity protection mechanism 110 of Figure 1. Method 450 begins at block 455 with a request being received from a software application (e.g., operating system or any other software application) running at a computing system to write a data block to the existing data at a storage device that is in communication with the computing system. It is contemplated that there is no limitation imposed as to how many or what type of data blocks may be written. In other words, a request may be received to write any number and type of data blocks. At block 460, the write request is submitted for processing so that the data block may be written. At block 465, an integrity metadata cache associated with the data block is accessed to access a reference HMAC stored at the integrity metadata cache, where the reference HMAC corresponds to or references the data block. At block 470, using a cryptographic key and any one or more of the known calculation processes, a new HMAC corresponding to the requested data block is calculated. At block 475, in one embodiment, the integrity metadata cache is updated by replacing the existing reference HMAC with the newly-calculated HMAC. 
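The read path (Method 400) and write path (Method 450) above can be sketched together as follows. This is an illustrative sketch only: the class name, the dict-based in-memory metadata cache, and the use of HMAC-SHA256 are assumptions, not the claimed implementation, which operates at the block layer.

```python
import hashlib
import hmac

class IntegrityChecker:
    """Illustrative sketch of Methods 400 (read) and 450 (write)."""

    def __init__(self, key):
        self.key = key
        self.metadata_cache = {}   # data block number -> reference HMAC

    def _hmac(self, data):
        return hmac.new(self.key, data, hashlib.sha256).digest()

    def read_block(self, block_no, data):
        """Blocks 415-440: fetch reference HMAC, recalculate, compare."""
        reference = self.metadata_cache.get(block_no)       # block 415
        new = self._hmac(data)                              # block 420
        if reference is not None and hmac.compare_digest(new, reference):
            return data                                     # block 435: grant
        raise IOError("integrity check failed")             # block 440: error

    def write_block(self, block_no, data):
        """Blocks 465-480: recalculate and replace the reference HMAC."""
        self.metadata_cache[block_no] = self._hmac(data)    # block 475
        return True                                         # block 480: grant

checker = IntegrityChecker(b"k" * 32)
checker.write_block(7, b"payload")
print(checker.read_block(7, b"payload"))   # read granted: b'payload'
# checker.read_block(7, b"tampered")       # would raise IOError (block 440)
```

Note that hmac.compare_digest is used for the comparison at block 425 to avoid timing side channels; the constant-time comparison is a design choice of this sketch, not something the embodiments specify.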
Further, update logic 212 of Figure 2 may update the HMAC, but may not immediately send the write request to write an integrity block to the block device. At block 480, the process continues by granting the write request where the data block is written to the storage data. At block 485, a confirmation of the written data block is sent to the upper layers. Figure 5 illustrates an embodiment of a computing system 500. Computing system 500 represents a range of computing and electronic devices (wired or wireless) including, for example, desktop computing systems, laptop computing systems, cellular telephones, personal digital assistants (PDAs) including cellular-enabled PDAs, set top boxes, smartphones, tablets, etc. Alternate computing systems may include more, fewer and/or different components. Computing system 500 includes bus 505 (or a link, an interconnect, or another type of communication device or interface to communicate information) and processor 510 coupled to bus 505 that may process information. While computing system 500 is illustrated with a single processor, computing system 500 may include multiple processors and/or co-processors, such as one or more of central processors, graphics processors, and physics processors, etc. Computing system 500 may further include random access memory (RAM) or other dynamic storage device 520 (referred to as main memory), coupled to bus 505, that may store information and instructions that may be executed by processor 510. Main memory 520 may also be used to store temporary variables or other intermediate information during execution of instructions by processor 510. Computing system 500 may also include read only memory (ROM) and/or other storage device 530 coupled to bus 505 that may store static information and instructions for processor 510. Data storage device 540 may be coupled to bus 505 to store information and instructions. 
Data storage device 540, such as a magnetic disk or optical disc and corresponding drive, may be coupled to computing system 500. Computing system 500 may also be coupled via bus 505 to display device 550, such as a cathode ray tube (CRT), liquid crystal display (LCD) or Organic Light Emitting Diode (OLED) array, to display information to a user. User input device 560, including alphanumeric and other keys, may be coupled to bus 505 to communicate information and command selections to processor 510. Another type of user input device 560 is cursor control 570, such as a mouse, a trackball, or cursor direction keys to communicate direction information and command selections to processor 510 and to control cursor movement on display 550. Camera and microphone arrays 590 of computer system 500 may be coupled to bus 505 to observe gestures, record audio and video and to receive and transmit visual and audio commands. Computing system 500 may further include network interface(s) 580 to provide access to a network, such as a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a personal area network (PAN), Bluetooth, a cloud network, a mobile network (e.g., 3rd Generation (3G), etc.), an intranet, the Internet, etc. Network interface(s) 580 may include, for example, a wireless network interface having antenna 585, which may represent one or more antenna(e). Network interface(s) 580 may also include, for example, a wired network interface to communicate with remote devices via network cable 587, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable. Network interface(s) 580 may provide access to a LAN, for example, by conforming to IEEE 802.11b and/or IEEE 802.11g standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. 
Other wireless network interfaces and/or protocols, including previous and subsequent versions of the standards, may also be supported. In addition to, or instead of, communication via the wireless LAN standards, network interface(s) 580 may provide wireless communication using, for example, Time Division Multiple Access (TDMA) protocols, Global System for Mobile Communications (GSM) protocols, Code Division Multiple Access (CDMA) protocols, and/or any other type of wireless communications protocols. Network interface(s) 580 may include one or more communication interfaces, such as a modem, a network interface card, or other well-known interface devices, such as those used for coupling to the Ethernet, token ring, or other types of physical wired or wireless attachments for purposes of providing a communication link to support a LAN or a WAN, for example. In this manner, the computer system may also be coupled to a number of peripheral devices, clients, control surfaces, consoles, or servers via a conventional network infrastructure, including an Intranet or the Internet, for example. It is to be appreciated that a lesser or more equipped system than the example described above may be preferred for certain implementations. Therefore, the configuration of computing system 500 may vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances. 
Examples of the electronic device or computer system 500 may include without limitation a mobile device, a personal digital assistant, a mobile computing device, a smartphone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, consumer electronics, programmable consumer electronics, television, digital television, set top box, wireless access point, base station, subscriber station, mobile subscriber center, radio network controller, router, hub, gateway, bridge, switch, machine, or combinations thereof. Embodiments may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a parentboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). The term "logic" may include, by way of example, software or hardware and/or combinations of software and hardware. Embodiments may be provided, for example, as a computer program product which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments of the present invention. 
A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc-Read Only Memories), and magneto-optical disks, ROMs, RAMs, EPROMs (Erasable Programmable Read Only Memories), EEPROMs (Electrically Erasable Programmable Read Only Memories), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing machine-executable instructions. Moreover, embodiments may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more data signals embodied in and/or modulated by a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection). Accordingly, as used herein, a machine-readable medium may, but is not required to, comprise such a carrier wave. References to "one embodiment", "an embodiment", "example embodiment", "various embodiments", etc., indicate that the embodiment(s) of the invention so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments. In the following description and claims, the term "coupled" along with its derivatives, may be used. "Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not have intervening physical or electrical components between them. 
As used in the claims, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common element, merely indicates that different instances of like elements are being referred to, and is not intended to imply that the elements so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. The following clauses and/or examples pertain to further embodiments or examples. Specifics in the examples may be used anywhere in one or more embodiments. The various features of the different embodiments or examples may be variously combined with some features included and others excluded to suit a variety of different applications. Some embodiments pertain to a method comprising: receiving a read request, from a software application at a computing device, to perform a read task relating to a first data block of data stored at a storage device coupled to the computing device, wherein the read task includes reading the first data block; accessing a first reference cryptographic code at a first metadata cache associated with the first data block; calculating a first new cryptographic code relating to the first data block; comparing the first new cryptographic code with the first reference cryptographic code; and accepting the read request if the first new cryptographic code matches the first reference cryptographic code, wherein accepting includes facilitating the read task. Embodiments or examples include any of the above methods further comprising denying the read request if the first new cryptographic code mismatches the first reference cryptographic code, wherein denying includes issuing an error message in response to the read request, wherein if a data block containing the first reference cryptographic code is missing from the first metadata cache, the read request is submitted to facilitate the read task to the missing data block. 
Embodiments or examples include any of the above methods wherein the first reference and new cryptographic codes include a hash-based message authentication code (HMAC). Embodiments or examples include any of the above methods wherein the software application comprises an operating system running at the computing device. Embodiments or examples include any of the above methods further comprising receiving a write request, from the software application at a computing device, to perform a write task relating to a second data block, wherein the write task includes writing the second data block to the data stored at the storage device; accessing a second reference cryptographic code at a second metadata cache associated with the second data block; calculating a second new cryptographic code relating to the second data block; replacing the second reference cryptographic code by the second new cryptographic code in the second metadata cache; and accepting the write request, wherein accepting includes facilitating the write task. Embodiments or examples include any of the above methods wherein the second metadata cache is to maintain the second new cryptographic code such that the second new cryptographic code is used as a reference cryptographic code for future read requests. Embodiments or examples include any of the above methods wherein the second reference and new cryptographic codes include a hash-based message authentication code (HMAC). Embodiments or examples include any of the above methods wherein the software application comprises an operating system running at the computing device. 
In another embodiment or example, an apparatus comprises: first logic to receive a read request, from a software application at a computing device, to perform a read task relating to a first data block of data stored at a storage device coupled to the computing device, wherein the read task includes reading the first data block; second logic to access a first reference cryptographic code at a first metadata cache associated with the first data block; third logic to calculate a first new cryptographic code relating to the first data block; fourth logic to compare the first new cryptographic code with the first reference cryptographic code; and fifth logic to accept the read request if the first new cryptographic code matches the first reference cryptographic code, wherein accepting includes facilitating the read task. Embodiments or examples include the apparatus above wherein the fifth logic is further to deny the read request if the first new cryptographic code mismatches the first reference cryptographic code, wherein denying includes issuing an error message in response to the read request, wherein if a data block containing the first reference cryptographic code is missing from the first metadata cache, the read request is submitted to facilitate the read task to the missing data block. Embodiments or examples include the apparatus above wherein the first reference and new cryptographic codes include a hash-based message authentication code (HMAC). Embodiments or examples include the apparatus above wherein the software application comprises an operating system running at the computing device. 
Embodiments or examples include the apparatus above wherein the first logic is further to receive a write request, from the software application at a computing device, to perform a write task relating to a second data block, wherein the write task includes writing the second data block to the data stored at the storage device; the second logic is further to access a second reference cryptographic code at a second metadata cache associated with the second data block; the third logic is further to calculate a second new cryptographic code relating to the second data block; the fourth logic is further to replace the second reference cryptographic code by the second new cryptographic code in the second metadata cache; and the fifth logic is further to accept the write request, wherein accepting includes facilitating the write task. Embodiments or examples include the apparatus above wherein the second metadata cache is to maintain the second new cryptographic code such that the second new cryptographic code is used as a reference cryptographic code for future read requests. Embodiments or examples include the apparatus above wherein the second reference and new cryptographic codes include a hash-based message authentication code (HMAC). Embodiments or examples include the apparatus above wherein the software application comprises an operating system running at the computing device. 
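The read-path check recited in the embodiments above can be sketched in a few lines. This is a minimal illustration, not the claimed implementation: the choice of SHA-256, the key handling, the function name, and the dictionary-based metadata cache are all assumptions made for the sketch.

```python
import hmac
import hashlib

def verify_read(block: bytes, key: bytes, metadata_cache: dict, block_id: str) -> bool:
    """Accept a read only if a freshly computed HMAC matches the cached reference."""
    reference = metadata_cache.get(block_id)
    if reference is None:
        # Per the text, the read proceeds when the reference code is missing
        # from the metadata cache.
        return True
    new_code = hmac.new(key, block, hashlib.sha256).digest()
    return hmac.compare_digest(new_code, reference)

key = b"example-key"
cache = {"blk0": hmac.new(key, b"payload", hashlib.sha256).digest()}
assert verify_read(b"payload", key, cache, "blk0")       # codes match: accept
assert not verify_read(b"tampered", key, cache, "blk0")  # mismatch: deny
```

On the write path described above, the same cache entry would instead be replaced with the newly computed code so that it serves as the reference for future reads.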
In another embodiment or example, a system comprises: a computing device having a memory to store instructions, and a processing device to execute the instructions, the computing device further having a mechanism to: receive a read request, from a software application at a computing device, to perform a read task relating to a first data block of data stored at a storage device coupled to the computing device, wherein the read task includes reading the first data block; access a first reference cryptographic code at a first metadata cache associated with the first data block; calculate a first new cryptographic code relating to the first data block; compare the first new cryptographic code with the first reference cryptographic code; and accept the read request if the first new cryptographic code matches the first reference cryptographic code, wherein accepting includes facilitating the read task. Embodiments or examples include the system above wherein the mechanism is further to deny the read request if the first new cryptographic code mismatches the first reference cryptographic code, wherein denying includes issuing an error message in response to the read request, wherein if a data block containing the first reference cryptographic code is missing from the first metadata cache, the read request is submitted to facilitate the read task to the missing data block. Embodiments or examples include the system above wherein the first reference and new cryptographic codes include a hash-based message authentication code (HMAC). Embodiments or examples include the system above wherein the software application comprises an operating system running at the computing device. 
Embodiments or examples include the system above wherein the mechanism is further to: receive a write request, from the software application at a computing device, to perform a write task relating to a second data block, wherein the write task includes writing the second data block to the data stored at the storage device; access a second reference cryptographic code at a second metadata cache associated with the second data block; calculate a second new cryptographic code relating to the second data block; replace the second reference cryptographic code by the second new cryptographic code in the second metadata cache; and accept the write request, wherein accepting includes facilitating the write task. Embodiments or examples include the system above wherein the second metadata cache is to maintain the second new cryptographic code such that the second new cryptographic code is used as a reference cryptographic code for future read requests. Embodiments or examples include the system above wherein the second reference and new cryptographic codes include a hash-based message authentication code (HMAC). Embodiments or examples include the system above wherein the software application comprises an operating system running at the computing device. In another embodiment or example, an apparatus comprises means for performing any one or more of the operations mentioned above. In yet another embodiment or example, at least one machine-readable medium comprises a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out a method according to any one or more of the operations mentioned above. 
In yet another embodiment or example, at least one non-transitory or tangible machine-readable medium comprises a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out a method according to any one or more of the operations mentioned above. In yet another embodiment or example, a computing device is arranged to perform a method according to any one or more of the operations mentioned above. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
3D memory devices are disclosed, such as those that include multiple two-dimensional tiers (60, 64) of memory cells. Each tier may be fully or partially formed over a previous tier to form a memory device having two or more tiers. Each tier may include strings of memory cells (78, 70) where each of the strings is coupled between a source select gate (82, 74) and a drain select gate (72, 84) such that each tier is decoded using the source/drain select gates. Additionally, the device can include a wordline decoder (86, 88) for each tier that is only coupled to the wordlines for that tier.
CLAIMS 1. A memory array, comprising: a first tier comprising a first plurality of memory cells, a first plurality of source select gates, and a first plurality of drain select gates, wherein a string of memory cells of the first tier comprises one or more of the first plurality of memory cells disposed between one of the first plurality of source select gates and one of the first plurality of drain select gates; a second tier formed at least partially over the first tier and comprising a second plurality of memory cells, a second plurality of source select gates, and a second plurality of drain select gates, wherein a string of memory cells of the second tier comprises one or more of the second plurality of memory cells disposed between one of the second plurality of source select gates and one of the second plurality of drain select gates; and a contact extending at least partially through the first tier and the second tier, the contact coupling the first plurality of source select gates to the second plurality of source select gates and to a common source line. 2. The memory array of claim 1, wherein the first tier comprises a first plurality of digitlines, wherein each of the first plurality of drain select gates is coupled to a respective one of the first plurality of digitlines, and wherein each of the first plurality of digitlines is coupled to a respective one of a second plurality of contacts extending at least partially through the first tier and the second tier. 3. The memory array of claim 2, wherein the second tier comprises a second plurality of digitlines, wherein each of the second plurality of drain select gates is coupled to a respective one of the second plurality of digitlines, and wherein each of the second plurality of digitlines is coupled to a respective one of the second plurality of contacts. 4. The memory array of claim 3, wherein the first plurality of digitlines and the second plurality of digitlines are coupled to a page buffer. 5. 
The memory array of claim 1, wherein the first plurality of source select gates and the second plurality of source select gates are coupled to a common source line driver via the common source line. 6. The memory array of claim 1, wherein the first tier and the second tier are coupled to a well contact that extends at least partially through the first tier and the second tier. 7. The memory array of claim 6, wherein the well contact is coupled to a well driver. 8. The memory array of claim 1, further comprising a third tier at least partially over the second tier and comprising a third plurality of memory cells, a third plurality of source select gates, and a third plurality of drain select gates, wherein a string of memory cells of the third tier comprises one or more of the third plurality of memory cells disposed between one of the third plurality of source select gates and one of the third plurality of drain select gates; wherein the contact also couples the third plurality of source select gates to the common source line. 9. The memory array of claim 8, wherein the contact further extends at least partially through the first tier, the second tier, and the third tier. 10. The memory array of claim 8, wherein the third tier comprises a third plurality of digitlines, wherein each of the third plurality of drain select gates is coupled to a respective one of the third plurality of digitlines, and wherein each of the third plurality of digitlines is coupled to a respective one of a second plurality of contacts extending at least partially through the first tier, the second tier, and the third tier. 11. 
A memory array, comprising: a first tier comprising a first plurality of memory cells, a first plurality of source select gates, and a first plurality of drain select gates; a second tier at least partially over the first tier and comprising a second plurality of memory cells, a second plurality of source select gates, and a second plurality of drain select gates; a first decoder; a first plurality of access lines coupled to the first plurality of memory cells of the first tier and to the first decoder; a second decoder; and a second plurality of access lines coupled to the second plurality of memory cells of the second tier and to the second decoder. 12. The memory array of claim 11, wherein the first decoder only decodes the first plurality of access lines. 13. The memory array of claim 11, wherein the second decoder only decodes the second plurality of access lines. 14. The memory array of claim 11, wherein each of the first plurality of memory cells comprises a floating gate and a control gate. 15. The memory array of claim 11, wherein each of the second plurality of memory cells comprises a floating gate and a control gate. 16. The memory array of claim 11, wherein the first plurality of source select gates and the first plurality of drain select gates comprise field effect transistors. 17. The memory array of claim 11, wherein the second plurality of source select gates and the second plurality of drain select gates comprise field effect transistors. 18. The memory array of claim 11, comprising a third tier at least partially over the second tier and comprising a third plurality of memory cells, a third plurality of source select gates, and a third plurality of drain select gates. 19. The memory array of claim 18, comprising a third decoder and a third plurality of access lines coupled to the third tier and the third decoder. 20. The memory array of claim 19, wherein the third decoder only decodes the third plurality of access lines. 21. 
The memory array of claim 11, comprising a contact coupled to each of the first plurality of source select gates and each of the second plurality of source select gates. 22. The memory array of claim 21, comprising a second contact coupled to each of the first plurality of drain select gates and each of the second plurality of drain select gates. 23. A system, comprising: a memory controller; a memory device coupled to the memory controller and comprising: a memory array comprising: a first tier comprising a first plurality of memory cells, wherein the first tier is accessed via a first plurality of source select gates of the first tier, wherein the first plurality of source select gates are coupled to a contact extending at least partially through the first tier and a second tier; the second tier comprising a second plurality of memory cells, wherein the second tier is accessed via a second plurality of source select gates of the second tier, wherein the second plurality of source select gates are coupled to the contact. 24. The system of claim 23, wherein the first tier is further accessed via a first plurality of drain select gates, wherein each of the first plurality of drain select gates is coupled to a respective one of a second plurality of contacts extending at least partially through the first tier and the second tier. 25. The system of claim 24, wherein the second tier is further accessed via a second plurality of drain select gates, wherein each of the second plurality of drain select gates is coupled to a respective one of the second plurality of contacts. 26. The system of claim 23, comprising a processor coupled to the memory controller and the memory device. 27. 
A memory device, comprising: a first tier comprising a first plurality of rows of memory cells, wherein each row of memory cells is coupled to one of a first plurality of wordlines; a second tier at least partially over the first tier and comprising a second plurality of rows of memory cells, wherein each row of memory cells is coupled to one of a second plurality of wordlines; a first decoder configured to decode the first plurality of wordlines; and a second decoder configured to decode the second plurality of wordlines. 28. The device of claim 27, wherein the first tier is accessed via a first plurality of source select gates and a first plurality of drain select gates. 29. The device of claim 28, wherein the second tier is accessed via a second plurality of source select gates and a second plurality of drain select gates. 30. A method of operating a memory array, comprising: accessing a first tier of memory cells via a first plurality of source select gates and a first plurality of drain select gates; and accessing a second tier of memory cells via a second plurality of source select gates and a second plurality of drain select gates. 31. The method of claim 30, comprising decoding a first plurality of wordlines of the first tier via a first decoder and decoding a second plurality of wordlines of the second tier via a second decoder.
3D MEMORY DEVICES DECODING AND ROUTING SYSTEMS AND METHODS BACKGROUND Field of Invention [0001] Embodiments of the invention relate generally to memory devices and, specifically, to non-volatile memory array architectures. Description of Related Art [0001] Electronic systems, such as computers, personal organizers, cell phones, portable audio players, etc., typically include one or more memory devices to provide storage capability for the system. System memory is generally provided in the form of one or more integrated circuit chips and generally includes both random access memory (RAM) and read-only memory (ROM). System RAM is typically large and volatile and provides the system's main memory. Static RAM and Dynamic RAM are commonly employed types of random access memory. In contrast, system ROM is generally small and includes nonvolatile memory for storing initialization routines and identification information. Nonvolatile memory may also be used for caching or general data storage. Electrically-erasable read-only memory (EEPROM) is one commonly employed type of read-only memory, wherein an electrical charge may be used to program data in the memory. [0002] One type of non-volatile memory that is of particular use is a flash memory. A flash memory is a type of EEPROM that can be erased and reprogrammed in blocks. Flash memory is often employed in personal computer systems in order to store the Basic Input Output System (BIOS) program such that it can be easily updated. Flash memory is also employed in portable electronic devices, such as wireless devices, because of the size, durability, and power requirements of flash memory implementations. Various types of flash memory may exist, depending on the arrangement of the individual memory cells and the requirements of the system or device incorporating the flash memory. For example, NAND flash memory is a common type of flash memory device. 
[0003] In some architectures, flash memory stores information in an array of floating gate transistors, called "cells", each of which traditionally stores one bit of information that is represented as a "0" or a "1". In other architectures, each cell may store more or fewer digits of information, such as in multi-level cell (MLC) flash or when a state of a cell may be used to represent a non-integer value. The memory device often includes a grid-like arrangement of the cells. Each of the cells in the grid consumes a given amount of area and is spaced from one another by a generally uniform distance (e.g., pitch). Accordingly, the size and the pitch of the cells directly contribute to the overall size of the memory device. This becomes more evident as the number of cells and associated storage capacity of memory devices increase. [0004] As technology continues to advance, it is often desirable that memory devices decrease in size. Smaller memory devices can be employed in smaller spaces and/or can increase storage capacity in a limited area or volume. One technique for reducing the memory device size may include stacking memory cells in a vertical arrangement (creating a "3D" architecture). As the cells and associated transistors are scaled and densities of such devices increase, manufacture and functionality of such devices may introduce challenges with respect to contacts and signaling for the cells of this 3D architecture. BRIEF DESCRIPTION OF DRAWINGS [0005] FIG. 1 illustrates a block diagram of an embodiment of a processor-based device having a memory that includes memory devices in accordance with embodiments of the present invention; [0006] FIG. 2 illustrates a block diagram of an embodiment of a flash memory device having a memory array in accordance with embodiments of the present invention; [0007] FIG. 3 is a schematic diagram of a 3D array having two tiers in accordance with an embodiment of the present invention; [0008] FIG. 
4 is a schematic diagram of a 3D array having three tiers in accordance with an embodiment of the present invention; [0009] FIG. 5 is a cross-sectional diagram of the 3D array of FIG. 4 in accordance with an embodiment of the present invention; [0010] FIG. 6 is a cross-sectional view of the digitlines of a 3D array in accordance with an embodiment of the present invention. DETAILED DESCRIPTION [0002] FIG. 1 is a block diagram depicting an embodiment of a processor-based system, generally designated by reference numeral 10. The system 10 may be any of a variety of types such as a computer, pager, cellular phone, personal organizer, portable audio player, control circuit, camera, etc. In a typical processor-based device, a processor 12, such as a microprocessor, controls the processing of system functions and requests in the system 10. Further, the processor 12 may comprise a plurality of processors that share system control. [0003] The system 10 typically includes a power supply 14. For instance, if the system 10 is a portable system, the power supply 14 may advantageously include permanent batteries, replaceable batteries, and/or rechargeable batteries. The power supply 14 may also include an AC adapter, so the system 10 may be plugged into a wall outlet, for instance. The power supply 14 may also include a DC adapter such that the system 10 may be plugged into a vehicle cigarette lighter, for instance. [0004] Various other devices may be coupled to the processor 12, depending on the functions that the system 10 performs. For instance, an input device 16 may be coupled to the processor 12. The input device 16 may include buttons, switches, a keyboard, a light pen, a stylus, a mouse, and/or a voice recognition system, for instance. A display 18 may also be coupled to the processor 12. The display 18 may include an LCD, a CRT, LEDs, and/or an audio display, for example. [0005] Furthermore, an RF sub-system/baseband processor 20 may also be coupled to the processor 12. 
The RF sub-system/baseband processor 20 may include an antenna that is coupled to an RF receiver and to an RF transmitter (not shown). A communications port 22 may also be coupled to the processor 12. The communications port 22 may be adapted to be coupled to one or more peripheral devices 24 such as a modem, a printer, a computer, or to a network, such as a local area network, remote area network, intranet, or the Internet, for instance. [0006] Generally, the memory is coupled to the processor 12 to store and facilitate execution of various programs. For instance, the processor 12 may be coupled to system memory 26 through a controller 28. The system memory 26 may include volatile memory, such as Dynamic Random Access Memory (DRAM) and/or Static Random Access Memory (SRAM). The system memory 26 may also include non-volatile memory, such as read-only memory (ROM), PC-RAM, silicon-oxide-nitride-oxide-silicon (SONOS) memory, metal-oxide-nitride-oxide-silicon (MONOS) memory, polysilicon floating gate based memory, and/or other types of flash memory of various architectures (e.g., NAND memory, NOR memory, etc.) to be used in conjunction with the volatile memory. [0007] As described further below, the system memory 26 may include one or more memory devices, such as flash memory devices, that may be fabricated and operated in accordance with embodiments of the present invention. Such devices may be referred to as or include solid state drives (SSDs), MultiMediaCards (MMCs), SecureDigital (SD) cards, CompactFlash (CF) cards, or any other suitable device. Further, it should be appreciated that such devices may couple to the system 10 via any suitable interface, such as Universal Serial Bus (USB), Peripheral Component Interconnect (PCI), PCI Express (PCI-E), Small Computer System Interface (SCSI), IEEE 1394 (Firewire), or any other suitable interface. 
To facilitate operation of the system memory 26, such as the flash memory devices, the system 10 may include a memory controller 28, as described in further detail below. As will be appreciated, the memory controller 28 may be an independent device or it may be integral with the processor 12. Additionally, the system 10 may include a hard drive 29, such as a magnetic storage device. [0008] FIG. 2 is a block diagram illustrating a flash memory device 30 that may be included as a portion of the system memory 26 of FIG. 1. FIG. 2 also depicts the memory controller 28 coupled to the memory device 30. The flash memory device 30 can include a 3D memory array 32 having multiple tiers of memory cells (as illustrated below in FIGS. 3 and 4). The memory array 32 generally includes many rows and columns of conductive traces arranged in a grid pattern to form a number of memory cells. The lines used to select cells in the memory array 32 are generally referred to herein as "access lines", and are referred to in the industry as "wordlines." The lines used to sense (e.g., read) the cells are generally referred to herein as "digit lines," which are often referred to in the industry as "bit lines." The size of the memory array 32 (i.e., the number of memory cells) will vary depending on the size of the flash memory device 30. [0009] To access the memory array 32, a row decoder block 34 and a column decoder block 36 are provided and are configured to receive and translate address information from the controller 28 via the address bus 38 to access a particular memory cell in the memory array 32. In some embodiments, the address and data information may be multiplexed and provided on the same bus. As discussed in more detail below, each group of wordlines for a tier of the memory array 32 may be decoded by a separate row decoder. 
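As one way to picture the address translation just described, the following sketch splits a flat address from the controller into tier, row (wordline), and column (digitline) fields, so that each tier's wordline group can be routed to its own row decoder. The field widths and the function name are hypothetical, not taken from the device.

```python
# Hypothetical field widths for the sketch: 2 tier bits, 10 row bits, 12 column bits.
TIER_BITS, ROW_BITS, COL_BITS = 2, 10, 12

def decode_address(addr: int):
    """Split a flat address into (tier, row, column) indices."""
    col = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    tier = (addr >> (COL_BITS + ROW_BITS)) & ((1 << TIER_BITS) - 1)
    return tier, row, col

# tier 1, wordline 3, digitline 5 packed into one address
addr = (1 << (COL_BITS + ROW_BITS)) | (3 << COL_BITS) | 5
assert decode_address(addr) == (1, 3, 5)
```

In this picture, the tier field selects which per-tier row decoder is enabled, while the row field is presented only to that decoder.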
A sense block 40, such as one having a plurality of sense amplifiers, is also provided between the column decoder 36 and the memory array 32 to sense (and in some cases amplify) individual values stored in the memory cells. Further, a row driver block 42 is provided between the row decoder block 34 and the memory array 32 to activate a selected word line in the memory array according to a given row address. [0011] During read and program operations, such as a write operation, data may be transferred to and from the flash memory device 30 from the controller 28 via the data bus 44. The coordination of the data and address information may be conducted through a data control circuit block 46. Finally, the flash memory device 30 may include a control circuit 48 configured to receive control signals from the controller 28 via the control bus 50. The control circuit 48 is coupled to each of the row decoder block 34, the column decoder block 36, the sense block 40, the row driver block 42 and the data control circuit block 46, and is generally configured to coordinate timing and control among the various circuits in the flash memory device 30. [0012] As mentioned above, the controller 28 provides control signals over the control bus 50, address signals via the address bus 38, and data via the data bus 44, to the memory device 30. As mentioned above, in some embodiments, the address signals and data may be multiplexed and provided on a single bus. The controller 28 may include a memory interface 52, control logic 54, memory 56 (such as registers) and striping and error control logic 58. The memory interface 52 enables the controller 28 to communicate with the memory device 30. The control logic 54 processes incoming requests and data, such as from the processor 12, and provides signals to the memory device 30 to perform the requests. [0013] FIG. 3 is a schematic diagram of one potential embodiment of the 3D array 32 in accordance with an embodiment of the present invention. 
As shown in FIGS. 3 and 4, the 3D array 32 may include two, three, or more tiers. Each tier may include one or more layers used to form a horizontal array of memory cells. As described below, a single tier may include memory cells logically arranged in rows and columns in the horizontal plane of the array. In some embodiments, the memory cells may be single-level cells (SLC), multi-level cells (MLC), or any other suitable memory element. [0014] The 3D array 32 may include a first tier 60 having a first two-dimensional plane of memory cells 62. A second tier 64 may be fully or partially formed over the first tier 60 in a direction perpendicular to the plane of the first tier 60. The second tier 64 includes a second two-dimensional plane of memory cells 66. [0015] The first tier 60 includes word lines WL1_0 - WL1_M and intersecting local digit lines DL1_0 - DL1_N. The first tier 60 includes a memory cell, such as a floating gate transistor 68, located at each intersection of a word line (WL) and a string of memory cells coupled to a digit line (DL). The floating gate transistors 68 serve as non-volatile memory cells for storage of data in the memory array 32. As will be appreciated, each floating gate transistor 68 includes a source, a drain, a floating gate, and a control gate. The control gate of each floating gate transistor 68 is coupled to (and in at least some cases forms) a respective local word line (WL). The floating gate transistors 68 are connected in series, source to drain, to form NAND strings 70, which are each formed between respective select gates. Specifically, each of the NAND strings 70 is formed between a local drain select gate 72 and a local source select gate 74. The drain select gates 72 and the source select gates 74 may each comprise a field-effect transistor (FET), for instance. A "column" of the first tier 60 includes a NAND string 70 and the source select gate 74 and drain select gate 72 connected thereto. 
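The string-and-select-gate arrangement just described can be modeled with a small data structure. This is a toy sketch whose names and sizes are invented for illustration, not a description of the actual device:

```python
from dataclasses import dataclass

@dataclass
class NandString:
    """One column of a tier: cells in series between two select gates."""
    cells: list                  # one floating-gate cell value per wordline
    source_select: bool = False  # gate toward the common source line
    drain_select: bool = False   # gate toward this string's digitline

M, N = 4, 3  # illustrative wordline and digitline counts for one tier
tier1 = [NandString(cells=[0] * M) for _ in range(N)]

# A "row" is the set of cells sharing one wordline index across all strings.
row_0 = [s.cells[0] for s in tier1]
assert len(tier1) == N and len(row_0) == N
```

Each `NandString` here plays the role of one column (NAND string plus its two select gates), and indexing the same position in every string recovers a row.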
A "row" of the floating gate transistors 68 includes those transistors commonly coupled to a given access line, such as a word line (WL). As used herein, the terms "row" and "column" are used to describe the logical arrangement of the embodiment depicted in FIG. 3 and are not limiting to any specific physical arrangement. For example, in other embodiments a "row" and/or "column" may include a stagger or other non-linear arrangement, or the "rows" may not necessarily be perpendicular to the "columns" or vice-versa. [0016] The second tier 64 includes word lines WL2_0 - WL2_M and intersecting local digit lines DL2_0 - DL2_N. Similar to the first tier 60, the second tier 64 includes a memory cell, such as a floating gate transistor 76, located at each intersection of a wordline (WL) and a string of memory cells coupled to a digitline (DL). The control gate of each floating gate transistor 76 is coupled to (and in at least some cases forms) a respective local word line (WL). The floating gate transistors 76 can be connected in series, source to drain, to form NAND strings 78 formed between respective select gates. Each of the NAND strings 78 is formed between a local drain select gate 84 and a local source select gate 82. The drain select gates 84 and the source select gates 82 may each comprise a field-effect transistor (FET), for instance. A "column" of the second tier 64 includes a NAND string 78 and the source select gate 82 and drain select gate 84 connected thereto. A "row" of the floating gate transistors 76 includes those transistors commonly coupled to a given access line, such as a word line (WL). [0017] As shown in FIG. 3, each group of parallel wordlines for a tier is decoded together. The wordlines WL1_0 - WL1_M of the first tier 60 are coupled to a first wordline decoder 86. The first wordline decoder 86 only decodes wordlines coupled to floating gate transistors 68 of the first tier 60. As also shown in FIG. 
3, the wordlines WL2_0 - WL2_M are coupled to a second wordline decoder 88. The second wordline decoder 88 is only coupled to the wordlines of the floating gate transistors 76 of the second tier 64. As also shown in FIG. 3, the digitlines DL_0 through DL_N are coupled to a page buffer 85. In some embodiments, as described below, the decoders 86 and 88 may be in a single tier or base of the device 32. In other embodiments, each wordline decoder 86 and 88 may be a part of, e.g., in the same horizontal structure as, the respective tier. For example, the wordline decoder 86 may be a part of the first tier 60 and the wordline decoder 88 may be a part of the second tier 64. [0018] The wells (e.g., a p-well) of both the first tier 60 and the second tier 64 may be coupled together and to a well driver 89. As shown in FIG. 3, the contact for the well driver 89 may extend at least partially through the second tier 64 and the first tier 60 to the well driver 89. [0019] Each tier of the array 32 may be uniquely accessed using the source and drain select gates of a selected tier. For example, the source select gates 82 of the second tier 64 may be used to couple the NAND strings of the second tier to a common source line (CSL) and CSL driver 87. The drain select gates 84 of the second tier may be used to couple the NAND strings of the second tier 64 to respective digitlines DL2_0 through DL2_N. The source select gates 74 of the first tier may also be used to couple the NAND strings of the first tier 60 to the common source line and the CSL driver 87, via a contact (or contacts) that extends at least partially through the second tier 64 and the first tier 60 and couples, for example, the sources of the source select gates 82 to the sources of the source select gates 74. Thus, a contact couples each source select gate of the second tier 64 to a corresponding source select gate of the first tier 60. 
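The per-tier access scheme above — strings reach the shared common source line and digitlines only when their own tier's select gates are activated — can be sketched as follows. The function and tier names are hypothetical, chosen only to illustrate the selection logic:

```python
def string_conducts(ssl_active: bool, dsl_active: bool) -> bool:
    """A NAND string couples to the CSL and its digitline only when both the
    tier's source select line (SSL) and drain select line (DSL) are active."""
    return ssl_active and dsl_active

# Select tier 2 only: assert its SSL/DSL and deassert tier 1's.
tiers = {"tier1": (False, False), "tier2": (True, True)}
selected = [name for name, (ssl, dsl) in tiers.items()
            if string_conducts(ssl, dsl)]
assert selected == ["tier2"]
```

Because the select gates of the unselected tier stay off, both tiers can share the same common source line and digitline contacts without interfering.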
Each tier 60 and 64 may be uniquely selected by activating the select gates of a desired tier, i.e., by activating the source select gates of a desired tier via a respective source select line (SSL) and activating the drain select gates of the desired tier via a respective drain select line (DSL). Any or all of the row and column decode circuitry, e.g., the wordline decoders, the page buffer, drivers, etc., may be located on a base logic substrate 91. The tiers 60 and 64 are disposed on the base logic substrate, such that the first tier 60 is disposed on the base logic substrate 91 and the second tier 64 is disposed on the first tier 60 in the manner described above. Thus, the connections described above, e.g., between the wordlines and the wordline decoders 86 and 88 and/or between the digitlines and the page buffer 85, may electrically connect each tier 60 and 64 to the base logic substrate 91. [0020] FIG. 4 is a schematic of another potential embodiment of the 3D array 32 illustrating a third tier 90 in accordance with an embodiment of the present invention. The 3D array 32 may include the first tier 60 and the second tier 64 described above. The third tier 90 includes a third two-dimensional plane of memory cells 92. As noted above, the memory cells 92 may be SLC memory elements, MLC memory elements, or any other suitable memory element. [0021] The third tier 90 may be partially or fully formed over the second tier 64. The third tier 90 includes word lines WL3 0 - WL3 M and intersecting local digit lines DL3 0 - DL3 N. The third tier 90 includes a memory cell, such as a floating gate transistor 94, located at each intersection of a word line (WL) and a string of memory cells coupled to a digit line (DL). [0022] The floating gate transistors 94 can be connected in series, source to drain, to form NAND strings 96 formed between respective select gates. 
Each of the NAND strings 96 is formed between a local drain select gate 98 and a local source select gate 100, which may each comprise a field-effect transistor (FET), for instance. A "column" of the third tier 90 includes a NAND string 96 and the source select gate 100 and drain select gate 98 connected thereto. A "row" of the floating gate transistors 94 comprises those transistors commonly coupled to a given access line, such as a word line (WL). [0023] The digitlines of the third tier 90 are coupled to the page buffer 85. The wordlines WL3 0 - WL3 M of the third tier 90 are coupled to a third wordline decoder 102. The third wordline decoder 102 only decodes wordlines coupled to floating gate transistors 94 of the third tier 90. Similarly, as described above with regard to the first and second tiers 60 and 64, the well contact extends through to the third tier 90, coupling the well of the third tier 90, second tier 64, and first tier 60 to the well driver 89. The wordline decoder 102 may be a part of, e.g., in the same horizontal structure as, the third tier 90. [0024] As in the embodiment depicted in FIG. 3, each tier of the array 32 can be uniquely decoded using the source and drain select gates of that tier. The source select gates 100 of the third tier 90 can also be used to couple the NAND strings 96 of the third tier to a common source line and the CSL driver 87. As mentioned above, a source line contact may extend at least partially through the third tier 90, the second tier 64, and the first tier 60, coupling the sources of the source select gates of each tier to the common source line (CSL) and CSL driver 87. The drains of the drain select gates 98 of the third tier 90 are coupled to respective digitlines and the page buffer 85. For example, the drains of the drain select gates 98 can be coupled to the drains of drain select gates 84 and 72 by digit line contacts extending at least partially through the third tier 90, second tier 64, and first tier 60. 
Thus, the digitline contacts couple the digitlines of each tier to the page buffer 85. Thus, each tier 90, 64, and 60 may be uniquely selected by activating the select gates for a desired tier. As described above, the row and column decode logic may all reside in the base logic substrate 91. Thus, the connections between the wordlines of the third tier 90 and the third wordline decoder 102 may electrically couple the wordlines of the third tier to the third wordline decoder 102. [0025] FIG. 5 is a cross-sectional diagram showing the 3D array 32 in accordance with an embodiment of the present invention. FIG. 5 depicts wordlines 106 extending at least partially through one or more of the tiers 60, 64, and 90, select gate lines 108, and digitlines 110. Each of the wordlines 106 and select gate lines 108 may be coupled to an appropriate driver via conductors 112. [0026] FIG. 6 depicts a cross-sectional diagram of the digitlines of the tiers of the 3D NAND array 32 in accordance with an embodiment of the present invention. As depicted in FIG. 6, the 3D NAND array 32 includes the first tier 60, the second tier 64, and the third tier 90. Additionally, a fourth tier 113 is shown disposed on the third tier 90. As discussed above, the array 32 may include digitline conductors 110 in each tier, e.g., DLX T0, DLX T1, DLX T2, and DLX T3 (for the fourth tier 113). [0027] The digitline conductors 110 provide contact between the digitline for that tier and a base logic substrate that provides the decoding logic. Also shown in each tier of FIG. 6 are a field oxide layer 114, a silicon layer 116, and a bonding oxide layer 118. Each tier may be bonded to the previous tier by Smart Cut or any other suitable bonding technique. A final conductor 120 is also depicted to enable contact between the digitline conductors 110 and the underlying base logic substrate. An insulating spacer 122 may be provided to isolate each digitline conductor 110 of a tier from the silicon substrate layer. 
For example, as shown in FIG. 6, one of the insulating spacers electrically insulates the digitline conductors DLX T1 from the silicon substrate layer of the second tier 64. [0028] While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims.
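The tier-unique selection scheme described in this disclosure — shared digitlines and a common source line, with each tier enabled through its own source select line (SSL) and drain select line (DSL) — can be modeled behaviorally. The sketch below is illustrative only; the class and method names are hypothetical and not part of the disclosure, which describes hardware rather than software.

```python
# Illustrative behavioral model of tier-unique selection in the 3D array.
# Captures only the rule stated in the text: exactly one tier's SSL/DSL
# pair is active at a time, while digitlines and the common source line
# (CSL) are shared by all tiers. All names here are hypothetical.

class Tier3DArray:
    def __init__(self, num_tiers):
        self.num_tiers = num_tiers
        # One SSL/DSL pair per tier; all start deactivated.
        self.ssl = [False] * num_tiers
        self.dsl = [False] * num_tiers

    def select_tier(self, tier):
        """Activate the select gates of the desired tier only."""
        for t in range(self.num_tiers):
            active = (t == tier)
            self.ssl[t] = active  # couples that tier's strings to the CSL
            self.dsl[t] = active  # couples that tier's strings to the digitlines

    def selected_tier(self):
        """Return the uniquely selected tier, or None if none is unique."""
        active = [t for t in range(self.num_tiers)
                  if self.ssl[t] and self.dsl[t]]
        return active[0] if len(active) == 1 else None

array = Tier3DArray(num_tiers=3)
array.select_tier(1)          # select the second tier
print(array.selected_tier())  # -> 1
```

Selecting a new tier deactivates the previous tier's select gates, mirroring how only one tier at a time drives the shared digitlines.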
An output driver for electrostatic discharge (ESD) protection includes a first pair of stacked metal oxide semiconductor field-effect transistor (MOS) devices coupled between a power terminal and a first differential output terminal. The output driver also includes a second pair of stacked MOS devices coupled between a second differential output terminal and a ground terminal.
CLAIMS

What is claimed is:

1. An output driver, comprising: a first pair of stacked metal oxide semiconductor field-effect transistor (MOS) devices coupled between a power terminal and a first differential output terminal; and a second pair of stacked MOS devices coupled between a second differential output terminal and a ground terminal.

2. The output driver of claim 1, in which at least one of the first pair of stacked MOS devices or the second pair of stacked MOS devices comprises an NMOS device.

3. The output driver of claim 1, in which the output driver is further configured as a voltage-mode output driver further comprising a current mode pre-driver operable to supply differential signals to the voltage-mode output driver.

4. The output driver of claim 1, in which the output driver is further configured to replicate scaled versions of currents, voltages and/or impedances of replica circuitry and in which an output swing of the output driver is set by a supply voltage provided by the replica circuitry.

5. The output driver of claim 4, further comprising: a voltage rail circuit configured to receive the supply voltage from the replica circuitry.

6. The output driver of claim 1, further comprising: a third pair of stacked MOS devices coupled between the power terminal and the second differential output terminal, in which a MOS device of the first pair of stacked MOS devices also belongs to the third pair of stacked MOS devices.

7. The output driver of claim 6, further comprising: a fourth pair of stacked MOS devices coupled between the ground terminal and the first differential output terminal, in which a MOS device of the second pair of stacked MOS devices also belongs to the fourth pair of stacked MOS devices.

8. 
The output driver of claim 1, integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit.

9. A method of operating an output driver, comprising: generating a first set of bias voltages for a first pair of stacked metal oxide semiconductor field-effect transistor (MOS) devices coupled between a power terminal and a first differential output terminal to match a first transmission line characteristic; and generating a second set of bias voltages for a second pair of stacked MOS devices coupled between a second differential output terminal and a ground terminal to match a second transmission line characteristic.

10. The method of claim 9, further comprising: supplying differential signals to the output driver to generate an on resistance (Ron) that is matched to the first transmission line characteristic or the second transmission line characteristic.

11. The method of claim 9, further comprising: setting an output swing of the output driver by a supply voltage from replica circuitry, the output driver being configured to replicate scaled versions of currents, voltages and/or impedances of the replica circuitry.

12. The method of claim 11, further comprising: receiving the supply voltage from the replica circuitry at a voltage rail circuit.

13. The method of claim 9, further comprising: generating a third set of bias voltages for a third pair of stacked MOS devices coupled between the power terminal and the second differential output terminal, in which a MOS device of the first pair of stacked MOS devices also belongs to the third pair of stacked MOS devices.

14. 
The method of claim 13, further comprising: generating a fourth set of bias voltages for a fourth pair of stacked MOS devices coupled between the ground terminal and the first differential output terminal, in which a MOS device of the second pair of stacked MOS devices also belongs to the fourth pair of stacked MOS devices.

15. The method of claim 9, further comprising integrating the output driver into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit.

16. An output driver, comprising: first means for switching electronic signals stacked on a second means for switching electronic signals, the first and second switching means coupled between a power terminal and a first differential output terminal; and third means for switching electronic signals stacked on a fourth means for switching electronic signals, the third and fourth switching means coupled between a second differential output terminal and a ground terminal.

17. The output driver of claim 16, in which the output driver is further configured as a voltage-mode output driver further comprising means for supplying differential signals to the voltage-mode output driver.

18. The output driver of claim 16, in which the output driver is further configured to replicate scaled versions of currents, voltages and/or impedances of replica circuitry and in which an output swing of the output driver is set by a supply voltage from the replica circuitry.

19. The output driver of claim 16, further comprising: a fifth means for switching electronic signals stacked on a sixth means for switching electronic signals, the fifth and sixth switching means coupled between the power terminal and the second differential output terminal, in which one of the first and second switching means is also one of the fifth and sixth switching means.

20. 
The output driver of claim 19, further comprising: a seventh means for switching electronic signals stacked on an eighth means for switching electronic signals, the seventh and eighth switching means coupled between the ground terminal and the first differential output terminal, in which one of the third and fourth switching means is also one of the seventh and eighth switching means.

21. The output driver of claim 16, integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit.
METHODS AND DEVICES FOR MATCHING TRANSMISSION LINE CHARACTERISTICS USING STACKED METAL OXIDE SEMICONDUCTOR (MOS) TRANSISTORS TECHNICAL FIELD [0001] The present disclosure relates generally to voltage-mode drivers. More specifically, the disclosure relates to methods and devices for matching transmission line characteristics using stacked MOS transistors. BACKGROUND [0002] When electrostatic discharge (ESD) flows into an integrated semiconductor chip, internal circuits in the semiconductor chip may be damaged or malfunction. The ESD mainly flows into the input/output driver stages. Conventionally, input protection circuits may be employed at an input driver stage to accommodate electrostatic discharge flows. Similar input protection circuits, however, might not be employed at an output driver stage because design constraints do not permit the use of a resistance between an output buffer and an interface terminal. Further, output driver designs are specified to meet certain minimum ESD specifications. SUMMARY [0003] According to one aspect of the present disclosure, an output driver is described. The output driver includes a first pair of stacked metal oxide semiconductor field-effect transistor (MOS) devices coupled between a power terminal and a first differential output terminal. The output driver further includes a second pair of stacked MOS devices coupled between a second differential output terminal and a ground terminal. [0004] According to another aspect of the present disclosure, a method of operating an output driver is described. The method includes generating a first bias voltage for a first pair of stacked MOS devices coupled between a power terminal and a first differential output terminal to match a first transmission line characteristic. 
The method also includes generating a second bias voltage for a second pair of stacked MOS devices coupled between a second differential output terminal and a ground terminal to match a second transmission line characteristic. [0005] According to a further aspect of the present disclosure, an output driver is described. The output driver includes a first means for switching electronic signals stacked on a second means for switching electronic signals. The first and second switching means are coupled between a power terminal and a first differential output terminal. The output driver also includes a third means for switching electronic signals stacked on a fourth means for switching electronic signals. The third and fourth switching means are coupled between a second differential output terminal and a ground terminal. [0006] This has outlined, rather broadly, the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. Additional features and advantages of the disclosure will be described below. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure. 
BRIEF DESCRIPTION OF THE DRAWINGS [0007] The features, nature, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings. [0008] FIGURE 1 illustrates an exemplary replica circuitry of a voltage mode driver according to an aspect of the present disclosure. [0009] FIGURE 2 is a schematic diagram illustrating an exemplary voltage-mode driver including stacked NMOS transistors according to an aspect of the present disclosure. [00010] FIGURE 3 illustrates a method for operating a voltage-mode driver including stacked NMOS transistors according to an aspect of the present disclosure. [00011] FIGURE 4 shows an exemplary wireless communication system in which an aspect of the disclosure may be advantageously employed. [00012] FIGURE 5 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component. DETAILED DESCRIPTION [00013] The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts. As described herein, the use of the term "and/or" is intended to represent an "inclusive OR", and the use of the term "or" is intended to represent an "exclusive OR". [00014] Aspects of the present disclosure may include an improved output driver and an improved method of ESD protection for the output driver. 
[00015] In particular, some aspects of the disclosure generate an on resistance (Ron) substantially equal to an impedance characteristic of a transmission line, while satisfying electrostatic discharge specifications for an output buffer design. One aspect of the present disclosure generates an on resistance of 50 Ohms with a stack of transistors (e.g., n-type metal oxide semiconductor field-effect transistors (NMOS transistors)) that matches a transmission line impedance characteristic. The stacked NMOS transistors include more than one transistor arranged between a differential output terminal of a voltage mode driver and a power source of an output buffer. The stacked NMOS transistors also include more than one transistor arranged between the differential output terminal of the voltage mode driver and a ground terminal of the output buffer. [00016] Referring to FIGURES 1 and 2, replica circuitry 100 of a voltage mode driver 200 is illustrated, according to one aspect of the present disclosure. The voltage mode driver 200 replicates currents/voltages/impedances (or scaled versions thereof) provided by the replica circuitry 100. Based on the currents/voltages/impedances (or scaled versions thereof) provided by the replica circuitry 100, the voltage mode driver 200 is configured to control an output impedance associated with an output driver circuit 260 of an output driver stage 240. [00017] In this configuration, the replica circuitry 100 includes first, second and third circuit portions. The first circuit portion includes a first current source I1, and resistors R1, R2, R3, and R4. The second circuit portion includes a second current source I2, an operational amplifier 102, a transistor T1 and a resistor R5. The third circuit portion includes a third current source I3, an operational amplifier 104, a second driver transistor T2, a third driver transistor T3, and a resistor R6. 
In the third circuit portion, the transistors T2 and T3 are arranged in a stacked configuration. The transistors T1, T2 and/or T3 may be NMOS transistors. [00018] In the configuration shown in FIGURE 1, each of the current sources is coupled to a voltage source VDD and controlled by a programmable current control source Ictrl. In particular, each input of the current sources I1, I2, and I3 is coupled to the power source VDD. In one configuration, the current sources I1, I2, and I3 generate substantially the same output current. Each of the first, second and third circuit portions is coupled to a ground terminal 106. The operational amplifiers 102 and 104 may output a voltage (e.g., Vr or Vb) for the replica circuitry 100. In this configuration, a desired resistance of the replica circuitry 100 is achieved based on the voltages. [00019] In FIGURE 1, a voltage at a drain D1 of the transistor T1 is defined by a product of the output current from the second current source I2 and the combination of the impedance at the transistor T1 and the resistance of the resistor R5. As noted, the second current source I2 is coupled to the programmable current control source Ictrl to control the current sources I1, I2, and I3. A gate G1 of the transistor T1 is coupled to an output of the operational amplifier 102 at terminal 110. The voltage at the terminal 110 may be equivalent to the output voltage Vr of the operational amplifier 102. A source S1 of the transistor T1 is coupled to a terminal 112 of the resistor R5. A terminal 114 of the resistor R5 is coupled to the ground terminal 106. A second input terminal 120 of the operational amplifier 102 may be coupled to a terminal 128 of the first circuit portion. The voltage at the terminal 128 is Vs. A first input terminal 108 of the operational amplifier 102 is coupled to the output of the second current source I2. [00020] As further shown in FIGURE 1, a drain D2 of a transistor T2 is coupled to a terminal 116 of the resistor R6. 
A voltage at a terminal 124 of the resistor R6 is defined by a product of an output current from the third current source I3 and the combination of the impedances at the transistors T2 and T3 and the resistance of the resistor R6. A gate G2 of the transistor T2 is coupled to the output of the operational amplifier 102 at terminal 110. The voltage at terminal 110 is equivalent to the output voltage Vr of the operational amplifier 102. A source S2 of the transistor T2 is coupled to a drain (D3) of a transistor T3. A gate G3 of the transistor T3 is coupled to an output of the operational amplifier 104 at a terminal 118. The voltage at the output of the operational amplifier is Vb. A source S3 of the transistor T3 is coupled to the ground terminal 106. A second input terminal 126 of the operational amplifier 104 is coupled to the terminal 128 of the first circuit portion through the second input terminal 120 of the operational amplifier 102. The voltage Vs at the second input terminal 126 is equal to the voltage defined at the terminal 128 and the second input terminal 120. A first input terminal 122 of the operational amplifier 104 is coupled to the output of the third current source I3. [00021] In the configuration of FIGURE 1, the supply voltage Vs is the supply voltage for both the second input terminal 120 and the second input terminal 126 of the operational amplifiers 102 and 104, respectively. In particular, a current generated by the first current source I1 and the resistors R1, R2, R3, and R4 define the supply voltage Vs at the terminal 128. The voltage at a terminal 130, associated with the resistors R2, R3, and R4, is equal to Vs. In one aspect of the disclosure, the resistors R2, R3, and R4 are arranged in a parallel configuration. The resistor R1 may be coupled in series with the parallel resistors R2, R3, and R4. The first current source I1 is coupled to terminal 128. A terminal 132 is a shared terminal of the resistors R1, R2, R3, and R4. 
A terminal 134 of the resistor R1 is coupled to the ground terminal 106. [00022] In one configuration, the resistors R1, R2, R3, and R4 are calibrated to a predetermined value (e.g., R1 equals 1.5 kilo (1.5K) Ohms) and the resistance of the combination of the parallel resistors R2, R3, and R4 is calibrated to 500 Ohms. Calibrating the resistors R1, R2, R3, and R4 maintains a consistent resistance across the resistors R1, R2, R3, and R4 over temperature, power and voltage changes. [00023] In one aspect of the present disclosure, the resistor R5 corresponds to the calibrated resistance R1, and the impedance across the transistor T1 corresponds to the resistance across the parallel resistors R2, R3, and R4. In particular, the resistor R5 is equal to 1.5K Ohms or substantially equal to the resistance of R1, and the impedance of the transistor T1 is 500 Ohms or substantially equal to the resistance across the parallel resistors R2, R3, and R4. Therefore, the total resistance from the terminal 128 to the ground terminal 106 is equal or substantially equal to the total resistance from the first input terminal 108 to the ground terminal 106. Because the current through the first input terminal 108 and the terminal 128 is also equal (i.e., the current from I1 equals the current from I2), the voltages at the first input terminal 108 and the terminal 128 are also equal. Because the voltage defined at the terminal 128 is the same as the voltage at the second input terminal 120 when the transistor T1 is on, the input voltages at the first input terminal 108 and the second input terminal 120 of the operational amplifier 102 are also the same when the transistor T1 is active. If any difference arises, the circuit works to make the input voltages the same. [00024] Similarly, the resistor R6 corresponds to the calibrated resistance R1, and the sum of the impedance across the transistors T2 and T3 corresponds to the resistance across the parallel resistors R2, R3, and R4. 
In particular, the resistor R6 is equal to 1.5K Ohms or substantially equal to the resistance of R1, and the sum of the impedance of the transistors T2 and T3 is 500 Ohms or substantially equal to the resistance across the parallel resistors R2, R3, and R4. Therefore, the total resistance from the terminal 128 to the ground terminal 106 is equal or substantially equal to the total resistance from the first input terminal 122 to the ground terminal 106. Because the current through the terminal 128 and the first input terminal 122 is equal (i.e., the current from the current source I1 equals the current from the current source I3), the voltages at the terminal 128 and the first input terminal 122 are also equal. Because the voltage defined at the terminal 128 is the same as the voltage defined at the first input terminal 122, the input voltages at the first input terminal 122 and the second input terminal 126 of the operational amplifier 104 are the same. [00025] The transistors T1, T2, and T3 may be biased to their respective impedances by bias voltages Vr and Vb generated by the operational amplifiers 102 and 104. In particular, the output voltage Vr loops back from the output of the operational amplifier 102 to bias the transistors T1 and T2 to their respective impedances, and the output from the operational amplifier 104 biases the transistor T3. In addition, the bias voltages Vr and Vb may be varied such that the impedances of the transistors T1, T2, and T3 correspond to the respective calibrated resistances of the first circuit portion. 
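The resistance matching described above reduces to simple arithmetic, and can be checked directly. The sketch below uses plain Python; the 1.5K Ohm and 500 Ohm values come from the text, while the bias current used at the end is an arbitrary illustrative assumption rather than a value from the disclosure.

```python
# Sanity check of the replica-circuit resistance matching described above.
# The 1.5K-Ohm and 500-Ohm values come from the text; the bias-current
# value at the end is an arbitrary illustrative assumption.

def parallel(*resistors):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

R1 = 1500.0                                # calibrated resistor, Ohms
R234 = parallel(1500.0, 1500.0, 1500.0)    # R2 || R3 || R4
assert abs(R234 - 500.0) < 1e-9            # the 500-Ohm calibration target

# Second circuit portion: R5 mirrors R1, and the transistor T1 is biased
# so its impedance mirrors the 500-Ohm parallel combination.
R5, Z_T1 = 1500.0, 500.0
total_first = R1 + R234     # terminal 128 to ground
total_second = R5 + Z_T1    # first input terminal 108 to ground
assert abs(total_first - total_second) < 1e-9

# With equal currents from I1 and I2 (1 mA here, purely illustrative),
# the voltages at terminal 128 and input terminal 108 come out equal,
# which is the condition the operational amplifiers enforce.
I = 1e-3
assert abs(I * total_first - I * total_second) < 1e-12
```

The same check applies to the third circuit portion, with the 500 Ohm term split between the stacked transistors T2 and T3.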
The transistors T1, T2, and T3 of the replica circuitry 100 have a ratio of 1:10 with respect to the impedance characteristic of the respective transistors T4, T5, T6, T7, and T9 of the voltage mode driver 200 of FIGURE 2. For example, although the replica circuitry 100 of FIGURE 1 generates an impedance of 500 Ohms across the transistor T1 and across the combination of transistors T2 and T3, the total impedance generated across the corresponding transistor T4 or T6 or the corresponding combination of transistors T5 and T9 or T7 and T9 at the output driver stage 240 of FIGURE 2 is 50 Ohms. That is, the 50 Ohm impedance at the output driver stage 240 is due to the 1:10 impedance ratio between the transistors of the replica circuitry 100 and the transistors of the voltage mode driver 200. [00027] In this configuration, the total impedance at the output driver stage 240 is 50 Ohms because the output driver stage 240 of the voltage mode driver 200 is implemented with transistors T4, T5, T6, T7 and T9 that have a ratio of 10:1 with respect to the impedance characteristic of the respective transistors T1, T2, and T3 of the replica circuitry 100. As a result, a single-ended output resistance of the voltage mode driver 200 of FIGURE 2 is 50 Ohms (e.g., 500/10 Ohms due to the 10:1 ratio). In this configuration, the total impedance (e.g., 50 Ohms) matches the impedance of a transmission line associated with the voltage mode driver 200. [00028] As shown in FIGURE 2, the voltage mode driver 200 selectively couples to transmission lines via differential output terminals, outp 270 and outn 272. The transmission lines may have a characteristic impedance of 50 Ohms. In this configuration, the voltage mode driver 200 includes a pre-driver stage 210 and an output driver stage 240. The pre-driver stage 210 includes a first power rail circuit 220 and a pre-driver circuit 230. The output driver stage 240 includes a second power rail circuit 250 and an output driver circuit 260. 
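The impedance scaling above reduces to a single division. A minimal sketch (plain Python; both values are taken from the text) of how the 500 Ohm replica impedance maps to the 50 Ohm single-ended output resistance under the stated 10:1 device ratio:

```python
# Scaling from replica impedance to output-driver impedance, per the text:
# the driver transistors are scaled 10:1 relative to the replica transistors,
# so the replicated impedance is divided by 10.

REPLICA_IMPEDANCE = 500.0   # Ohms, across T1 (or T2 + T3) in the replica circuitry
SCALE_RATIO = 10.0          # 10:1 ratio between driver and replica transistors

single_ended_output = REPLICA_IMPEDANCE / SCALE_RATIO
assert single_ended_output == 50.0   # matches the 50-Ohm transmission line
```

Because the replica loop holds the replica transistors at 500 Ohms across process, voltage, and temperature, the scaled driver impedance tracks the transmission line characteristic without separately calibrating the output stage.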
[00029] In one aspect of the present disclosure, the replica circuitry 100 of FIGURE 1 controls the pre-driver stage 210, and the pre-driver stage controls the output impedance of the output driver stage 240. In this configuration, the replica circuitry 100 generates a voltage Vr for the pre-driver stage 210. An output voltage swing of the pre-driver stage 210 is set by the supply voltage Vr. In particular, the pre-driver stage 210 toggles between, for example, 0 volts and a real voltage, such as the voltage Vr. An upper rail of the pre-driver stage output is Vr1 (i.e., the voltage defined at the second input terminal 214 and at a drain D10 of a transistor T10), which is equal to Vr. In particular, the output voltage Vr1 loops back from the drain D10 of the transistor T10 to a second input terminal 214 of an operational amplifier 222. A tail current from the pre-driver stage 210 may be adjusted with the current source I4 to control the output voltage swing. [00030] In this configuration, the first power rail circuit 220 includes the operational amplifier 222, a power source VDD and the transistor T10. A source S10 of the transistor T10 is coupled to the power source VDD, a gate G10 is coupled to an output of the operational amplifier 222, and a drain D10 is coupled to a terminal 234. A first terminal 226 of a capacitor Cr is coupled to the terminal 234 and a second terminal 228 of the capacitor Cr is coupled to a direct current ground terminal 216. A first input terminal 212 of the operational amplifier 222 receives the voltage Vr generated by the replica circuitry 100. In this configuration, an output swing of the pre-driver circuit 230 is set by the supply voltage Vr generated by the replica circuitry 100. A second input terminal 214 of the operational amplifier 222 is coupled to the drain D10 to receive a voltage defined at the drain D10. [00031] The pre-driver circuit 230 may be based on a current-mode logic structure. 
Representatively, the pre-driver circuit 230 may include transistors T11 and T12, resistors R7 and R8, a ground terminal 218 and a current source I4. A source S11 of the transistor T11 is coupled to a terminal 238 of the current source I4; a gate G11 is coupled to a differential input terminal, inp 202; and a drain D11 is coupled to a terminal 232 between the resistor R7 and the drain D11. A terminal 239 of the current source I4 is coupled to a ground terminal 218. A source S12 of a transistor T12 is coupled to the terminal 238 of the current source I4; a gate G12 is coupled to a differential input terminal, inn 204; and a drain D12 is coupled to a terminal 236. Each of the resistors R7 and R8 may be coupled to the terminal 234. A resistance value of the resistors R7 and R8 may be approximately 200 Ohms. The differential input terminals (inp 202 and inn 204) receive differential input signals. In one aspect of the disclosure, the transistor T10 is a p-type metal oxide semiconductor field-effect transistor (PMOS transistor) and the transistors T11 and T12 are NMOS transistors. In operation, the transistors T10, T11 and T12 may have an increased impedance as a result of operating in a saturation state. [00032] As shown in FIGURE 2, the voltage mode driver 200 also includes a second power rail circuit 250 and an output driver circuit 260. In one aspect of the disclosure, the second power rail circuit 250 includes an operational amplifier 252, the power source VDD and a transistor T8. A source S8 of the transistor T8 is coupled to the power source VDD; a gate G8 is coupled to an output of the operational amplifier 252; a drain D8 is coupled to a first terminal 264 of a capacitor Cs through a terminal 262; and a second terminal 269 of the capacitor Cs is coupled to the ground terminal 246 to provide a direct current ground. In this configuration, a first input terminal 242 of the operational amplifier 252 receives a voltage Vs generated by the replica circuitry 100.
A second input terminal 244 of the operational amplifier 252 may be coupled to the drain D8 to receive a voltage generated at the drain D8. In particular, an output swing of the output driver stage 240 is set by the supply voltage Vs. The second power rail circuit 250 of the output driver stage 240 provides an upper rail output voltage Vs1 at a terminal 262 of the output driver circuit 260. In particular, the voltage defined at the second input terminal 244 and at the drain D8 is equal to Vs. In this configuration, the output voltage Vs1 loops back from the drain D8 of the transistor T8 to the second input terminal 244 of the operational amplifier 252. [00033] The output driver circuit 260 may include transistors T4, T5, T6, T7, and T9. The transistors T4, T5, T6, and T7 are arranged in a cross configuration, as illustrated in FIGURE 2, for facilitating current flow through the output driver circuit 260. A source S4 of the transistor T4 is coupled to a drain D5 of the transistor T5, and a gate G4 of the transistor T4 is coupled to the drain D11 of the transistor T11 through the terminal 232. A source S5 of the transistor T5 is coupled to a drain D9 of the transistor T9, and a gate G5 of the transistor T5 is coupled to the drain D12 of the transistor T12 through the terminal 236. A source S9 of the transistor T9 is coupled to a ground terminal 248, and a gate G9 of the transistor T9 receives the voltage Vb from the replica circuitry 100. A source S6 of the transistor T6 is coupled to a drain D7 of the transistor T7, and a gate G6 of the transistor T6 is coupled to the drain D12 and to the gate G5. A source S7 of the transistor T7 is coupled to the drain D9, and a gate G7 of the transistor T7 is coupled to the drain D11 and to the gate G4. In one aspect of the disclosure, the transistor T8 is a PMOS transistor and the transistors T4, T5, T6, T7, and T9 are NMOS transistors.
[00034] In this configuration, the transistors T5 and T9 or T7 and T9 of the output driver stage 240 correspond to the transistors T2 and T3 of the replica circuitry 100. The transistor T4 or T6 of the output driver stage 240 also corresponds to the transistor T1 of the replica circuitry 100. The voltage mode driver 200 is driven by the replica circuitry 100 such that the impedances of the transistors of the replica circuitry 100 and the corresponding transistors of the voltage mode driver 200 during normal operation are equal or substantially equal. In particular, the transistor T1 of the replica circuitry 100 is a duplicate of the transistor T4 or T6 in the voltage mode driver 200. Similarly, the transistors T2 and T3 of the replica circuitry 100 are duplicates of the transistors T5 and T9 or T7 and T9 in the voltage mode driver 200. Because a matching output impedance is desirable, the output driver stage 240 outputs an impedance equal to the characteristic impedance of the transmission line. [00035] A differential signal is driven into the pre-driver circuit 230 via the differential input terminals, inp 202 and inn 204, and the transistors T11 and T12 are biased according to a switching implementation at the pre-driver stage 210. For example, a logic low level of the differential input terminals, in a particular logic state, is designed to be low enough to turn off the transistors T11 and T12. When the transistor T11 of the pre-driver stage 210 is on, such that the transistor T4 of the output driver stage 240 is also on, the transistor T4 is biased in the same way as the transistor T1 of the replica circuitry 100 (see FIGURE 1). During normal operation, the impedance of the transistor T4 is the same as the impedance of the transistor T1 of the replica circuitry 100.
When the transistor T12 of the pre-driver stage 210 is on, such that the transistors T5 and T6 of the output driver stage 240 are also on, the transistor T6 is biased in the same way as the transistor T1 of the replica circuitry 100. During normal operation, the impedance of the transistor T6 is also the same as the impedance of the transistor T1 of the replica circuitry 100. [00036] In some applications (e.g., memory physical layer (M-PHY)), the second power rail circuit 250 of the output driver stage 240 may be specified at 200 millivolts (mV) or 400 mV. In the 200 mV application, for example, the current generated by the current sources I1, I2 and I3 in the replica circuitry 100 is set at 100 microamperes. In this configuration, the voltage Vs at the terminal 128, the second input terminal 120 and the second input terminal 126 of the replica circuitry 100 is 200 mV (i.e., 100 microamperes multiplied by the resistance (2 kilo Ohms) at the terminal 128). In this configuration, a first input terminal 242 of the operational amplifier 252 receives the voltage Vs (i.e., 200 mV) generated by the replica circuitry 100. Because Vs is equal to Vs1, the voltage at a second input terminal 244 of the operational amplifier 252 is also 200 mV. [00037] As shown in FIGURE 2, the current through the output driver circuit 260 is defined by this voltage Vs in combination with the impedance of the transistors of the output driver circuit. For example, the impedance of the transistor T4 (i.e., 50 Ohms), the impedance of the transistors T5 and T9 (i.e., 50 Ohms), and the impedance of the transmission line (i.e., 100 Ohms, of which 50 Ohms is output impedance and 50 Ohms is input impedance) in combination with the voltage Vs determine the current through the output driver circuit 260. The output driver current may also be determined by the voltage Vs in combination with the impedance of the transistor T6, the impedance of the transistors T7 and T9, and the impedance of the transmission line.
The transistor T4 or T6 may be implemented as a voltage divider. [00038] In operation, the input terminals 202 and 204 of the pre-driver circuit 230 of the pre-driver stage toggle between an on state and an off state. As a result, the transistors T11 and T12 of the pre-driver circuit toggle between the on and off states. When the transistor T11 is in the on state, a voltage is generated at the gate G4 of the transistor T4 and the gate G7 of the transistor T7, such that the transistors T4 and T7 are turned on. As a result, current flows from the second power rail circuit 250, through the transistor T4, to the differential output terminal, outn 272, and to the transmission line. The current flows back from the transmission line via the differential output terminal pad, outp 270, through the first output terminal 266, through the transistors T7 and T9, and then to the ground terminal 248. When the transistor T12 is in the on state, a voltage is generated at the gate G5 of the transistor T5 and the gate G6 of the transistor T6, such that the transistors T5 and T6 are turned on. As a result, current flows from the second power rail circuit 250, through the transistor T6 to the output terminal pad, outp 270, through the first output terminal 266 and to the transmission line. The current flows back from the transmission line via the output terminal pad, outn 272, through the second output terminal 268, through the transistors T5 and T9 and then to the ground terminal 248. [00039] In one aspect of the present disclosure, multiple stacked transistors disposed between an output terminal of the voltage mode driver 200 and the power source (e.g., VDD), and/or a ground terminal 248, drive the output terminals of the output driver circuit 260. The stacked transistors may include stacked NMOS transistors. The impedance of the stacked NMOS transistors is biased to 50 Ohms (in this example) to match the impedance characteristics of the transmission line.
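As a rough check of the 200 mV example in paragraphs [00036]–[00037]: Vs follows from the replica bias current and resistance, and the driver current can be estimated by assuming Vs drops across a simple series path of up-leg, transmission line, and down-leg. The series-path assumption and variable names are ours, offered only for illustration:

```python
# Replica bias from the 200 mV example: Vs = Iref * R at terminal 128.
I_REF = 100e-6        # 100 microamperes from current sources I1, I2 and I3
R_TERM = 2_000.0      # 2 kilo Ohms at terminal 128
vs = I_REF * R_TERM   # ~0.2 V, i.e., 200 mV

# Illustrative current estimate, ASSUMING Vs drops across a simple series
# path: up-leg (T4, 50 Ohms) + line (100 Ohms) + down-leg (T7 + T9, 50 Ohms).
# The series-path assumption is not stated in the disclosure.
Z_UP, Z_LINE, Z_DOWN = 50.0, 100.0, 50.0
i_driver = vs / (Z_UP + Z_LINE + Z_DOWN)

print(vs, i_driver)   # ~0.2 V and ~1 mA under this assumption
```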
For example, looking into the output driver stage 240 from a first output terminal 266 to the ground terminal 248, there are two stacked NMOS transistors, namely the transistors T7 and T9. Similarly, two stacked NMOS transistors, T5 and T9, are disposed between the second output terminal 268 and the ground terminal 248. The sum of the impedances of the stacked transistors T5 and T9 or T7 and T9 is 50 Ohms (in this example), which matches the impedance characteristics of the transmission line. [00040] Similarly, looking into the output driver stage 240 from the first output terminal 266 to the power source VDD, there are two stacked transistors, namely the NMOS transistor T6 and the PMOS transistor T8. In addition, the stacked NMOS transistor T4 and PMOS transistor T8 are disposed between the power source VDD and the second output terminal 268. The capacitor Cs includes a first terminal 264 that is coupled to the terminal 262 and a second terminal 269 coupled to the ground terminal 246. As a result, the transistor T4 or T6 is biased, for example, to 50 Ohms to match the impedance characteristic of the transmission line. Therefore, the impedance of the transistor T4 or T6 corresponds to the impedance of the transistor T1 of the replica circuitry 100. [00041] Having stacked transistors T5 and T9 or T7 and T9 between the ground terminal 248 and the output terminal satisfies an electrostatic discharge (ESD) specification by having more than one transistor between the output terminal and the ground terminal 248. For example, if the sum of the impedances of the stacked transistors T2 and T3 is 50 Ohms, then the impedance of the stacked transistors T5 and T9 is also 50 Ohms. This feature of the stacked transistors T5 and T9 also applies to the stacked transistors T7 and T9 based on the switching implementation at the pre-driver stage 210.
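The switching behavior described in paragraphs [00035]–[00038] — which pre-driver input turns on which output devices, and through which terminal the current exits — can be summarized in a small logical model. This is a deliberate simplification for illustration; the function and label names are not from the disclosure:

```python
# Simplified logical model of the cross-coupled output stage: when T11
# conducts (inp 202 high), T4 and T7 turn on and current exits via outn 272,
# returning through T7 + T9; when T12 conducts (inn 204 high), T5 and T6
# turn on and current exits via outp 270, returning through T5 + T9.
# T9 is always biased on by the voltage Vb from the replica circuitry.

def conducting_transistors(inp_high: bool) -> dict:
    """Map the differential input state to the conducting output devices."""
    if inp_high:   # inp 202 high -> T11 on in the pre-driver
        return {"on": ("T4", "T7", "T9"),
                "drive": "outn 272", "return": "outp 270"}
    else:          # inn 204 high -> T12 on in the pre-driver
        return {"on": ("T5", "T6", "T9"),
                "drive": "outp 270", "return": "outn 272"}

state = conducting_transistors(inp_high=True)
print(state["on"])  # ('T4', 'T7', 'T9')
```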
[00042] Similarly, having stacked transistors T6 and T8 or transistors T4 and T8 between the power source VDD and the output terminal satisfies the electrostatic discharge (ESD) specification by having more than one transistor between the output terminal and the power source VDD. For example, if the impedance of the transistor T1 is 50 Ohms, then the impedance of the transistor T4 is also 50 Ohms. This feature of the transistor T4 also applies to the transistor T6 based on the switching implementation at the pre-driver stage 210. [00043] FIGURE 3 illustrates a method 300 for implementing a voltage-mode driver including stacked NMOS transistors according to an aspect of the present disclosure. At block 302, the method starts with generating a first bias voltage for a first pair of stacked MOS devices coupled between a power terminal and a first differential output terminal to match a first transmission line characteristic. In the illustration of FIGURE 2, the first pair of stacked MOS devices includes the transistors T6 and T8 or T4 and T8. At block 304, the method includes generating a second bias voltage for a second pair of stacked MOS devices coupled between a second differential output terminal and a ground terminal to match a second transmission line characteristic. In the illustration of FIGURE 2, the second pair of stacked MOS devices includes the transistors T5 and T9 or T7 and T9. [00044] In one configuration, the output driver includes a means for generating a first bias voltage and a means for generating a second bias voltage. In one aspect of the disclosure, the first and/or second bias voltage means may be the first power rail circuit 220, the second power rail circuit 250 and/or the pre-driver circuit 230 configured to perform the functions recited by the first and/or second bias voltage means. [00045] In one configuration, the output driver includes first, second, third and fourth means for switching electronic signals.
In one aspect of the disclosure, the first, second, third and fourth switching means may be transistors such as the transistors T4, T5, T6, T7, T8, and/or T9 of the output driver stage 240 of the voltage mode driver 200 of FIGURE 2. [00046] FIGURE 4 shows an exemplary wireless communication system 400 in which an embodiment of the voltage-mode driver including stacked NMOS transistors may be advantageously employed. For purposes of illustration, FIGURE 4 shows three remote units 420, 430, and 450 and two base stations 440. It will be recognized that wireless communication systems may have many more remote units and base stations. The remote units 420, 430, and 450 include the voltage-mode drivers including stacked NMOS transistors 425A, 425B, and 425C. FIGURE 4 shows forward link signals 480 from the base stations 440 to the remote units 420, 430, and 450 and reverse link signals 490 from the remote units 420, 430, and 450 to the base stations 440. [00047] In FIGURE 4, the remote unit 420 is shown as a mobile telephone, the remote unit 430 is shown as a portable computer, and the remote unit 450 is shown as a fixed location remote unit in a wireless local loop system. For example, the remote units may be cell phones, hand-held personal communication systems (PCS) units, portable data units such as personal data assistants, and/or fixed location data units such as meter reading equipment. Although FIGURE 4 illustrates remote units, which may employ a voltage-mode driver including stacked NMOS transistors 425A, 425B, and 425C according to the teachings of the disclosure, the disclosure is not limited to these exemplary illustrated units. For instance, a voltage-mode driver including stacked N-type metal oxide semiconductor field-effect transistors according to embodiments of the present disclosure may be suitably employed in any device.
[00048] FIGURE 5 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component, such as the voltage-mode driver including stacked NMOS transistors disclosed above. A design workstation 500 includes a hard disk 501 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 500 also includes a display 502 to facilitate design of a circuit 510 or a semiconductor component 512 such as a voltage-mode driver including stacked NMOS transistors. A storage medium 504 is provided for tangibly storing the circuit design 510 or the semiconductor component 512. The circuit design 510 or the semiconductor component 512 may be stored on the storage medium 504 in a file format such as GDSII or GERBER. The storage medium 504 may be a CD-ROM, DVD, hard disk, flash memory, or other appropriate device. Furthermore, the design workstation 500 includes a drive apparatus 503 for accepting input from or writing output to the storage medium 504. [00049] Data recorded on the storage medium 504 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography. The data may further include logic verification data such as timing diagrams or net circuits associated with logic simulations. Providing data on the storage medium 504 facilitates the design of the circuit design 510 or the semiconductor component 512 by decreasing the number of processes for designing semiconductor wafers. [00050] Although specific circuitry has been set forth, it will be appreciated by those skilled in the art that not all of the disclosed circuitry is required to practice the disclosed embodiments. Moreover, certain well known circuits have not been described, to maintain focus on the disclosure. 
[00051] The methodologies described herein may be implemented by various means depending upon the application. For example, these methodologies may be implemented in hardware, firmware, software, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof. [00052] For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine or computer readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software code may be stored in a memory and executed by a processor. When executed by the processor, the executing software code generates the operational environment that implements the various methodologies and functionalities of the different aspects of the teachings presented herein. Memory may be implemented within the processor or external to the processor. As used herein, the term "memory" refers to any type of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored. [00053] The machine or computer readable medium that stores the software code defining the methodologies and functions described herein includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. 
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. As used herein, disk and/or disc includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer readable media. [00054] In addition to storage on computer readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims. [00055] Although the present teachings and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the technology of the teachings as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized according to the present teachings.
Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
Methods, devices, apparatus, computer-readable media and processors are provided that protect the distribution of media content. Media content is encrypted and the associated cryptographic mechanisms are stored and accessible either remotely at a networked database or internally within a data storage device memory. Access to the cryptographic mechanisms is granted by associating the cryptographic mechanisms with a data storage device identification and, optionally, a computing device identification.
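The access flow summarized in this abstract — forward the storage device identifier to a network device, receive the associated cryptographic mechanism, and use it to unlock the protected content — can be sketched as follows. The lookup table, identifier string, and toy XOR "cryptographic mechanism" are illustrative placeholders only; the disclosure does not specify a particular algorithm or data layout:

```python
# Minimal sketch of the identifier-to-key access flow described above.
# KEY_DB stands in for the networked database that associates a storage
# device identifier with a cryptographic mechanism; the single-byte XOR
# cipher is a toy stand-in for whatever mechanism is actually used.

KEY_DB = {"DEV-001": b"\x5a"}   # network side: storage-device ID -> key

def fetch_key(storage_device_id: str):
    """Return the cryptographic mechanism associated with the device ID."""
    return KEY_DB.get(storage_device_id)

def access_content(storage_device_id: str, protected: bytes) -> bytes:
    """Grant access only if the device ID maps to a key, then decrypt."""
    key = fetch_key(storage_device_id)
    if key is None:
        raise PermissionError("no cryptographic mechanism for this device")
    return bytes(b ^ key[0] for b in protected)  # toy decryption

protected = bytes(b ^ 0x5A for b in b"media")    # toy "protected content"
print(access_content("DEV-001", protected))      # b'media'
```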
CLAIMS What is claimed is: 1. A method for obtaining content in a protected environment, the method comprising: receiving a storage device comprising a storage device identifier and protected content; forwarding the storage device identifier to a network device; receiving at least a reference to a cryptographic mechanism from the network device based on an association with the storage device identifier; and accessing at least a portion of the protected content with the cryptographic mechanism. 2. The method of claim 1, further comprising: forwarding to the network device a request for access to at least one other portion of the protected content different from the portion accessed with the cryptographic mechanism, the request comprising at least one of the storage device identifier and a computing device identifier, the computing device identifier associated with a computing device operable to receive the storage device; receiving at least a reference to at least one other cryptographic mechanism from the network device based on an association with at least one of the storage device identifier and the computing device identifier, the other cryptographic mechanism corresponding to at least the other portion of the protected content; and accessing at least the other portion of the protected content with the other cryptographic mechanism. 3. The method of claim 1, further comprising providing payment information prior to receiving the cryptographic mechanism, the payment information corresponding to a payment to access at least the portion of the protected content. 4. The method of claim 2, further comprising providing payment information prior to receiving the other cryptographic mechanism, the payment information corresponding to a payment to access at least the other portion of the protected content. 5. 
The method of claim 1, wherein receiving at least the reference to the cryptographic mechanism from the network device is further based on a confirmation of an availability for use of the storage device. 6. The method of claim 5, wherein the confirmation of the availability for use of the storage device is based on an association between the storage device identifier and a procurement transaction involving the storage device. 7. The method of claim 1, further comprising forwarding a computing device identifier to the network device, the computing device identifier associated with a computing device operable to receive the storage device, and wherein receiving the reference to the cryptographic mechanism from the network device further comprises receiving at the computing device based on an association with the computing device identifier. 8. The method of claim 7, further comprising forwarding another computing device identifier to the network device, the other computing device identifier associated with another computing device operable to receive the storage device, and wherein receiving at least the reference to the cryptographic mechanism from the network device further comprises receiving at the other computing device based on an association with the other computing device identifier. 9. The method of claim 1, wherein receiving at least the reference to the cryptographic mechanism further comprises receiving the cryptographic mechanism. 10. The method of claim 1, wherein the protected content comprises content obscured by a predetermined cryptographic mechanism. 11. The method of claim 1, wherein the step of receiving a storage device comprising a storage device identifier and protected content further comprises the step of receiving a storage device comprising a storage device identifier, protected content and non-protected content. 12. 
The method of claim 11, further comprising the step of accessing the non-protected content, wherein the non-protected content includes preview content. 13. The method of claim 11, further comprising the step of accessing the non-protected content, wherein the non-protected content includes limited-use content. 14. The method of claim 13, wherein the step of accessing the non-protected content, wherein the non-protected content includes limited-use content, further defines the limited-use content as being limited-use content based on a computing device that receives the storage device. 15. A computer readable medium tangibly storing a sequence of instructions that, when executed, cause a computer device to perform the actions of: receiving a storage device comprising a storage device identifier and protected content; forwarding the storage device identifier to a network device; receiving at least a reference to a cryptographic mechanism from the network device based on an association with the storage device identifier; and accessing at least a portion of the protected content with the cryptographic mechanism. 16. At least one processor configured to perform the actions of: receiving a storage device comprising a storage device identifier and protected content; forwarding the storage device identifier to a network device; receiving at least a reference to a cryptographic mechanism from the network device based on an association with the storage device identifier; and accessing at least a portion of the protected content with the cryptographic mechanism. 17. 
A wireless device, comprising: means for receiving a storage device comprising a storage device identifier and protected content; means for forwarding the storage device identifier to a network device; means for receiving at least a reference to a cryptographic mechanism from the network device based on an association with the storage device identifier; and means for accessing at least a portion of the protected content with the cryptographic mechanism. 18. A computing device, comprising: a processing engine; and a content access initiator module executable by the processing engine, the content access initiator operable to recognize protected content stored on a storage device, communicate a storage device identifier to a network device, receive from the network device at least a reference to a first cryptographic mechanism associated with the storage device identifier and apply the first cryptographic mechanism to at least a portion of the protected content to convert the portion of the protected content to a portion of un-protected content. 19. The computing device of claim 18, further comprising the storage device identifier and the first cryptographic mechanism each stored in communication with the content access initiator, wherein the content access initiator module is further operable to communicate, wirelessly, the storage device identifier to the network device and receive, wirelessly, from the network device at least the reference to the first cryptographic mechanism associated with the storage device identifier. 20. 
The computing device of claim 18, further comprising a computing device identifier associated with the computing device, wherein the content access initiator module is further operable to communicate the computing device identifier and the storage device identifier to the network device, and wherein at least the reference to the first cryptographic mechanism corresponds to a predetermined association between the computing device identifier and the storage device identifier. 21. The computing device of claim 18, further comprising the storage device in removable communication with the computing device, the storage device comprising the storage device identifier and the protected content. 22. The computing device of claim 21, wherein the storage device further comprises non-protected content, wherein the non-protected content includes preview content. 23. The computing device of claim 21, wherein the storage device further comprises non-protected content, wherein the non-protected content includes limited-use content. 24. The computing device of claim 23, wherein the limited-use content limits use based on which computing device is associated with the data storage device. 25. The computing device of claim 21, wherein the storage device is selected from the group consisting of a flash media card, a compact disc (CD) and a digital video disc (DVD). 26. The computing device of claim 21, wherein the protected content comprises a plurality of protected content portions each corresponding to one of a plurality of cryptographic mechanisms, wherein the access initiator module is operable to receive at least one of the plurality of cryptographic mechanisms corresponding to at least one of the plurality of protected content portions. 27. 
The computing device of claim 18, further comprising the first cryptographic mechanism and a second cryptographic mechanism stored in communication with the content access initiator, wherein the protected content comprises the portion of protected content and another portion of protected content, wherein the access initiator module is operable to apply the second cryptographic mechanism to at least the other portion of the protected content to convert the other portion of the protected content to another portion of un-protected content. 28. The computing device of claim 27, further comprising a memory having payment information, wherein the content access initiator module is further operable to forward the payment information to the network device in exchange for the second cryptographic mechanism. 29. The computing device of claim 18, wherein the computing device comprises a wireless device operable on a wireless network. 30. A method for distributing content in a protected environment, the method comprising: obtaining an association between a first storage device identifier and a cryptographic mechanism; obtaining at least a reference to the cryptographic mechanism; receiving a request from a computing device for access to at least a portion of a protected content, the request comprising a second storage device identifier; and forwarding at least the reference to the cryptographic mechanism to the computing device based on a correspondence between at least a portion of the second storage device identifier and the first storage device identifier. 31. 
The method of claim 30, further comprising: obtaining an association between the first storage device identifier, at least one other portion of the protected content, and at least one other cryptographic mechanism; obtaining at least a reference to the other cryptographic mechanism; receiving a request from the computing device for access to at least the one other portion of the protected content, the request comprising the second storage device identifier; and forwarding at least the reference to the one other cryptographic mechanism to the computing device based on a correspondence between at least a portion of the second storage device identifier and the first storage device identifier. 32. The method of claim 30, further comprising: obtaining an association between a computing device identifier, the first storage device identifier and the cryptographic mechanism, the computing device identifier associated with the computing device; wherein the request received from the computing device further comprises one other computing device identifier; and wherein forwarding at least the reference to the cryptographic mechanism to the computing device further comprises forwarding based on a correspondence between the computing device identifier and the one other computing device identifier. 33. The method of claim 30, further comprising: obtaining an association between a procurement transaction, the first storage device identifier and the cryptographic mechanism, the procurement transaction associated with the computing device; and wherein forwarding at least the reference to the cryptographic mechanism to the computing device further comprises forwarding based on correspondence between at least the portion of the second storage device identifier, the first storage device identifier and the procurement transaction. 34. 
The method of claim 30, further comprising: receiving a request from at least one other computing device for access to at least a portion of the protected content, the request comprising a third storage device identifier; and forwarding a purchase option message to the one other computing device based on a lack of correspondence between at least a portion of the third storage device identifier and the first storage device identifier. 35. The method of claim 34, further comprising receiving payment information from the one other computing device in response to the purchase option message, and forwarding at least the reference to the cryptographic mechanism to the computing device based on the received payment information. 36. The method of claim 30, wherein forwarding at least the reference to the cryptographic mechanism further comprises forwarding the cryptographic mechanism. 37. The method of claim 30, wherein obtaining an association between a first storage device identifier and a cryptographic mechanism further comprises obtaining an association between a plurality of portions of the protected content and a corresponding plurality of cryptographic mechanisms, wherein each of the plurality of portions of protected content corresponds to at least one of the plurality of cryptographic mechanisms. 38. The method of claim 30, further comprising: loading unprotected content on a storage device having the first storage device identifier; obscuring at least a portion of the unprotected content with the cryptographic mechanism, thereby defining at least the portion of the protected content; and defining an association between the first storage device identifier and the cryptographic mechanism. 39. 
The method of claim 30, further comprising: storing one or more procurement transaction data, wherein each procurement transaction is associated with a corresponding portion of the protected content; and recognizing authorization of the computing device to access protected content portions based on the stored procurement transaction data. 40. The method of claim 39, further comprising communicating protected content portion-related data to the computing device based on recognition of the authorization to access protected content portions. 41. The method of claim 30, further comprising: monitoring content access activity of the computing device; and communicating to the computing device content purchase recommendations based on the monitoring of content access activity. 42. The method of claim 41, further comprising: monitoring environmental attributes of the computing device; and communicating to the computing device content purchase recommendations based on the monitoring of environmental attributes. 43. A computer readable medium tangibly storing a sequence of instructions that, when executed, cause a computer device to perform the actions of: obtaining an association between a first storage device identifier and a cryptographic mechanism; obtaining at least a reference to the cryptographic mechanism; receiving a request from a computing device for access to at least a portion of a protected content, the request comprising a second storage device identifier; and forwarding at least the reference to the cryptographic mechanism to the computing device based on a correspondence between at least a portion of the second storage device identifier and the first storage device identifier. 44. 
At least one processor configured to perform the actions of: obtaining an association between a first storage device identifier and a cryptographic mechanism; obtaining at least a reference to the cryptographic mechanism; receiving a request from a computing device for access to at least a portion of a protected content, the request comprising a second storage device identifier; and forwarding at least the reference to the cryptographic mechanism to the computing device based on a correspondence between at least a portion of the second storage device identifier and the first storage device identifier. 45. A network device, comprising: means for obtaining an association between a first storage device identifier and a cryptographic mechanism; means for obtaining at least a reference to the cryptographic mechanism; means for receiving a request from a computing device for access to at least a portion of a protected content, the request comprising a second storage device identifier; and means for forwarding at least the reference to the cryptographic mechanism to the computing device based on a correspondence between at least a portion of the second storage device identifier and the first storage device identifier. 46. A network device, comprising: a processing engine; and a personalization module executed by the processing engine, the personalization module operable to receive a storage device identifier from a networked computing device, determine a cryptographic mechanism associated with the storage device identifier and communicate at least a reference to the cryptographic mechanism to the computing device. 47. The network device of claim 46, further comprising a network database in communication with the personalization module, the network database comprising the storage device identifier in association with at least the reference to the cryptographic mechanism. 48. 
The network device of claim 47, wherein the network database further comprises an identification of a plurality of portions of a protected content in association with a plurality of cryptographic mechanisms, wherein the plurality of portions of protected content are further associated with the storage device identifier, wherein the personalization module is further operable to receive an identification of one of the plurality of portions of the protected content, and wherein the reference to the cryptographic mechanism further comprises a reference to the one of the plurality of cryptographic mechanisms corresponding to the identified one of the plurality of portions of protected content. 49. The network device of claim 47, wherein the network database further comprises a computing device identifier associated with the storage device identifier, wherein the personalization module is further operable to communicate at least the reference to the cryptographic mechanism to the computing device if the computing device corresponds to the computing device identifier. 50. The network device of claim 47, wherein the network database further comprises procurement information associated with the storage device identifier, wherein the personalization module is further operable to communicate at least the reference to the cryptographic mechanism to the computing device if the computing device is associated with the procurement information. 51. 
A method of distributing content, comprising: loading unprotected content on a storage device having a storage device identifier, the storage device configured for removable communication with a computing device; obscuring at least a portion of the unprotected content with a cryptographic mechanism, thereby defining at least a portion of a protected content; defining an association between the storage device identifier and the cryptographic mechanism; and forwarding the defined association to a network device operable to provide access to at least the portion of the protected content to a networked computing device having the storage device identifier. 52. The method of claim 51, wherein obscuring further comprises obscuring a plurality of portions of the unprotected content with at least one of a plurality of cryptographic mechanisms, thereby defining a plurality of portions of the protected content, wherein defining the association further comprises associating the plurality of portions of the protected content with the corresponding one of the plurality of cryptographic mechanisms used to obscure the respective portion, and wherein forwarding the defined association further comprises forwarding to the network device operable to provide access to a requested portion of the protected content. 53. A data storage device, comprising: a memory comprising a data storage device identifier and protected content, wherein the protected content is convertible to unprotected content by communicating the identifier to a network device that responds with a cryptographic mechanism associated with the identifier. 54. The data storage device of claim 53, wherein the data storage device is selected from the group consisting of a flash media card, a compact disc (CD) and a digital video disc (DVD). 55. 
The data storage device of claim 53, wherein the protected content is selected from the group consisting of a gaming application, an executable application, a text file, an audio file, an image file, and a video file. 56. The data storage device of claim 53, wherein the data storage device is a removable data storage device that is capable of being removably secured and read by a computing device. 57. The data storage device of claim 56, wherein the computing device is operable to recognize the protected content, communicate the identifier to the network device, receive the cryptographic mechanism from the network device and apply the cryptographic mechanism to the protected content. 58. The data storage device of claim 53, wherein the protected content further comprises a plurality of protected content, wherein each of the plurality of protected content has a content identifier. 59. The data storage device of claim 58, wherein at least a portion of the plurality of protected content is convertible to unprotected content by communicating a respective content identifier to the network device that responds with one of a plurality of cryptographic mechanisms corresponding with the respective content identifier. 60. The data storage device of claim 53, wherein the memory further comprises non-protected content, wherein the non-protected content includes preview content. 61. The data storage device of claim 53, wherein the memory further comprises non-protected content, wherein the non-protected content includes limited-use content.
METHODS AND APPARATUS FOR PROTECTED DISTRIBUTION OF APPLICATIONS AND MEDIA CONTENT

FIELD OF INVENTION

The described aspects relate generally to protected distribution of media content in a network environment. More particularly, the described aspects relate to protected distribution of media content and applications on a removable data storage device.

BACKGROUND

Removable data storage devices, such as compact discs (CDs), digital video discs (DVDs), flash media cards and the like, have become increasingly prevalent in the distribution of digital media content, such as music files, video files, multimedia files, video gaming applications, business applications, text files and the like. These types of data storage devices afford the media distributor a relatively inexpensive medium for physical data storage, while affording the user of the removable data storage device a means for interfacing the storage device with a wide variety of computing devices, such as desktop computers, laptop computers, video game consoles, handheld computing devices and the like.

One on-going concern of the media content providers is the protection of intellectual property rights associated with the media content. If the content can readily be moved between computing devices and, thus, between users, the copyright and patent protection (i.e., the digital rights) associated with the media content and/or applications may be compromised. Current means for distributing data in a protected environment that ensures strong intellectual property protection are cost prohibitive and/or technically prohibitive. Realizing that removable data storage devices are generally inexpensive devices, content providers are reluctant to implement methods for intellectual property protection that may add cost to the devices. 
[0004] In addition to content provider concerns with the protection of intellectual property rights, the user of the content desires a protection means that does not otherwise burden their access to the media content. User-friendly access to the content is important from a device marketability standpoint, ensuring that the user continues to purchase data storage devices of this type. Thus, a need exists to develop intellectual property protection means that are seamlessly operable and, as such, provide minimal burden to the user of the storage device.

[0005] Other protection concerns may also be related to removable data storage devices depending on the type and form of content stored on the device. In this regard, data storage devices that store large media files and/or objects may pose additional concerns. For example, a content provider may provide a large amount of media content, applications or files in a single data storage device. Some of the content, such as executable files and the like, may require protection, while other content, such as resource files and the like, may not warrant protection. Being able to limit protection to content of interest provides many benefits. For example, by limiting protection to only relevant content (e.g., the music file, the video file, the video game application), the remaining content may be accessible to multiple users and the remaining content may entice the additional users to purchase the protected content. Additionally, by limiting the protected content to only relevant content, the overall process of converting the content from protected to un-protected content is streamlined, thus adding to the efficiency of the process and making the user experience more friendly.

[0006] In addition, large media files and/or applications may warrant individual protection for designated portions of the content. 
By developing methods and processes that allow for individual protection of various portions of content, new and innovative business models may be devised to provide users access to such content. For example, in today's video gaming market, users that wish to purchase additional features or upgrades to a video game application are typically required to purchase an additional data storage device that warrants a return visit to the video game retailer. Therefore, a need exists to provide a data storage device that allows the user on-demand access to additional features, upgrades, etc., thus eliminating the need for the user to re-visit the retail outlet or otherwise find a purchasing option for the additional features.

[0007] Therefore, a need exists to develop a means for implementing data protection in removable data storage devices that affords the content provider a reasonable solution from a cost perspective and affords the device user a user-friendly means of accessing the protected content. Additionally, a need exists to provide a data storage device that includes storage of a large volume of content/applications, some of which require protection and some of which do not require protection. Also, a need exists to develop methods and apparatus for providing on-demand protected access to additional features or content related to main content stored on the data storage device.

SUMMARY

Thus, devices, methods, apparatus, computer-readable media and processors are presented that provide data protection in removable data storage devices, such as CDs, DVDs, flash media cards and the like. The data protection that is afforded is both simplistic in technological design and reasonable from a cost implementation standpoint. 
The devices, methods, apparatus, computer-readable media and processors can be configured to provide protection to only those portions of content stored on the device that require such protection, thereby allowing for un-protected content to remain accessible to all users. Additionally, the methods, apparatus, computer-readable media and processors may be configured to limit the access to the protected content based on association of the storage device with one or more computing devices. Also, the methods, apparatus, computer-readable media and processors may be configured to provide individual protection to portions of the content stored on the devices, thus limiting user access to individual portions of the content based upon the licensing rights of the user.

In some aspects, a method for obtaining content in a protected environment comprises receiving a storage device comprising a storage device identifier and protected content. The method further includes forwarding the storage device identifier to a network device. Further, the method includes receiving at least a reference to a cryptographic mechanism from the network device based on an association with the storage device identifier. Additionally, the method includes accessing at least a portion of the protected content with the cryptographic mechanism. In a related aspect, a computer readable medium tangibly stores a sequence of instructions that, when executed, cause a computer device to perform the actions described above. In a further related aspect, at least one processor may be configured to perform the operations described above.

In other aspects, a wireless device comprises means for receiving a storage device comprising a storage device identifier and protected content. 
The wireless device further comprises means for forwarding the storage device identifier to a network device, and means for receiving at least a reference to a cryptographic mechanism from the network device based on an association with the storage device identifier. Additionally, the wireless device includes means for accessing at least a portion of the protected content with the cryptographic mechanism.

In yet other aspects, a computing device, such as a wireless device, a desktop computer, a laptop device, a gaming console or the like, comprises a processing engine and a content access initiator module executable by the processing engine. The content access initiator being operable to recognize protected content stored on a storage device, communicate a storage device identifier to a network device, receive from the network device at least a reference to a first cryptographic mechanism associated with the storage device identifier and apply the first cryptographic mechanism to at least a portion of the protected content to convert the portion of the protected content to a portion of un-protected content.

In still other aspects, a method for distributing content in a protected environment comprises obtaining an association between a first storage device identifier and a cryptographic mechanism, and obtaining at least a reference to the cryptographic mechanism. The method further includes receiving a request from a computing device for access to at least a portion of a protected content, where the request comprises a second storage device identifier. Additionally, the method includes forwarding at least the reference to the cryptographic mechanism to the computing device based on a correspondence between at least a portion of the second storage device identifier and the first storage device identifier. 
In a related aspect, a computer readable medium tangibly stores a sequence of instructions that, when executed, cause a computer device to perform the actions described above. In a further related aspect, at least one processor may be configured to perform the operations described above.

In further aspects, a network device, such as a network server or any other device capable of being networked with a computing device, is defined. The network device comprises means for obtaining an association between a first storage device identifier and a cryptographic mechanism, and means for obtaining at least a reference to the cryptographic mechanism. The network device further includes means for receiving a request from a computing device for access to at least a portion of a protected content, the request comprising a second storage device identifier. Additionally, the network device includes means for forwarding at least the reference to the cryptographic mechanism to the computing device based on a correspondence between at least a portion of the second storage device identifier and the first storage device identifier.

[0014] In other aspects, a network device comprises a processing engine and a personalization module executed by the processing engine. The personalization module being operable to receive a storage device identifier from a networked computing device, determine a cryptographic mechanism associated with the storage device identifier and communicate at least a reference to the cryptographic mechanism to the computing device.

In still other aspects, a method of distributing content comprises loading unprotected content on a storage device having a storage device identifier, the storage device configured for removable communication with a computing device. The method further includes obscuring at least a portion of the unprotected content with a cryptographic mechanism, thereby defining at least a portion of a protected content. 
Also, the method includes defining an association between the storage device identifier and the cryptographic mechanism. Additionally, the method includes forwarding the defined association to a network device operable to provide access to at least the portion of the protected content to a networked computing device having the storage device identifier.

In some aspects, a data storage device, such as a media card, CD, DVD, game cartridge or the like, includes a memory comprising a data storage device identifier and protected content, such as encrypted content. The data storage device identifier may be a serial number or any other identifier associated with the device. The protected content is convertible to unprotected content by communicating the identifier to a network device that responds with a cryptographic mechanism associated with the identifier.

[0017] Thus, the described aspects provide for a cost effective and efficient means for protecting content stored on removable data storage devices.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosed aspects will hereinafter be described in conjunction with the appended drawings, provided to illustrate and not to limit the disclosed aspects, wherein like designations denote like elements, and in which:

Fig. 1 illustrates one aspect of a general system for providing content distribution in a protected environment;

Fig. 2 is a block diagram of one aspect of a system for providing content distribution in a protected environment;

Fig. 3 illustrates one aspect of a wireless network, specifically a cellular device network, associated with the computing device of Fig. 2;

Fig. 4 is a flow diagram of one aspect for provisioning a removable data storage device;

Fig. 5 is a flow diagram of aspects for personalizing a removable data storage device, a computing device and protected content in a communications network;

Figs. 6 and 7 are process flow diagrams of one aspect for providing content distribution in a protected environment;

Figs. 
8 and 9 are process flow diagrams of an alternate aspect for providing content distribution in a protected environment; and

Figs. 10-12 are process flow diagrams of yet another alternate aspect for providing content distribution in a protected environment.

DETAILED DESCRIPTION

The present devices, apparatus, methods, computer-readable media and processors are described with reference to the accompanying drawings, in which aspects of the invention are shown. The devices, apparatus, methods, computer-readable media and processors may, however, be embodied in many different forms and should not be construed as limited to the aspects set forth herein; rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Further, in this description, like numbers refer to like elements throughout.

The present devices, apparatus, methods, computer-readable media and processors provide for protected distribution of content that is stored in removable data storage devices, such as magnetic media, optical media, tape, soft disk, hard disk or the like. For example, the removable data storage device may take the form of a CD, DVD, flash media card or the like. Content, as referred to herein, encompasses any digital media file, application, routine, data or other information, executable or non-executable, that may be stored on a data storage device. Further, protected content, as referred to herein, comprises a secured and/or obscured form of the content, such as may be obtained by encrypting the content, hashing the content, ciphering the content, etc. Additionally, key, as referred to herein, comprises a cryptographic mechanism to transform unprotected content into and/or out of protected content, such as an encryption algorithm applied to the content, a hash, a cipher, a public key, a private key, a symmetric key, etc.

Referring to Fig. 
1, in one aspect, a system for providing protected distribution of content is schematically illustrated. The system includes removable data storage device 10, such as CD 10A, DVD 10B, flash media card 10C and smart card 10D. The removable data storage devices shown in Figure 1 are by way of example only; other removable data storage devices are also contemplated and within the scope of the present aspects. The removable data storage device includes memory 12 that stores protected content 14 and a data storage device identifier 16. Protected content is the term herein used to refer to all content that is protected from user access; typically protected content may take the form of coded or ciphered content (i.e., encrypted content). The data storage device identifier 16 is typically a data storage device serial number or some other identifier that will uniquely differentiate the data storage device from other data storage devices.

The removable data storage device 10 is in data communication with the computing device 20. The computing device may include wireless communication device 20A, wireless gaming device 20B, laptop computer 20C or desktop computer 20D. The computing devices shown in Fig. 1 are by way of example only; other computing devices are also contemplated and within the scope of the present aspects. In many aspects, data communication between the data storage device and the computing device requires the storage device to be removably secured within the computing device. However, in other aspects it is also possible for the system to be configured such that the storage device is in wired or wireless data communication with the computing device while the storage device is remote from the computing device. 
For example, the data storage device may be configured to communicate with the computing device via short-range communication, such as via infrared (IR) waves, Bluetooth(R) protocol messages, Wi-Fi technology, Wi-Max technology, or the like.

The computing device 20 includes a computer platform 22 that provides for the execution of content access initiator module 24. Content access initiator module includes executable instructions for recognizing protected content 14 on a data storage device 10 in communication with the computing device; communicating the data storage device identifier 16 to a network device 40 in response to the recognition of protected content; receiving one or more content keys 42 from the network device in response to the communication of the identifier 16; and applying the one or more keys to the protected content 14 for the purpose of accessing the content.

The system may also include a network device 40, such as a network server that is in network communication with the computing device 20. The network device executes personalization module 44, which determines associations between data storage device identifiers 16 and content keys 42. Once the personalization module determines association between data storage device identifiers 16 and content keys 42, the personalization module may retrieve the one or more content keys from network database 46. In turn, the network device 40 may communicate the one or more content keys 42 to the computing device, which applies the one or more keys to the protected content 14 for the purpose of converting the protected content to user-accessible unprotected content. The network device shown in Figure 1 is by way of example only; any device capable of being networked to the computing device 20 and capable of executing personalization module 44 is also contemplated and within the scope of the present aspects. 
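The key-retrieval flow of the content access initiator module described above (recognize protected content, forward the storage device identifier, receive the associated key, apply the key) can be sketched in simplified form. The sketch below is purely illustrative and forms no part of any claim: the class and function names are hypothetical, and the SHA-256-based XOR keystream merely stands in for whatever cryptographic mechanism a real deployment would use (e.g., a vetted symmetric cipher).

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Illustrative counter-mode SHA-256 keystream; a real system would use
    # a vetted cipher (the description leaves the mechanism open: encryption,
    # hashing, ciphering, etc.).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def apply_key(content: bytes, key: bytes) -> bytes:
    # XOR against the keystream is its own inverse: the same call converts
    # unprotected content to protected content and back.
    return bytes(a ^ b for a, b in zip(content, _keystream(key, len(content))))

class ContentAccessInitiator:
    """Hypothetical client-side module mirroring the flow in the text."""

    def __init__(self, network_device):
        # network_device stands in for the server running the personalization
        # module; it must expose a lookup_key(identifier) call (assumed name).
        self.network_device = network_device

    def access(self, storage_device: dict) -> bytes:
        # 1. Recognize protected content and read the device identifier.
        identifier = storage_device["identifier"]
        protected = storage_device["protected_content"]
        # 2./3. Communicate the identifier and receive the associated key.
        key = self.network_device.lookup_key(identifier)
        # 4. Apply the key to convert protected content to un-protected content.
        return apply_key(protected, key)
```

In use, the module would be handed whatever object wraps the network connection; here a stub server returning a fixed key is enough to exercise the round trip.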
As dictated by the functionality of the computing device, the network device may be in wired, wireless or both wired and wireless communication with the computing device.

In accordance with the system aspect, Figure 2 provides a more detailed block diagram of the system for providing protected distribution of content. The removable data storage device 10 may include memory 12, such as flash, read-only and/or random-access memory (RAM and ROM), EPROM, EEPROM or the like, that stores protected content 14 and a data storage device identifier 16. As illustrated, the data storage device 10 may store a single entity of protected content, such as first protected content 14A, or the data storage device may optionally store a plurality of protected content, such as second protected content 14B and nth protected content 14C. In aspects in which the data storage device stores a plurality of protected content, each protected content portion or entity may, optionally, have an associated protected content portion identifier 18A, 18B and 18C. The protected content portion identifiers may be associated with one or more content keys 42 that are applied to the protected content portion to convert the content to un-protected content.

In some aspects all of the content stored on the data storage device may be protected content, while in other aspects the data storage device may store additional non-protected content 15. The non-protected content 15 may be content that is readily accessible to all users at any time. For example, the non-protected content may be a media player application and the protected content may be one or more media files (e.g., music files, video files or the like). Alternatively, the non-protected content may be files, applications, routines or the like that are used in conjunction with the protected content once the protected content has been converted to un-protected content. 
For example, the data storage device may store a large quantity of applications and/or media resources, where the core applications may be protected and the resource files may be non-protected. Once access has been granted to the protected core applications, the core applications are deemed to be executable and may utilize the non-protected resource files during execution.

In some aspects, the non-protected content 15 may include a preview of the protected content 14 stored on the storage device 10 and/or a preview of additional related content that is either stored and protected on the storage device or stored remotely at a network device, such as additional versions of a gaming application, additional related music or video files or the like. In such aspects, the non-protected content may include an embedded link that provides the user access to a network server or network site for the purpose of purchasing the protected content and/or additional related content. In aspects in which the non-protected content 15 includes a preview of the protected content 14, the data storage devices may be gratuitously distributed to potential content buyers, with the non-protected preview content acting as an enticement to purchase the protected content. In other aspects in which the non-protected content is a preview of additional related content (i.e., content not originally purchased by the buyer of the data storage device), the additional content may be additional protected content stored on the data storage device or the additional content may be remotely stored content that is downloaded to the computing device 20 upon purchase.

[0036] Additionally, in some aspects the non-protected content 15 may include limited-use of the protected content 14 stored on the storage device 10 and/or limited-use of additional related content that is either stored and protected on the storage device or stored remotely at a network device.
For example, the non-protected content 15 may include a limited-use gaming application, music file, video file or the like. In such aspects, the data storage device 10 may be configured such that the non-protected content 15 has limited-use, such as: a predetermined finite number of uses or plays; a predetermined limited time period in which the non-protected content may be available; a predetermined set of functionality less than the full functionality of the protected content; and an accessibility to a predetermined limited portion of the full amount of content. Alternatively, in other aspects, the data storage device 10 may be configured such that limited-use of the non-protected content is associated with the computing device. For example, a non-protected music file may be limited to two plays per computing device, thus allowing for the non-protected music file to be played up to two times on any accommodating computing device. In such aspects, the network device 40 may provide for the tracking of limited-use by each computing device by requiring the computing device to communicate a device identifier to the network device upon initial activation of the non-protected limited-use content.

The system additionally includes a computing device 20 that has a computer platform 22 that can transmit and receive data across network 68, and execute routines and applications stored in computing device data repository 26 or data storage device memory 12.
The data repository 26 stores content access initiator module 24 that provides instructions that are executed by content access initiator logic 27 for recognizing protected content on data storage devices that are read by the computing device; communicating the data storage device identifier to a network device in response to the recognition of protected content; receiving one or more content keys from the network device in response to the communication of the identifier; and applying the one or more keys to the protected content for the purpose of accessing the content. In other aspects, the content access initiator module 24 may be stored on the data storage device as non-protected content 15.

The data repository 26 may typically also store a computing device identifier 29. In some aspects, the computing device identifier may be implemented to associate the computing device with the data storage device and/or the content keys.

[0039] The data repository 26 may comprise volatile and nonvolatile memory such as read-only and/or random-access memory (RAM and ROM), EPROM, EEPROM, flash cards, or any memory common to computer platforms. Further, data repository 26 may include one or more flash memory cells or may be any secondary or tertiary storage device, such as magnetic media, optical media, tape, or soft or hard disk.

[0040] Further, computer platform 22 also includes at least one processing engine 28, which may be an application-specific integrated circuit ("ASIC"), or other chipset, processor, logic circuit, or other data processing device. Processing engine 28 or other processor such as an ASIC may execute an application programming interface ("API") layer 30 that interfaces with any resident or non-resident programs, such as content access initiator module 24, stored in a data repository 26 of the computing device 20 or in the memory 12 of the data storage device 10.
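The content access initiator flow described above — recognize protected content, communicate the storage device identifier to the network device, receive one or more content keys, and apply them — can be sketched as follows. This is an illustrative outline only, not the module's actual implementation: the function names and the dictionary-based storage device representation are hypothetical, and the network exchange and key application are passed in as stand-in callables.

```python
def access_protected_content(storage, request_keys, apply_keys):
    """Sketch of the content access initiator flow (module 24 / logic 27).

    storage      -- hypothetical dict with 'identifier' and 'protected_content'
    request_keys -- callable standing in for the identifier -> network device
                    exchange; returns the content key(s)
    apply_keys   -- callable standing in for applying the key(s) to the
                    protected content, yielding un-protected content
    """
    # Recognize whether the storage device holds protected content at all.
    if not storage.get("protected_content"):
        return None  # nothing to unlock; any non-protected content is usable as-is

    # Communicate the data storage device identifier and receive the key(s).
    keys = request_keys(storage["identifier"])

    # Apply the key(s) to convert the protected content to accessible content.
    return apply_keys(storage["protected_content"], keys)
```

A caller would supply the real network round trip and decryption routine in place of the two stubs.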
In aspects in which the computing device is a wireless computing device, the API 30 is typically a runtime environment interface executing on the computing device. One such runtime environment is Binary Runtime Environment for Wireless(R) (BREW(R)) software developed by Qualcomm, Inc., of San Diego, California. Other runtime environments may be utilized that, for example, operate to control the execution of applications on wireless computing devices.

Processing engine 28 typically includes various processing subsystems 32 embodied in hardware, firmware, software, and combinations thereof, that enable the functionality of computing device 20 and the operability of the computing device on network 68. For example, processing subsystems 32 allow for initiating and maintaining network communications, and exchanging data, with other networked devices. In one aspect, in which the computing device is embodied by a wireless communication device, communications processing engine 28 may include one or a combination of processing subsystems 32, such as: sound, non-volatile memory, file system, transmit, receive, searcher, layer 1, layer 2, layer 3, main control, remote procedure, handset, power management, diagnostic, digital signal processor, vocoder, messaging, call manager, Bluetooth system, Bluetooth LPOS, position determination, position engine, user interface, sleep, data services, security, authentication, USIM/SIM, voice services, graphics, USB, multimedia such as MPEG, GPRS, etc. For the disclosed aspects, processing subsystems 32 of processing engine 28 may include any subsystem components that interact with applications executing on computer platform 22.
For example, processing subsystems 32 may include any subsystem components which receive data reads and data writes from API 30 on behalf of the content access initiator module 24.

Computer platform 22 may further include a communications module 34 embodied in hardware, firmware, software, and combinations thereof, that enables communications among the various components of the computing device 20, as well as between the device 20 and the network 68. The communication module may include the requisite hardware, firmware, software and/or combinations thereof for establishing a wireless communication connection.

Additionally, computing device 20 may include input mechanism 36 for generating inputs into the device, and output mechanism 38 for generating information for consumption by the user of the computing device. For example, input mechanism 36 may include a mechanism such as a key or keyboard, a mouse, a touchscreen display, a microphone in association with a voice recognition module, etc. Further, for example, output mechanism 38 may include a display, an audio speaker, a haptic feedback mechanism, etc.

The system additionally includes a network device 40 that has a computing platform 48 that can transmit and receive data across network 68. The computer platform 48 includes a processing engine 50 that is capable of executing modules, routines and/or applications stored in network device data repository 52 or in network database 46. The processing engine 50 may be an application-specific integrated circuit ("ASIC"), or other chipset, processor, logic circuit, or other data processing device. The network database 46 may reside in a device remote from the network device 40 or the database may reside internally within the network device.
In aspects in which the database 46 resides internally within the network device 40, the database may be included within the data repository 52.

[0045] The data repository 52 may comprise volatile and nonvolatile memory such as read-only and/or random-access memory (RAM and ROM), EPROM, EEPROM, flash cards, or any memory common to computer platforms. Further, data repository 52 may include one or more flash memory cells or may be any secondary or tertiary storage device, such as magnetic media, optical media, tape, or soft or hard disk. The data repository 52 may include a personalization module 44 that includes instructions utilized by the personalization logic 54 for determining associations between data storage devices 10 and content keys 42. In alternate aspects, the personalization module 44 may also determine associations between computing devices 20 and data storage devices 10 and/or associations between protected content portions 18 and content keys 42. The personalization module 44 determines associations by accessing the network database 46 and locating associations within a specific look-up table or some other form of association element. As such, the network database may include a data storage device identifier and content key look-up table 56 for determining associations between data storage devices 10 and content keys 42. In alternate aspects, the network database may include a data storage device identifier and computing device identifier look-up table 58 for determining associations between data storage devices 10 and computing devices 20. In still further aspects, the network database may include a protected content portion identifier and content key look-up table 60 for determining associations between protected content portions 18 and content keys 42.
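The three look-up tables described above can be pictured as simple key-to-value mappings. The following minimal sketch models them as in-memory Python dicts in place of the network database; the table contents and identifier formats shown are hypothetical, chosen only to illustrate the associations that tables 56, 58 and 60 maintain.

```python
# Table 56: data storage device identifier -> associated content key(s).
device_key_table = {
    "SD-0001": ["CEK-A"],
}

# Table 58: data storage device identifier -> associated computing device id(s).
device_computing_table = {
    "SD-0001": ["CD-9001"],
}

# Table 60: protected content portion identifier -> associated content key(s).
portion_key_table = {
    "18A": ["CEK-A"],
}

def keys_for_device(device_id):
    """Return the content key(s) the personalization module would locate
    for a data storage device identifier, or an empty list if none."""
    return device_key_table.get(device_id, [])
```

In the described system these associations live in network database 46 and are consulted by the personalization logic; the dicts here merely show the shape of each lookup.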
[0046] In some aspects, the network database may also store the protected content 14 for the purpose of initially downloading the protected content to data storage devices and/or updating/replacing the protected content on the data storage devices. For example, if the user of the data storage device 10 misplaces, loses or otherwise no longer has possession of the data storage device, the user may be able to contact the network device and retrieve the protected content based on the computing device identifier or a user identifier that associates either the computing device or the user with the protected content.

Additionally, the network database may also store individual data storage device files 63, which provide the remote storage of parameters, settings and other information related to the protected content. For example, if the protected content 14 is a gaming application, the data storage device files may store game settings, levels of the game achieved, an interrupted game or the like. By providing for remote storage of files 63, the network device may limit the storage ability of the computing device 20 and/or serve as a back-up storage device. For example, if a user loses possession of the data storage device and requires a replacement device or purchases an updated or new version of the initial data storage device, the network device can initially recognize the user or the computing device and apply the settings in the data storage device files 63 to the content found in the replacement device, updated device or new version/sequel device.

The network device may additionally include a monitoring module 65 that includes instructions utilized by the monitoring logic 67 for monitoring the use of content on the computing device 20. In this regard the monitoring module acknowledges the content stored on the data storage device 10 and any other content accessed or otherwise executed by the computing devices.
Additionally, the monitoring module may monitor environmental attributes of the computing device, such as the geographic location of the wireless device, movement of the device, point-in-time of the device, etc. Monitoring of the content accessed or used by the computing devices may be accomplished by uploading logs from the computing device or otherwise communicating with the computing device. Based on content stored on the data storage device and any other content accessed or otherwise executed by the computing device, the recommendation logic 69 will push recommendations for other similar content to the device, such as similar music files, audio files, gaming applications or the like. Additionally, the monitoring module 65 may use the environmental data to base the recommendation on the environmental attributes of the computing device, such as the location of the device, the time of day or the like.

The data repository 52 may additionally include a communication module 64 that includes instructions utilized by the communication logic 66 for receiving identifier communications from computing devices and transmitting content key communications to computing devices. The communications module 64 may be embodied in hardware, firmware, software, and combinations thereof, that enables communications among the various components of the network device 40, as well as between the device 40 and the network 68. The communication module may include the requisite hardware, firmware, software and/or combinations thereof for establishing a wireless and/or wired communication connection.

In one aspect, a method for protected distribution of content in a wireless network environment is provided. Fig. 3 provides a block diagram of an illustrative wireless system, specifically a cellular telephone system. As previously noted, the aspects herein disclosed are not limited to a wireless network environment and may also be implemented in a wired network environment.
Network communication, according to the present aspects, includes, but is not limited to, communicating the identifiers (i.e., the data storage identifier, the computing device identifier and/or the protected content portion identifiers) to the network device and communicating the control keys from the network device to the computing device.

Referring to Fig. 3, in one aspect, computing device 20 comprises a wireless communication device, such as a cellular telephone. A cellular telephone system 70 may include wireless network 72 connected to a wired network 74 via a carrier network 76. Wireless communication devices 20 are being manufactured with increased computing capabilities and often can communicate packets including voice and data over wireless network 72. As described earlier, these "smart" wireless devices 20 have APIs 30 resident on their local computer platform 22 that allow software developers to create software applications that operate on the wireless communication device 20, and control certain functionality on the device. Fig. 3 is a representative diagram that more fully illustrates the components of a wireless communication network and the interrelation of the elements of one aspect of the present system. Wireless network 72 is merely exemplary and can include any system whereby remote modules, such as wireless communication devices 20, communicate over-the-air between and among each other and/or between and among components of a wireless network 72, including, without limitation, wireless network carriers and/or servers.

In system 70, network device 40 can be in communication over a wired network 74 (e.g. a local area network, LAN) with a separate network database 46 for storing content keys 42 and associated look-up tables. Further, a data management server 78 may be in communication with network device 40 to provide post-processing capabilities, data flow control, etc.
Network device 40, network database 46 and data management server 78 may be present on the cellular telephone system 70 with any other network components that are needed to provide cellular telecommunication services. Network device 40 and/or data management server 78 communicate with carrier network 76 through data links 80 and 82, which may be data links such as the Internet, a secure LAN, WAN, or other network. Carrier network 76 controls messages (generally being data packets) sent to a mobile switching center ("MSC") 84. Further, carrier network 76 communicates with MSC 84 by a network 82, such as the Internet, and/or POTS ("plain old telephone service"). Typically, in network 82, a network or Internet portion transfers data, and the POTS portion transfers voice information. MSC 84 may be connected to multiple base stations ("BTS") 86 by another network 88, such as a data network and/or Internet portion for data transfer and a POTS portion for voice information. BTS 86 ultimately broadcasts messages wirelessly to the wireless communication devices 20, by short messaging service ("SMS"), or other over-the-air methods.

Fig. 4 provides a flow diagram of an aspect for provisioning data storage devices with protected content and the association of the data storage devices to the protected content keys. Referring primarily to Fig. 4, and secondarily to Figs. 1 and 2, data element 200 is the content (i) for which the content provider desires protection. As previously discussed, the data may be all of the content that the provider stores on the data storage device or any portion of the content stored on the device. Exemplary content includes, but is not limited to, music files, video files, multimedia files, executable files and the like. Data element 210 is the content encryption key (CEK) (42). In the illustrated aspect, a conventional encryption algorithm, such as an XOR encryption algorithm, generates the CEK, resulting in a random key.
Key length may be determined based on the degree of security desired; in some aspects, a key length of 128 bits may provide the requisite security. It should be noted that content may be encrypted with multiple keys for additional security.

[0054] In alternate aspects, the content encryption key may be generated using the identifier of the data storage device or the computing device. In these aspects, the identifier is used as the "seed" in an encryption algorithm to generate the encryption key. In these alternate aspects, the encryption keys may be generated at further stages in the process, such as at the point the content is stored on the data storage device or at the point the data storage device is placed in communication with the computing device. In such alternate aspects, it may be possible to obviate the need for a "store-forward" approach by storing the key on the data storage device or on the computing device. In such aspects, in which the key is stored and retrieved from the data storage device or the computing device, the need for back-end network storage and retrieval of the content keys is obviated and, hence, not a required element of the process or system.

[0055] At Event 220, the content encryption key is communicated to a network database (46) for subsequent association with data storage device identifiers. The content encryption key may be communicated to the network database electronically over a communications network or the key may be communicated manually by a data entry function.

At Event 230, the encryption key (or encryption keys) (42) are applied to the content resulting in protected content (14), i.e., encrypted content and, at Event 240, the protected content (14) is stored on a data storage device. Data Element 250 represents the unique identifier (16) associated with each data storage device, typically a serial number or the like associated with the data storage device. The unique identifier is stored as metadata within the device memory.
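The two CEK generation approaches described above — a randomly generated 128-bit key, or a key derived using the device identifier as the "seed" — can be sketched as follows. This is a minimal illustration, not the algorithm specified by the text: the use of `secrets` for the random case and of a truncated SHA-256 digest for the identifier-seeded case are assumptions chosen for the sketch.

```python
import hashlib
import secrets

KEY_BITS = 128  # illustrative key length discussed above

def random_cek() -> bytes:
    """Randomly generated 128-bit content encryption key."""
    return secrets.token_bytes(KEY_BITS // 8)

def seeded_cek(identifier: str) -> bytes:
    """CEK derived from a data storage device (or computing device)
    identifier used as the 'seed'. SHA-256 truncated to 128 bits is an
    illustrative derivation, not one named in the description; the point
    is only that the same identifier always yields the same key."""
    return hashlib.sha256(identifier.encode("utf-8")).digest()[: KEY_BITS // 8]
```

The seeded variant shows why later-stage generation is possible: any party holding the identifier can regenerate the key, which is what allows the "store-forward" network lookup to be obviated.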
Additionally, it should be noted that Event 240 may optionally include storing non-protected content (15) on the data storage device. As previously noted, the non-protected data may include a preview of the protected content, and/or additional protected content stored either on the data storage device or on a remote server. Additionally, in aspects in which the non-protected content includes a preview, an embedded link may be provided for access and/or purchase of the full content. The non-protected content may additionally provide for limited-use content that is accessible to the user of the storage device for a limited number of uses.

At Event 260, the data storage identifier, along with the CEK or an appropriate CEK identifier, is communicated to the network database (46) and, at Event 270, the association between the CEK (i) and the data storage device is entered into a corresponding CEK and data storage device look-up table (56). Association between the CEK (i) and the data storage device is accomplished by a personalization module (44) that is executed at a network device and is in network communication with the network database (46).

[0058] At optional Event 280, the data storage device (10), which includes protected content (14) in memory (12), is marketed through a conventional sales outlet or otherwise placed in the commercial marketplace. In other aspects, the data storage device may be procured by a user without a commercial transaction transpiring, for example, in those instances in which the data storage device is not used for commercial gain or is otherwise offered to the user without compensation. At optional Event 290, the data storage device is purchased by a user or otherwise lawfully procured by a user. For example, the data storage device with protected content may be purchased in a commercial sale or transferred to employees of an enterprise.
At optional Event 292, the purchase, lawful procurement, transfer and/or exchange of the data storage device may be authenticated by communicating the sale, procurement, transfer and/or exchange transaction and the device identifier to the network database (46).

Referring primarily to Fig. 5, and secondarily to Figs. 1 and 2, according to one aspect, a process includes the personalization of protected content (14) stored on a data storage device (10) and the subsequent accessing of the content. At Event 400, the data storage device is activated by placing the storage device in communication with a computing device (20). In many aspects, the computing device may include a receptacle for receiving and securing a removable data storage device, such as a CD, DVD, flash media card or the like. However, in alternate aspects, the computing device may include short-range communication functionality, such as IR or Bluetooth(R) communications, that allows for the computing device to read data without coming into physical contact with the storage device. Once the data storage device is initially read, a protected access initiator module (24) that resides on the computing device, on the storage device or on an associated network is executed on the computing device to recognize protected content.

Once protected content has been recognized at the computing device, at Event 410, the computing device communicates the storage device identifier (16), and alternately the computing device identifier (29) associated with the respective computing device (20), to a network device (40). Network device (40), for example, may exist in a wired or wireless network, optionally beyond a suitable firewall (90). Receipt of the storage identifier and, optionally, the computing device identifier, by the network device may invoke the execution of personalization module (44) within the network device.
The personalization module determines associations between the data storage identifiers, computing device identifiers (if any) and content keys. As such, at Event 420, the data storage identifier (16), and alternately the computing device identifier (29), is forwarded from the network device to the network database (46). In some aspects that do not include the computing device identifier, the process may proceed directly to Event 440, as is discussed below. In alternate aspects including the computing device identifier, however, at Event 430, the personalization module determines if the data storage device has been previously associated with the computing device. (See Figs. 8 and 9 and the related discussion for a detailed flow for associating data storage devices to computing devices and determining associations). If the data storage device has not previously been associated with any computing device, or if it has been associated with the computing device corresponding to the current computing device identifier, then the process proceeds to Event 440. At Event 440, the CEK and data storage device identifier look-up table (56) is utilized to retrieve the content key(s) (42) associated with the data storage device identifier. At Event 450, the content encryption key(s) are retrieved from the network database and communicated to the network device and, at Event 460, the network device communicates the content encryption keys to the computing device.

Once the computing device (20) has received the content encryption keys (42), at Event 470, the computing device applies the encryption keys to the protected content to decrypt or otherwise convert the content from a protected/secured form to an unprotected/in-the-clear form. As such, after Event 470, the computing device has access to at least selected portions of the content. At Event 480, the computing device may store the content encryption key in a secure portion of the computing device memory.
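Event 470 — applying the received key(s) to convert protected content to in-the-clear form — can be illustrated with the XOR scheme mentioned earlier in the description. This is a minimal sketch under that assumption, not the actual conversion routine; real deployments would use a stronger cipher, but XOR conveniently shows why the same operation both protects and un-protects the content.

```python
from itertools import cycle

def apply_cek(data: bytes, cek: bytes) -> bytes:
    """XOR the content with the repeating key. Because XOR is its own
    inverse, applying the same key to protected content (Event 470)
    recovers the original bytes."""
    return bytes(b ^ k for b, k in zip(data, cycle(cek)))
```

A round trip demonstrates the symmetry: protecting the content and then applying the same 128-bit key again yields the original media bytes.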
By storing the content key in computing device memory, personalization of the data storage device within the computing device only needs to occur once. Each subsequent use of the data storage device by the computing device may rely on the key stored in the computing device memory for decryption.

Figs. 6 and 7 provide process flow charts, according to one aspect, for personalization of a data storage device having protected content. Referring primarily to Figs. 6 and 7, and secondarily to Figs. 1 and 2, at Event 600, one or more content encryption keys (42) are applied to content and, at Event 610, the content encryption keys are stored at a network database (46). As previously discussed, the content encryption keys may be generated randomly using a conventional random number generating algorithm or the keys may be generated by using the data storage device identifier or the computing device identifier as the "seed" in a random number generator (RNG) algorithm. At Event 620, the protected content (i.e., the encrypted content) is stored on a data storage device that includes a unique identifier. At Event 630, the data storage device identifier is associated with the content encryption key(s) and the association between the identifier and the key(s) is stored at the network database.

At Event 640, the data storage device is obtained by a user who desires access to the content stored on the device. In some aspects, for example, the data storage device may be sold in a commercial transaction. In other aspects, such as in an enterprise, the data storage device may be issued to a user, such as an employee or agent. Upon a purchase or other transaction transferring the device to a user, an authentication of the purchase or transfer can be accomplished by storing the purchase confirmation or transfer confirmation at the network database as procurement data. Optionally, at Event 650, the procurement data is communicated to the network database and stored therein.
For example, in a commercial sale, the procurement data comprises information related to the sale, such as a purchase confirmation or a transfer confirmation, which may be communicated to the database at the point of sale/transfer by automated means, such as via a communications network.

At Event 660, the data storage device is placed in communication with a computing device and the computing device attempts to access data stored on the storage device. At Decision 670, a determination is made as to whether the storage device stores non-protected content. If the data storage device stores non-protected content then, at Event 680, the non-protected content may be accessed on the computing device. If the data storage device does not include non-protected content, or after accessing the non-protected content, then, at Event 690, the computing device may recognize the protected content and, at Event 700, establish network communication with a network device. The network communication connection may be established "seamlessly", i.e., without knowledge of the device user, or the computing device may interface with the user, asking permission to establish the network communication as a means of providing access to protected content.

Once the connection has been established, at Event 710, the data storage device identifier is communicated to the network device. At optional Decision 720, the network device may determine if the right to use the data storage device/content can be verified and/or authenticated. For example, the network device may attempt to determine if the data storage identifier has been placed in a use state, i.e. if the device has been properly sold or transferred to a user, as opposed, for example, to being a device that was stolen and is being used illicitly or out of the control of the entity that controls the use rights associated with the content.
If the procurement cannot be authenticated then, at optional Event 730, the network device sends either a purchase option message to the computing device or an error/access denied message to the user. The purchase option message may allow for the super-distribution of the content on the data storage device by allowing a first user to pass the storage device to a second user, who may then validly obtain access to the protected content by making an ad hoc purchase of the rights. If the rights can be authenticated then, at Decision 740, the network device determines if the storage device identifier is associated with one or more keys. If a determination is made that the data storage device is not associated with an encryption key then, at Event 750, the network device sends an error/access denied message to the computing device.

[0066] If the determination is made that the data storage device is associated with one or more keys, then, at Event 760 (refer to Fig. 7), the key(s) are retrieved from the network database and, at Event 770, the keys are communicated to the computing device. At Event 780, the keys are applied to the protected content to decrypt the content (convert the protected content to un-protected content) and, at Event 790, the computing device grants access to the content. At Event 800, the key(s) are stored in the computing device memory for subsequent decoding of the protected content.

[0067] Figs. 8 and 9 provide process flow charts, according to one aspect, for personalization of a data storage device having protected content and personalization of the storage device to a computing device. Referring primarily to Figs. 8 and 9, and secondarily to Figs. 1 and 2, at Event 900, one or more content encryption keys (42) are applied to content and, at Event 910, the content encryption keys are stored at a network database (46).
At Event 920, the protected content (i.e., the encrypted content) is stored on a data storage device that includes a unique identifier. At Event 930, the data storage device identifier is associated with the content encryption key(s) and the association between the identifier and the key(s) is stored at the network database.

[0068] At Event 940, the data storage device is obtained by a user who desires access to the content stored on the device, as discussed above in detail (see Fig. 6, Event 640). Optionally, at Event 950, information relating to the procurement of the data storage device is communicated to the network database and stored therein.

[0069] At Event 960, the data storage device is placed in communication with a computing device and the computing device attempts to access data stored on the storage device. At Decision 970, a determination is made as to whether the storage device stores non-protected content. If the data storage device stores non-protected content then, at Event 980, the non-protected content may be accessed on the computing device. If the data storage device does not include non-protected content, or after accessing the non-protected content, then, at Event 990, the computing device may recognize the protected content and, at Event 1000, establish network communication with a network device. The network communication connection may be established "seamlessly", i.e., without knowledge of the device user, or the computing device may interface with the user, asking permission to establish the network communication as a means of providing access to protected content.

Once the connection has been established, at Event 1010, the data storage device identifier and the computing device identifier are communicated to the network device. At optional Decision 1020, the network device determines if the rights of the user to the data storage device/content can be authenticated, as discussed above in detail (see Fig. 6, Event 720).
If the rights cannot be authenticated then, at optional Event 1030, the network device sends either a purchase option message to the computing device or an error/access denied message to the user. If the rights can be authenticated then, at Decision 1040, the network device determines if the data storage device is associated with any computing device or a pre-determined maximum number of computing devices. If the determination is made that the data storage device has not been associated with a computing device, or the pre-determined maximum number of computing devices has yet to be attained, then, at Event 1050, the network device stores an association between the computing device and the data storage device.

[0071] If a determination is made that the data storage device is associated with any computing device, or the pre-determined maximum number of computing device associations has been achieved, then, at Decision 1060 (refer to Fig. 9), the network device determines if the data storage device is associated with the currently communicating computing device. If a determination is made that the data storage device is not associated with the currently communicating computing device then, at Event 1070, the network device sends a purchase option message or an error/access denied message to the computing device. If a determination is made that the storage device is associated with the currently communicating computing device then, at Decision 1080, the network device determines if the storage device identifier is associated with one or more keys. If a determination is made that the data storage device is not associated with an encryption key then, at Event 1090, the network device sends an error/access denied message to the computing device.

[0072] If the determination is made that the data storage device is associated with one or more keys then, at Event 1100, the key(s) are retrieved from the network database and, at Event 1110, the keys are communicated to the computing device.
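The association logic of Decisions 1040 through 1080 can be sketched as a bounded device-association check. The data structure, identifier strings, and limit below are illustrative assumptions only; the described aspects do not prescribe a particular representation.

```python
# Hypothetical maximum number of computing devices that may be
# personalized to one data storage device.
MAX_DEVICE_ASSOCIATIONS = 2

# Network-side record: storage-device id -> set of computing-device ids.
associations = {}

def authorize_device(storage_id, computing_id):
    """Return True if this computing device may access the storage
    device's protected content, recording a new association when the
    pre-determined maximum has not yet been attained."""
    devices = associations.setdefault(storage_id, set())
    if computing_id in devices:
        # Decision 1060: already associated with this computing device.
        return True
    if len(devices) < MAX_DEVICE_ASSOCIATIONS:
        # Decision 1040 / Event 1050: store a new association.
        devices.add(computing_id)
        return True
    # Event 1070: purchase option or error/access denied.
    return False
```

With a limit of two, a first and second computing device are each associated and authorized, a third is refused, and a previously associated device remains authorized on later attempts.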
At Event 1120, the keys are applied to the protected content to decrypt the content (converting the protected content to un-protected content) and, at Event 1130, the computing device grants access to the content. At Event 1140, the one or more keys may be stored in a secure portion of the computing device memory for subsequent decoding of the protected content.

[0073] Figs. 10-12 provide process flow charts, according to an alternate aspect, for personalization of a data storage device having protected content. In the described flow the data storage device includes multiple protected content portions, with each portion being individually accessible. Referring primarily to Figs. 10-12, and secondarily to Figs. 1 and 2, at Event 1200, one or more content encryption keys (42) are applied to each content portion and, at Event 1210, the content encryption keys are associated with the corresponding content portion identifiers and the associations are stored at a network database (46). At Event 1220, the protected content portions are stored on a data storage device that includes a unique identifier. At Event 1230, the data storage device identifier is associated with the content encryption key(s) and the association between the storage device identifier and the content key(s) is stored at the network database.

[0074] At Event 1240, the data storage device is obtained by a user who desires access to the content stored on the device, as discussed above in detail. Optionally, at Event 1250, the purchase confirmation or transfer confirmation is communicated to the network database and stored therein. Typically, information relating to the procurement of the data storage device by the user is communicated to the database at the point of sale/transfer by automated means, such as via a communications network.
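The per-portion key associations of Events 1200-1230 can be sketched as a keyed lookup on both identifiers. The two-level mapping and the names below are assumptions for illustration; any database schema that records the (storage device, content portion) association would serve.

```python
# Hypothetical network database keyed on the pair of identifiers.
portion_db = {}

def register_portion(storage_id, portion_id, key):
    """Events 1210/1230: store the content encryption key against both
    the storage-device identifier and the content portion identifier."""
    portion_db[(storage_id, portion_id)] = key

def lookup_portion_key(storage_id, portion_id):
    """Decision 1340: return the key(s) for this device/portion pair,
    or None when no association exists (error/access denied)."""
    return portion_db.get((storage_id, portion_id))
```

Keying on the pair of identifiers means a key is released only when both the device and the specific portion check out, which is the condition tested at Decision 1340.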
[0075] At Event 1260, the data storage device is placed in communication with a computing device and the computing device attempts to access data stored on the storage device. At Decision 1270, a determination is made as to whether the storage device stores non-protected content. If the data storage device stores non-protected content then, at Event 1280, the non-protected content may be accessed on the computing device. If the data storage device does not include non-protected content or after accessing the non-protected content, then, at Event 1290, the computing device may recognize the protected content and, at Event 1300, establish network communication with a network device. The network communication connection may be established "seamlessly", i.e., without knowledge of the device user, or the computing device may interface with the user, asking permission to establish the network communication as a means of providing access to protected content.

[0076] Once the connection has been established, at Event 1310, the data storage device identifier and the first protected content portion identifier are communicated to the network device. At optional Decision 1320, the network device determines if the rights of the user to the data storage device/content can be authenticated, as discussed above in detail. If the rights cannot be authenticated then, at optional Event 1330, the network device sends either a purchase option message to the computing device or an error/access denied message to the user. If the rights can be authenticated then, at Decision 1340 (refer to Fig. 11), the network device determines if the storage device identifier and the first protected content portion identifier are associated with one or more keys.
If a determination is made that the data storage device or the content portion is not associated with an encryption key then, at Event 1350, the network device sends an error/access denied message to the computing device.

If the determination is made that the data storage device and the first protected content portion are associated with one or more predetermined keys then, at Event 1360, the key(s) are retrieved from the network database and, at Event 1370, the keys are communicated to the computing device. At Event 1380, the keys are applied to the first protected content portion to decrypt the first portion of content (convert the protected content to un-protected content) and, at Event 1390, the computing device grants access to the first content portion. At Event 1400, the key(s) may be stored in a secure portion of the computing device memory for subsequent decoding of the first protected content portion.

At Event 1410, the computing device provides a user prompt asking if the user desires to access additional protected content portions. Access to the additional protected content portions may require the user to purchase the protected content portions or otherwise gain a license to access the additional content portions. For example, the additional content portions may be additional audio or video files associated with the initial audio or video file (i.e., the first protected content portion), an additional game level associated with the initial game application, an additional enhancement/feature for the initial game application, or the like. The computing device may be configured to prompt the user periodically or after the user has completed or executed the initial content in its entirety.

At Event 1420, the user elects to access one or more of the additional protected content portions (subsequently referred to herein as the "nth" portion) and, in some aspects, such an election may require additional payment.
In alternate aspects, the additional protected portions may be configured to be automatically accessed without the need for prompting or election (i.e., keys retrieved and applied automatically). Such automatic access may occur at predetermined intervals or upon occurrence of a predetermined event.

At Event 1430 (refer to Fig. 12), a network connection is established between the computing device and a network device. Once the connection has been established, at Event 1440, the data storage device identifier and the "nth" protected content portion identifier are communicated to the network device. At Decision 1450, the network device determines if the storage device identifier and the "nth" protected content portion identifier are associated with one or more keys. If a determination is made that the data storage device or the content portion is not associated with an encryption key then, at Event 1460, the network device sends an error/access denied message to the computing device.

If the determination is made that the data storage device and the "nth" protected content portion are associated with one or more predetermined keys then, at Event 1470, the key(s) are retrieved from the network database and, at Event 1480, the keys are communicated to the computing device. At Event 1490, the keys are applied to the "nth" protected content portion to decrypt the "nth" portion of content (convert the protected content to un-protected content) and, at Event 1500, the computing device grants access to the "nth" content portion. At Event 1510, the key(s) may be stored in a secure portion of the computing device memory for subsequent decoding of the "nth" protected content portion.

Thus, the described aspects provide for methods, devices, apparatus, computer-readable media and processors that protect the distribution of media content.
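The "nth"-portion request of Events 1440 through 1500 amounts to a single guarded lookup, sketched below. The identifier strings, key values, and return messages are illustrative assumptions and not the actual protocol messages.

```python
# Hypothetical network-side key associations per device/portion pair.
portion_keys = {
    ("sd-1", "portion-1"): b"k1",
    ("sd-1", "portion-2"): b"k2",
}

def access_nth_portion(storage_id, portion_id):
    """Events 1440-1500: both identifiers are presented; the key is
    released only when an association exists, otherwise access is
    denied."""
    key = portion_keys.get((storage_id, portion_id))  # Decision 1450
    if key is None:
        return "error/access denied"                   # Event 1460
    return f"decrypted with {key!r}"                   # Events 1480-1500
```

Repeating this lookup for each additional portion supports both the user-elected purchase at Event 1420 and the automatic access described for the alternate aspects, since the check is the same in either case.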
The straightforward approach of the present aspects allows for media content to be encrypted and the associated content encryption keys stored and accessible either remotely at a networked database or internally within data storage device memory. Once the content is encrypted, access to the content encryption keys is granted by determining an association between the content encryption keys and the data storage device identification and, optionally, the computing device identification. The present aspects provide a method for securing a large volume of media content on a data storage device by protecting or encrypting primary or important portions of the content, such as executables or audio/video files, while allowing secondary or less important portions of the content to remain non-protected.

The various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

Further, the steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

While the foregoing disclosure shows illustrative aspects and/or embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the described aspects and/or embodiments as defined by the appended claims. Furthermore, although elements of the described embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. Additionally, all or a portion of any aspect and/or embodiment may be utilized with all or a portion of any other aspect and/or embodiment, unless stated otherwise.

Thus, many modifications and other embodiments of the invention may come to mind to one skilled in the art to which this invention pertains, having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.